KVM incremental backup

From Do you speak Drupalish? Featured Drupal wiki-like documentation

Resources

Workflow sample

Backup

  • Requirements: QEMU 2.5 (e.g. on Ubuntu Xenial, or Proxmox)
  • A VM named r1
  • Justification (quoted from the QEMU dirty-bitmap documentation):

Bitmaps can be safely modified when the VM is paused or halted by using the basic QMP commands. For instance, you might perform the following actions:

  1. Boot the VM in a paused state.
  2. Create a full drive backup of drive0.
  3. Create a new bitmap attached to drive0.
  4. Resume execution of the VM.
  5. Incremental backups are ready to be created.

At this point, the bitmap and drive backup would be correctly in sync, and incremental backups made from this point forward would be correctly aligned to the full drive backup.
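The paused-state steps above can be sketched as QMP payloads; the VM name r1 and node drive-virtio-disk0 follow this page's setup, and keeping the payloads in shell variables (so they can be inspected before sending) is just an illustrative choice, not part of the original recipe:

```shell
# Payloads for the paused-VM initialization (steps 2 and 3 above):
# first a full backup of the drive, then a fresh dirty bitmap.
FULL_BACKUP='{ "execute": "drive-backup",
  "arguments": { "device": "drive-virtio-disk0",
                 "target": "/var/backup/kvm/r1/full.qcow2",
                 "sync": "full", "format": "qcow2" } }'
ADD_BITMAP='{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive-virtio-disk0", "name": "bitmap0" } }'

# With the guest paused (virsh suspend r1), send each payload, then resume:
#   virsh qemu-monitor-command r1 "$FULL_BACKUP"
#   virsh qemu-monitor-command r1 "$ADD_BITMAP"
#   virsh resume r1
```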

This approach is not much use if we decide we want to start incremental backups after the VM has already been running for a while; in that case we instead need to do the following:

  1. Boot the VM and begin execution.
  2. Using a single transaction, perform the following operations:
    • Create bitmap0.
    • Create a full drive backup of drive0.
  3. Incremental backups are now ready to be created.

virsh qemu-monitor-command r1 --pretty '{ "execute": "block-dirty-bitmap-add",
  "arguments": {
    "node": "drive-virtio-disk0",
    "name": "bitmap0",
    "granularity": 131072
  }
}'
{
    "return": {

    },
    "id": "libvirt-21"
}

virsh qemu-monitor-command r1 '{"execute":"query-block"}'
  • Run it again after adding the bitmap to see the dirty-bitmaps entry.
  • Beware the false friend "device0"; the correct node name here is "drive-virtio-disk0".
  • If the bitmap already exists, remove it first. Check with: virsh qemu-monitor-command r1 '{"execute":"query-block"}' | grep bitmap0
virsh qemu-monitor-command r1 --pretty '{ "execute": "block-dirty-bitmap-remove",
  "arguments": {
    "node": "drive-virtio-disk0",
    "name": "bitmap0"
  }
}'
virsh qemu-monitor-command r1 --pretty '{ "execute": "transaction",
  "arguments": {
    "actions": [
      { "type": "block-dirty-bitmap-add",
        "data": { "node": "drive-virtio-disk0", "name": "bitmap0" } },
      { "type": "drive-backup",
        "data": { "device": "drive-virtio-disk0",
                  "target": "/var/backup/kvm/r1/full.qcow2",
                  "sync": "full", "format": "qcow2" } }
    ]
  }
}'
  • Check with: virsh qemu-monitor-command r1 '{"execute":"query-block"}' | grep bitmap0
  • The VM must be running: this is a live backup. Otherwise you get: error: Requested operation is not valid: domain is not running
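Before creating increments it helps to know the full drive-backup job has finished. One option (an assumption on my part, not part of the original notes) is polling the QMP query-block-jobs command; the payload is kept in a variable so it can be checked before sending:

```shell
# QMP payload to list running block jobs; an empty "return" list means the
# drive-backup job is no longer running (check the target file too, since a
# failed job also disappears from the list).
QUERY_JOBS='{ "execute": "query-block-jobs" }'

# Send it, repeating until the job disappears:
#   virsh qemu-monitor-command r1 "$QUERY_JOBS"
```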
root@ns374227:/var/lib/libvirt/images# sudo virsh qemu-monitor-command r1 '{ "execute": "transaction",
 "arguments": {
 "actions": [
 {"type": "block-dirty-bitmap-add",
 "data": {"node": "drive-virtio-disk0", "name": "bitmap0"} },
 {"type": "drive-backup",
 "data": {"device": "drive-virtio-disk0",
 "target": "/var/backup/kvm/r1/full.qcow2",
 "sync": "full", "format": "qcow2"} }
 ]
 }
}'
{"id":"libvirt-14","error":{"class":"GenericError","desc":"Could not create file: Permission denied"}}
The "Permission denied" above comes from the user the QEMU process runs as. Set it in /etc/libvirt/qemu.conf and restart libvirtd (running QEMU as root is a blunt workaround; giving the backup directory to the QEMU user is cleaner):

# Some examples of valid values are:
#
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#
user = "root"

# The group for QEMU processes run by the system instance. It can be
# specified in a similar way to user.
group = "root"
  • Another possible cause is AppArmor; disable it temporarily to test. A denial looks like this:
    Sep 19 05:04:15 r2 kernel: [ 303.388051] audit: type=1400 audit(1474275855.490:23): apparmor="DENIED" operation="mknod" profile="libvirt-fd299e8b-3d76-4097-85af-0bf4ebd50e6a" name="/var/backup/kvm/r1/full.qcow2" pid=3001 comm="qemu-system-x86" requested_mask="c" denied_mask="c" fsuid=0 ouid=0
    • This worked after a reboot.
qemu-img create -f qcow2 inc.0.qcow2 -b full.qcow2 -F qcow2

Formatting 'inc.0.qcow2', fmt=qcow2 size=21474836480 backing_file=full.qcow2 backing_fmt=qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

  • http://wiki.qemu.org/Documentation/CreateSnapshot#Create_a_snapshot
    • Snapshots in QEMU are images that refer to an original image using Redirect-on-Write [1] to avoid changing the original image. If you want to create a snapshot of an existing image called centos-cleaninstall.img, create a new QCow2 file using the -b flag to indicate a backing file. The new image is now a read/write snapshot of the original image -- any changes to snapshot.img will not be reflected in centos-cleaninstall.img.
SAMPLE, DO NOT EXECUTE: qemu-img create -f qcow2 -b centos-cleaninstall.img snapshot.img
    • At this point, you would run QEMU against snapshot.img. Making any changes to its backing file (centos-cleaninstall.img) will corrupt this snapshot image.
root@ns374227:/var/lib/libvirt/images# ls -lsh
1.4G -rw-r--r-- 1 root root 1.4G Sep  6 23:03 full.qcow2
196K -rw-r--r-- 1 root root 193K Sep  6 23:08 inc.0.qcow2
sudo virsh qemu-monitor-command r1 '
{ "execute": "drive-backup",
  "arguments": {
    "device": "drive-virtio-disk0",
    "bitmap": "bitmap0",
    "target": "/var/backup/kvm/r1/inc.0.qcow2",
    "format": "qcow2",
    "sync": "incremental",
    "mode": "existing"
  }
}'
{"id":"libvirt-13","error":{"class":"GenericError","desc":"Bitmap 'bitmap0' could not be found"}}
  • This error means the VM was rebooted: on these QEMU versions dirty bitmaps do not survive a restart, so the bitmap must be added again, and a fresh full backup taken so the incremental chain stays aligned.
  • Full paths are required for the target.
qemu-img create -f qcow2 inc.1.qcow2 -b inc.0.qcow2 -F qcow2
sudo virsh qemu-monitor-command r1 '
{ "execute": "drive-backup",
  "arguments": {
    "device": "drive-virtio-disk0",
    "bitmap": "bitmap0",
    "target": "/var/backup/kvm/r1/inc.1.qcow2",
    "format": "qcow2",
    "sync": "incremental",
    "mode": "existing"
  }
}'
root@ns374227:/var/lib/libvirt/images# ls -lsh
total 5.3G
2.7G -rw------- 1 root root  21G Sep  6 23:26 b1.img
2.1G -rw-r--r-- 1 root root 2.1G Sep  6 23:14 full.qcow2
604M -rw-r--r-- 1 root root 604M Sep  6 23:18 inc.0.qcow2
 76M -rw-r--r-- 1 root root  76M Sep  6 23:26 inc.1.qcow2

  • virsh qemu-monitor-command b1 '{"execute":"query-block"}'
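Each new increment just needs the next file in the chain. A small helper (hypothetical, using this page's inc.N.qcow2 naming) picks the next free name so repeated runs extend the chain instead of overwriting the last increment:

```shell
# Hypothetical naming helper: find the next unused inc.N.qcow2 in a backup
# directory by counting upward until a name is free.
next_inc() {
  dir=$1
  n=0
  while [ -e "$dir/inc.$n.qcow2" ]; do n=$((n + 1)); done
  echo "inc.$n.qcow2"
}

# Usage against this page's layout (commands shown, not run here):
#   next_inc /var/backup/kvm/r1        # e.g. prints inc.2.qcow2
#   qemu-img create -f qcow2 /var/backup/kvm/r1/inc.2.qcow2 \
#       -b /var/backup/kvm/r1/inc.1.qcow2 -F qcow2
#   qemu-img info --backing-chain /var/backup/kvm/r1/inc.2.qcow2
```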

Workflow restore

  • Via John Snow (jsnow@redhat.com):
  • root@s1:/var/backup/kvm/r1# qemu-img convert -f qcow2 -O qcow2 inc.1.qcow2 r1.qcow2
    • This will "convert" the top-most incremental backup from qcow2 to... qcow2, but in doing so it opens all of the backing files (inc3, 2, 1, 0, base) and creates a composite image that can be used to replace the failed VM image.
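The restore step above can be sketched end-to-end. The VM name and paths are this page's examples; the shutdown/replace steps around the convert are my assumption about a typical libvirt setup, and the helper only prints the commands (a dry run) so it is safe to inspect:

```shell
# Dry-run sketch: print the commands that would flatten the backup chain
# and install the restored image. Replace echo with eval to execute.
restore_r1() {
  chain_top=$1        # top-most incremental, e.g. inc.1.qcow2
  dest=$2             # where the flattened image should go
  echo "virsh shutdown r1"
  # convert walks the whole backing chain (inc.1 -> inc.0 -> full) and
  # writes one standalone image:
  echo "qemu-img convert -f qcow2 -O qcow2 $chain_top $dest"
  echo "virsh start r1"
}

restore_r1 /var/backup/kvm/r1/inc.1.qcow2 /var/lib/libvirt/images/r1.qcow2
```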

AppArmor issue

I've managed to get this working by creating /var/lib/libvirt/qemu/channel/target with appropriate ownership:

# mkdir -p /var/lib/libvirt/qemu/channel/target   (already exists)
# chown -R libvirt-qemu:kvm /var/lib/libvirt/qemu/channel

and adding the following to the bottom of /etc/apparmor.d/abstractions/libvirt-qemu:

  /var/lib/libvirt/qemu/channel/target/* rw,

(I'm not an apparmor expert, so there may well be a better way of doing this.)
While adding this to /etc/apparmor.d/abstractions/libvirt-qemu certainly is a viable workaround:
  /var/lib/libvirt/qemu/channel/target/* rw,

it is not the proper fix, because it breaks guest isolation (guests can access other guests' target files). It seems virt-aa-helper should be adjusted to ascertain the name of the 'target' and update /etc/apparmor.d/libvirt/libvirt-<uuid>.files accordingly.
  • /var/lib/libvirt/qemu/channel/target/* rw works, but it is just a workaround.
Still perplexing that it did work for me. Perhaps adding "/var/lib/libvirt/qemu/channel/target/${domain-name}**" to the whitelist in the same place in the same patch would fix everyone and still be safe.