Incremental Backup using Veeam Backup Solutions fails.

Solution In Progress

Environment

  • Red Hat Virtualization Manager (RHV-M) - 4.4

Issue

  • Incremental Backup using Veeam Backup Solutions fails.
MY_VM:  
Unable to create incremental backup: Cannot backup VM. Checkpoint ID 4bac2a13-ff23-41a7-a717-66281f7678f4 doesn't exist. 
  • Full Backup completes without any error.
  • Virtual Machine has two disks, the OS disk is supplied by the vendor and imported into the VM.

Resolution

  • Incremental backup of the VM fails because one of its disks is a qcow2 image with compat version 0.10, which does not support the persistent dirty bitmaps required for incremental backup.

  • The workaround is to create a copy of the MY_VM_Disk1 disk as MY_VM_Disk1-new and reattach it to the VM.
    The new disk image will have compat version 1.1.

    1. Make sure the VM contains no snapshots.

    2. Shut down the VM.

    3. From the RHV-M UI/Webadmin, navigate to Storage -> Disks.
      Select the disk MY_VM_Disk1 and make a copy of it with the alias MY_VM_Disk1-new.

    4. Once the copy completes, attach MY_VM_Disk1-new to the VM.
      From the RHV-M UI/Webadmin, select the VM and click on the Disks Tab.
      Deactivate the current MY_VM_Disk1 and click on Remove.
      Please DO NOT check the box for "Remove permanently".

      After the disk MY_VM_Disk1 is removed, attach the MY_VM_Disk1-new disk and check the OS flag.

    5. Once MY_VM_Disk1-new is attached, click on Edit and verify that the incremental backup box is checked.
      Then start the VM and verify that the VM operates normally.

    6. Create a full backup, followed by an incremental backup.
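After the new disk is attached, every image in the VM's chain should report compat 1.1. One way to confirm this from a host is to run qemu-img info on each volume and check the "compat" field. A minimal sketch of that check in Python, using illustrative sample output rather than a live system:

```python
import re

def compat_versions(qemu_img_info_output: str) -> list:
    """Extract every 'compat:' value from `qemu-img info --backing-chain` output."""
    return re.findall(r"^\s*compat:\s*(\S+)", qemu_img_info_output, re.MULTILINE)

# Illustrative sample output, shaped like the Diagnostic Steps below.
sample = """\
file format: qcow2
Format specific information:
    compat: 0.10
    compression type: zlib
"""

# Any image still reporting 0.10 will keep breaking incremental backups.
needs_upgrade = [v for v in compat_versions(sample) if v == "0.10"]
print(needs_upgrade)
```

If the list is non-empty after the workaround, the copy did not replace the affected image and the steps above should be rechecked.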

Diagnostic Steps

  • According to the RHV-M engine database, this VM has two disks:
     disk_alias     |              image_guid              |            image_group_id            
--------------------+--------------------------------------+--------------------------------------
 MY_VM_Disk1        | f1396aed-ad60-4774-839f-1de04a16f165 | 03214e96-8ac1-419c-80cf-97754765c039
 MY_VM_Disk2        | 5567c9dd-a582-4e33-929c-25f093acdefd | b85d2797-f08c-4d22-b3c3-ee75c4061d0c
  • The checkpoint_id for both disks is 4bac2a13-ff23-41a7-a717-66281f7678f4.
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select * from vm_checkpoint_disk_map where disk_id IN ( '03214e96-8ac1-419c-80cf-97754765c039', 'b85d2797-f08c-4d22-b3c3-ee75c4061d0c');"
            checkpoint_id             |               disk_id                
--------------------------------------+--------------------------------------
 4bac2a13-ff23-41a7-a717-66281f7678f4 | 03214e96-8ac1-419c-80cf-97754765c039
 4bac2a13-ff23-41a7-a717-66281f7678f4 | b85d2797-f08c-4d22-b3c3-ee75c4061d0c
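The mismatch can be spotted by comparing the checkpoint the engine records in vm_checkpoint_disk_map against the bitmaps actually present in each qcow2 image. A hypothetical sketch of that comparison, with the IDs from this case hard-coded as sample data:

```python
# Checkpoint the engine expects on every disk (from vm_checkpoint_disk_map).
checkpoint_id = "4bac2a13-ff23-41a7-a717-66281f7678f4"

# Bitmaps actually present in each image (as reported by `qemu-img info`);
# sample data mirroring this case, not queried from a live system.
bitmaps_per_disk = {
    "03214e96-8ac1-419c-80cf-97754765c039": [],  # compat 0.10 disk: no bitmaps
    "b85d2797-f08c-4d22-b3c3-ee75c4061d0c": [
        "b442875b-198d-457a-8cc7-8af34e86dc03",
        "4bac2a13-ff23-41a7-a717-66281f7678f4",
    ],
}

# Any disk missing the checkpoint bitmap will fail incremental backup.
missing = [disk for disk, bitmaps in bitmaps_per_disk.items()
           if checkpoint_id not in bitmaps]
print(missing)
```

Here the first disk (the compat 0.10 image) is the one missing the checkpoint bitmap, matching the engine-log errors below.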
  • Checking the engine logs, the bitmap 4bac2a13-ff23-41a7-a717-66281f7678f4 is not found for disk 03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165:
2024-07-25 16:08:17,521+01 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.StartNbdServerVDSCommand] (default task-58262) [d6e5330e-0cdc-4dc5-979e-70ab2bb6089e] Unexpected return value: Status [code=947, message=Bitmap does not exist: "{'reason': 'Bitmap does not exist in /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165', 'bitmap': '4bac2a13-ff23-41a7-a717-66281f7678f4'}"]

2024-07-25 16:08:17,521+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.StartNbdServerVDSCommand] (default task-58262) [d6e5330e-0cdc-4dc5-979e-70ab2bb6089e] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.StartNbdServerVDSCommand' return value 'NbdServerURLReturn:{status='Status [code=947, message=Bitmap does not exist: "{'reason': 'Bitmap does not exist in /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165', 'bitmap': '4bac2a13-ff23-41a7-a717-66281f7678f4'}"]'}'
2024-07-25 16:08:17,521+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.StartNbdServerVDSCommand] (default task-58262) [d6e5330e-0cdc-4dc5-979e-70ab2bb6089e] Command 'StartNbdServerVDSCommand(HostName = baykok, NbdServerVDSParameters:{hostId='ec83d6a2-1593-437c-84a7-2642a6743eda', serverId='51ccad5b-3c6a-4971-95bf-12d93ff2cf13', storageDomainId='dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea', imageId='03214e96-8ac1-419c-80cf-97754765c039', volumeId='f1396aed-ad60-4774-839f-1de04a16f165', readonly='true', discard='true', detectZeroes='true', backingChain='true', bitmap='4bac2a13-ff23-41a7-a717-66281f7678f4'})' execution failed: VDSGenericException: VDSErrorException: Failed to StartNbdServerVDS, error = Bitmap does not exist: "{'reason': 'Bitmap does not exist in /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165', 'bitmap': '4bac2a13-ff23-41a7-a717-66281f7678f4'}", code = 947

2024-07-25 16:08:17,521+01 ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-58262) [d6e5330e-0cdc-4dc5-979e-70ab2bb6089e] Failed to start NBD server for image transfer '51ccad5b-3c6a-4971-95bf-12d93ff2cf13': {}: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to StartNbdServerVDS, error = Bitmap does not exist: "{'reason': 'Bitmap does not exist in /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165', 'bitmap': '4bac2a13-ff23-41a7-a717-66281f7678f4'}", code = 947 (Failed with error unexpected and code 16)
Caused by: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to StartNbdServerVDS, error = Bitmap does not exist: "{'reason': 'Bitmap does not exist in /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165', 'bitmap': '4bac2a13-ff23-41a7-a717-66281f7678f4'}", code = 947
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2121)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.startImageTransferSession(TransferDiskImageCommand.java:1084)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleImageIsReadyForTransfer(TransferDiskImageCommand.java:681)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeCommand(TransferDiskImageCommand.java:518)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1174)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1332)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2010)
        at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:140)
  • Checking qemu-img info, the compat version of the first disk (03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165) is 0.10, and no bitmaps are listed.
# qemu-img info -U --backing-chain /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165
image: /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/03214e96-8ac1-419c-80cf-97754765c039/f1396aed-ad60-4774-839f-1de04a16f165
file format: qcow2
virtual size: 4 GiB (4300210176 bytes)
disk size: 332 MiB
cluster_size: 4096
Format specific information:
    compat: 0.10  <<<=======================
    compression type: zlib
    refcount bits: 16
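The compat value maps directly to the qcow2 header: compat 0.10 is version 2 of the on-disk format and compat 1.1 is version 3, and only version 3 images can carry the persistent-bitmaps extension. The version can be read straight from the image header (the magic bytes "QFI\xfb" followed by a big-endian 32-bit version field); a minimal sketch, with the path being illustrative:

```python
import struct

def qcow2_version(path: str) -> int:
    """Return the qcow2 header version (2 => compat 0.10, 3 => compat 1.1)."""
    with open(path, "rb") as f:
        header = f.read(8)
    # Header layout: 4-byte magic "QFI\xfb", then a big-endian uint32 version.
    magic, version = struct.unpack(">4sI", header)
    if magic != b"QFI\xfb":
        raise ValueError(path + " is not a qcow2 image")
    return version
```

Run against each volume under /rhev/data-center/... on a host, any image returning version 2 is one that needs the copy workaround above.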
  • The qemu-img info of the second disk (b85d2797-f08c-4d22-b3c3-ee75c4061d0c/5567c9dd-a582-4e33-929c-25f093acdefd) shows compat version 1.1, and bitmaps are listed.
# qemu-img info -U --backing-chain /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/b85d2797-f08c-4d22-b3c3-ee75c4061d0c/5567c9dd-a582-4e33-929c-25f093acdefd
image: /rhev/data-center/mnt/Xx.XXX.XXX.XXX/dc9f94d8-a0d5-4868-8bf8-88ab7eb896ea/images/b85d2797-f08c-4d22-b3c3-ee75c4061d0c/5567c9dd-a582-4e33-929c-25f093acdefd
file format: qcow2
virtual size: 500 GiB (536870912000 bytes)
disk size: 31.9 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    bitmaps:
        [0]:
            flags:
                [0]: in-use
                [1]: auto
            name: b442875b-198d-457a-8cc7-8af34e86dc03
            granularity: 65536
        [1]:
            flags:
                [0]: in-use
                [1]: auto
            name: 981da146-484c-41f7-b2e1-6d02bcc3b7f7
            granularity: 65536
        [2]:
            flags:
                [0]: in-use
                [1]: auto
            name: 2bb2d4f3-b0e6-4c6a-b0a3-8f45d23e299c
            granularity: 65536
        [3]:
            flags:
                [0]: in-use
                [1]: auto
            name: 3233a934-a64d-401e-8e97-c0139a47bda3
            granularity: 65536
        [4]:
            flags:
                [0]: in-use
                [1]: auto
            name: 3a66f127-e518-4802-951f-63a627672732
            granularity: 65536
        [5]:
            flags:
                [0]: in-use
                [1]: auto
            name: 3ea35bde-0154-495c-9434-58a6741f4fd0
            granularity: 65536
        [6]:
            flags:
                [0]: in-use
                [1]: auto
            name: 4bac2a13-ff23-41a7-a717-66281f7678f4
            granularity: 65536

    refcount bits: 16
    corrupt: false
    extended l2: false

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
