Recovery of VMs to new CloudStack instance
We recently came across an issue where a CloudStack instance was beyond recovery, but the backend XenServer hypervisors were quite happily running user VMs and Virtual Routers. As building a new CloudStack instance was the only option, the problem became how to recover all the user VMs to a state where they could be imported into the new CloudStack instance.
Mapping out user VMs and VHD disks
The first challenge is to map out all the VMs and work out which VMs belong to which CloudStack account, and which VHD disks belong to which VMs. To do this first of all recover the original CloudStack database and then query the vm_instance, service_offering, account and domain tables.
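If the original database is only available as a dump file it first needs restoring to a working MySQL server before it can be queried. A minimal sketch, assuming a mysqldump backup named cloud_backup.sql (the file name is an assumption):

# Restore the recovered CloudStack database dump into a local MySQL server
# (cloud_backup.sql is an assumed file name for the recovered dump)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS cloud"
mysql -u root -p cloud < cloud_backup.sql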
In short we are interested in:
- VM instance ID and names
- VM instance owner account ID and account name
- VM instance owner domain ID and domain name
- VM service offering, which determines the VM CPU / memory spec
- VM volume ID, name, size, path, type (root disk or data disk) and state, for all of the VM's disks
At the same time we are not interested in:
- System VMs
- User VMs in state "Expunging", "Expunged", "Destroyed" or "Error". The "Error" state would indicate the VM was not healthy on the original infrastructure.
- VM disk volumes which are in state "Expunged" or "Expunging". Both of these would indicate the VM was in the process of being deleted on the original CloudStack instance.
From a SQL point of view we do this as follows:
SELECT
  cloud.vm_instance.id as vmid,
  cloud.vm_instance.name as vmname,
  cloud.vm_instance.instance_name as vminstname,
  cloud.vm_instance.display_name as vmdispname,
  cloud.vm_instance.account_id as vmacctid,
  cloud.account.account_name as vmacctname,
  cloud.vm_instance.domain_id as vmdomainid,
  cloud.domain.name as vmdomname,
  cloud.vm_instance.service_offering_id as vmofferingid,
  cloud.service_offering.speed as vmspeed,
  cloud.service_offering.ram_size as vmmem,
  cloud.volumes.id as volid,
  cloud.volumes.name as volname,
  cloud.volumes.size as volsize,
  cloud.volumes.path as volpath,
  cloud.volumes.volume_type as voltype,
  cloud.volumes.state as volstate
FROM cloud.vm_instance
  right join cloud.service_offering on (cloud.vm_instance.service_offering_id=cloud.service_offering.id)
  right join cloud.volumes on (cloud.vm_instance.id=cloud.volumes.instance_id)
  right join cloud.account on (cloud.vm_instance.account_id=cloud.account.id)
  right join cloud.domain on (cloud.vm_instance.domain_id=cloud.domain.id)
WHERE cloud.vm_instance.type='User'
  and not (cloud.vm_instance.state='Expunging' or cloud.vm_instance.state='Destroyed' or cloud.vm_instance.state='Error')
  and not (cloud.volumes.state='Expunged' or cloud.volumes.state='Expunging')
ORDER BY cloud.vm_instance.id;
This will return a list of VMs and disks like the following:
vmid | vmname | vminstname | vmdispname | vmacctid | vmacctname | vmdomainid | vmdomname | vmofferingid | vmspeed | vmmem | volid | volname | volsize | volpath | voltype | volstate |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24 | rootvm1 | i-2-24-VM | rootvm1 | 2 | admin | 1 | ROOT | 1 | 500 | 512 | 30 | ROOT-24 | 21474836480 | 34c8b964-4ecb-4463-9535-40afc0bd2117 | ROOT | Ready |
25 | ppvm2 | i-5-25-VM | ppvm2 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 31 | ROOT-25 | 21474836480 | 10c12a4f-7bf6-45c4-9a4e-c1806c5dd54a | ROOT | Ready |
26 | ppvm3 | i-5-26-VM | ppvm3 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 32 | ROOT-26 | 21474836480 | 7046409c-f1c2-49db-ad33-b2ba03a1c257 | ROOT | Ready |
26 | ppvm3 | i-5-26-VM | ppvm3 | 5 | peterparker | 2 | SPDM Inc | 1 | 500 | 512 | 36 | ppdatavol | 5368709120 | b9a51f4a-3eb4-4d17-a36d-5359333a5d71 | DATADISK | Ready |
This now gives us all the information required to import the VM into the right account once the VHD disk file has been recovered from the original primary storage pool.
Recovering VHD files
Using the information from the database query above we now know that, for example, the VM "ppvm3":
- Is owned by the account "peterparker" in domain "SPDM Inc".
- Used to have 1 vCPU @ 500MHz and 512MB vRAM.
- Had two disks:
- A root disk with ID 7046409c-f1c2-49db-ad33-b2ba03a1c257.
- A data disk with ID b9a51f4a-3eb4-4d17-a36d-5359333a5d71.
If we now check the original primary storage repository we can see these disks:
-rw-r--r-- 1 root root 502782464 Nov 17 12:08 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root     13312 Nov 17 11:49 b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd
This should in theory make recovery easy. Unfortunately, due to the nature of XenServer VHD disk chains it's not that straightforward. If we tried to import the root VHD disk as a template this would succeed, but as soon as we try to spin up a new VM from this template we get the "insufficient resources" error from CloudStack. If we trace this back in the CloudStack management log or the XenServer SMlog we will most likely find an error along the lines of "Got exception SR_BACKEND_FAILURE_65 ; Failed to load VDI". The root cause is that we have imported a VHD differencing disk, in common terms a delta or "child" disk. This references a parent VHD disk, which we have not yet recovered.
To fully recover healthy VHD images we have two options:
- If we have access to the original storage repository from a running XenServer we can use the "xe" command line tools to export each VDI image. This method is preferable as it involves fewer copy operations and less manual work.
- If we have no access from a running XenServer we can copy the disk images and use the "vhd-util" utility to merge files.
Recovery using XenServer
VHD file export using the built-in XenServer tools is relatively straightforward. The "xe vdi-export" command can be used to export and merge the disk in a single operation. The first step in the process is to map an external storage repository to the XenServer host (normally the same repository which is used for the upload of the VHD images to CloudStack later on), e.g. an external NFS share.
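As an illustration, one simple way to do this is to mount the NFS share directly on the XenServer host so the exported files land on external storage. A minimal sketch, where the NFS server name and export path are assumptions:

# Mount an external NFS share on the XenServer host to receive the exported VHD files
# (server name and export path below are examples only)
mkdir -p /mnt/vhdexport
mount -t nfs nfs01.example.com:/export/recovery /mnt/vhdexport
cd /mnt/vhdexport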
We now use the vdi-export option as follows:
# xe vdi-export uuid=7046409c-f1c2-49db-ad33-b2ba03a1c257 format=vhd filename=ppvm3root.vhd --progress
[|] ######################################################> (100% ETA 00:00:00)
Total time: 00:01:12
# xe vdi-export uuid=b9a51f4a-3eb4-4d17-a36d-5359333a5d71 format=vhd filename=ppvm3data.vhd --progress
[\] ######################################################> (100% ETA 00:00:00)
Total time: 00:00:00
# ll
total 43788816
-rw------- 1 root root      12800 Nov 18  2015 ppvm3data.vhd
-rw------- 1 root root 1890038784 Nov 18  2015 ppvm3root.vhd
If we now utilise vhd-util to scan the disks we see they are both dynamic disks with no parent:
# vhd-util read -p -n ppvm3root.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sat Jan  1 00:00:00 2000
Creator Application : 'caml'
Creator version     : Major: 0, Minor: 1
Creator OS          : Unknown!
Original disk size  : 20480 MB (21474836480 Bytes)
Current disk size   : 20480 MB (21474836480 Bytes)
Geometry            : Cyl: 41610, Hds: 16, Sctrs: 63
                    : = 20479 MB (21474754560 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xffffefb4|0xffffefb4 (Good!)
UUID                : 4fc66aa3-ad5e-44e6-a4e2-b7e90ae9c192
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 2048
Header version      : 0x00010000
Max BAT size        : 10240
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff44d|0xfffff44d (Good!)

# vhd-util read -p -n ppvm3data.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sat Jan  1 00:00:00 2000
Creator Application : 'caml'
Creator version     : Major: 0, Minor: 1
Creator OS          : Unknown!
Original disk size  : 5120 MB (5368709120 Bytes)
Current disk size   : 5120 MB (5368709120 Bytes)
Geometry            : Cyl: 10402, Hds: 16, Sctrs: 63
                    : = 5119 MB (5368430592 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xfffff16b|0xfffff16b (Good!)
UUID                : 03fd60a4-d9d9-44a0-ab5d-3508d0731db7
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 2048
Header version      : 0x00010000
Max BAT size        : 2560
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff46b|0xfffff46b (Good!)

# vhd-util scan -f -m'*.vhd' -p
vhd=ppvm3data.vhd capacity=5368709120 size=12800 hidden=0 parent=none
vhd=ppvm3root.vhd capacity=21474836480 size=1890038784 hidden=0 parent=none
These files are now ready for upload to the new CloudStack instance.
Note: using the Xen API it is also, in theory, possible to download or upload a VDI image directly from XenServer using the "export_raw_vdi" API call. This can be achieved using a URL like:
https://<account>:<password>@<XenServer IP or hostname>/export_raw_vdi?vdi=<VDI UUID>&format=vhd
At the moment this method unfortunately doesn't download the VHD file as a sparse disk image, hence the VHD image is downloaded at its full original disk size, which makes this a very space-hungry method. It is also a relatively new addition to the Xen API and is marked as experimental. More information can be found at http://xapi-project.github.io/xen-api/snapshots.html.
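For completeness, such a download could be scripted along these lines (the credentials and host name are placeholders, and the caveats above about image size still apply):

# Download a VDI directly via the experimental export_raw_vdi call
# (root password and host name below are placeholders)
curl -k -o ppvm3root_raw.vhd \
  "https://root:password@xenserver01.example.com/export_raw_vdi?vdi=7046409c-f1c2-49db-ad33-b2ba03a1c257&format=vhd"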
Recovery using vhd-util
If all we have access to is the original XenServer storage repository we can utilise the "vhd-util" binary, which can be downloaded from http://download.cloud.com.s3.amazonaws.com/tools/vhd-util (note this is a slightly different version from the one built into XenServer).
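For reference, fetching the utility and making it executable could look like the following (the install location is just an example):

# Download the CloudStack build of vhd-util and make it executable
wget http://download.cloud.com.s3.amazonaws.com/tools/vhd-util
chmod +x vhd-util
mv vhd-util /usr/local/bin/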
If we run this with the "read" option we can find out more information about what kind of disk this is and if it has a parent. For the root disk this results in the following information:
# vhd-util read -p -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Sun Nov 15 23:06:44 2015
Creator Application : 'tap'
Creator version     : Major: 1, Minor: 3
Creator OS          : Unknown!
Original disk size  : 20480 MB (21474836480 Bytes)
Current disk size   : 20480 MB (21474836480 Bytes)
Geometry            : Cyl: 41610, Hds: 16, Sctrs: 63
                    : = 20479 MB (21474754560 Bytes)
Disk type           : Differencing hard disk
Checksum            : 0xffffefe6|0xffffefe6 (Good!)
UUID                : 2a2cb4fb-1945-4bad-9682-6ea059e64598
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 1536
Header version      : 0x00010000
Max BAT size        : 10240
Block size          : 2097152 (2 MB)
Parent name         : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
Parent UUID         : f6f05652-20fa-4f5f-9784-d41734489b32
Parent timestamp    : Fri Nov 13 12:16:48 2015
Checksum            : 0xffffd82b|0xffffd82b (Good!)
From the above we notice two things about the root disk:
- Disk type : Differencing hard disk
- Parent name : cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
I.e. the VM root disk is a delta disk which relies on parent VHD disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.
If we run this against the data disk the story is slightly different:
# vhd-util read -p -n b9a51f4a-3eb4-4d17-a36d-5359333a5d71.vhd
VHD Footer Summary:
-------------------
Cookie              : conectix
Features            : (0x00000002) <RESV>
File format version : Major: 1, Minor: 0
Data offset         : 512
Timestamp           : Mon Nov 16 01:52:51 2015
Creator Application : 'tap'
Creator version     : Major: 1, Minor: 3
Creator OS          : Unknown!
Original disk size  : 5120 MB (5368709120 Bytes)
Current disk size   : 5120 MB (5368709120 Bytes)
Geometry            : Cyl: 10402, Hds: 16, Sctrs: 63
                    : = 5119 MB (5368430592 Bytes)
Disk type           : Dynamic hard disk
Checksum            : 0xfffff158|0xfffff158 (Good!)
UUID                : 7013e511-b839-4504-ba88-269b2c97394e
Saved state         : No
Hidden              : 0

VHD Header Summary:
-------------------
Cookie              : cxsparse
Data offset (unusd) : 18446744073709
Table offset        : 1536
Header version      : 0x00010000
Max BAT size        : 2560
Block size          : 2097152 (2 MB)
Parent name         :
Parent UUID         : 00000000-0000-0000-0000-000000000000
Parent timestamp    : Sat Jan  1 00:00:00 2000
Checksum            : 0xfffff46d|0xfffff46d (Good!)
In other words the data disk is showing up with:
- Disk type : Dynamic hard disk
- Parent name : <blank>
This behaviour is typical for VHD disk chains. The root disk is created from an original template file, hence it has a parent disk, whilst the data disk was just created as a raw storage disk, hence has no parent.
Before moving forward with the recovery it is very important to make copies of both the differencing disks and parent disk to a separate location for further processing.
The full recovery of the VM instance root disk relies on the differencing disk being coalesced, or merged, into the parent disk. Since the parent disk was a template disk it is typically used by a number of differencing disks, and the coalesce process will change this parent disk and render any other differencing disks unrecoverable.
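As a sketch, assuming the original primary storage is an NFS SR mounted on the XenServer host under /var/run/sr-mount/<SR UUID> and /mnt/recovery is a separate working area (both paths are assumptions):

# Copy the child (differencing) disk and its parent disk to a separate working area
# before coalescing - the SR mount point and target directory are examples only
mkdir -p /mnt/recovery/ppvm3
cp /var/run/sr-mount/<SR UUID>/7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd /mnt/recovery/ppvm3/
cp /var/run/sr-mount/<SR UUID>/cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd /mnt/recovery/ppvm3/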
Once we have copied the root VHD disk and its parent disk to a separate location we use the vhd-util "scan" option to verify we have all disks in the disk chain. The "scan" option shows an indented list of disks, which gives a tree-like view of disks and parent disks.
Please note that if the original VM had a number of snapshots there may be more than two disks in the chain. If so, use the process above to identify all the differencing disks and copy them to the same folder.
# vhd-util scan -f -m'*.vhd' -p
vhd=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd capacity=21474836480 size=1758786048 hidden=1 parent=none
   vhd=7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd capacity=21474836480 size=507038208 hidden=0 parent=cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
Once all differencing disks have been copied we can use the vhd-util "coalesce" option to merge the child differencing disk(s) into the parent disk:
# ls -l
-rw-r--r-- 1 root root  507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1758786048 Nov 17 12:47 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
# vhd-util coalesce -n 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
# ls -l
-rw-r--r-- 1 root root  507038208 Nov 17 12:45 7046409c-f1c2-49db-ad33-b2ba03a1c257.vhd
-rw-r--r-- 1 root root 1863848448 Nov 17 13:36 cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd
Note the vhd-util coalesce option has no output. Also note the size change of the parent disk cfe206b7-cacb-4291-a1c0-f248ff53ed4b.vhd.
Now that the root disk has been merged it can be uploaded to CloudStack as a template, allowing the original VM to be rebuilt.
Import of VM into new CloudStack instance
We now have all details for the original VM:
- Owner account details
- VM virtual hardware specification
- Merged root disk
- Data disk
The import process is now relatively straightforward (a scripted sketch using the CloudStack API follows the list below). For each VM:
- Ensure the account is created.
- In the context of the account (either via GUI or API):
- Import the root disk as a new template.
- Import the data disk as a new volume.
- Create a new instance from the uploaded template.
- Once the new VM instance is online attach the uploaded data disk to the VM.
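As an illustration of how this could be automated, the steps above map onto the standard CloudStack API calls registerTemplate, uploadVolume, deployVirtualMachine and attachVolume. A minimal sketch using CloudMonkey, where all IDs, URLs and offering names are placeholders and the exact CLI syntax may vary between versions:

# Register the merged root disk as a template for the owning account (all IDs and URLs are placeholders)
cloudmonkey register template name=ppvm3-root displaytext=ppvm3-root format=VHD \
  hypervisor=XenServer ostypeid=<os type id> zoneid=<zone id> \
  account=peterparker domainid=<domain id> url=http://fileserver.example.com/ppvm3root.vhd

# Upload the data disk as a volume for the same account
cloudmonkey upload volume name=ppdatavol format=VHD zoneid=<zone id> \
  account=peterparker domainid=<domain id> url=http://fileserver.example.com/ppvm3data.vhd

# Deploy a new VM from the registered template with a matching service offering
cloudmonkey deploy virtualmachine name=ppvm3 templateid=<template id> \
  serviceofferingid=<service offering id> zoneid=<zone id> \
  account=peterparker domainid=<domain id>

# Attach the uploaded data volume once the new VM is running
cloudmonkey attach volume id=<volume id> virtualmachineid=<vm id>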
In a larger CloudStack estate the above process is obviously both time-consuming and resource-intensive, but it can to a certain degree be automated. As long as the VHD files were healthy to start with, it allows for successful recovery of XenServer-based VMs between CloudStack instances.
About The Author
Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing and implementing IaaS solutions based on Apache CloudStack.