In this thread you mentioned moving another VM backup from one host to another. We are interested in moving one large VM from Host1 to Host2 with minimal downtime for the moved VM. (We could use vCenter, but we would need to shut down the VM and copy it over.)
Currently we are backing up the VM on Host1. We would like to restore it to the new Host2. Can this be done while the VM is still running on Host1, and then shut down the VM on Host1 and bring it up on Host2?
Also, would it be possible to restore the VM to Host2 and make it a clone of the original VM running on Host1, i.e. VM2 on Host2? Again with minimal downtime?
I think the easiest way is to create a VM with the same name and same disk size on Host2, and use the same host id to initialize on Host2:
vertical init host1 s3://mybucket # note that the host id is host1 although this runs on host2
vertical restore vmname -r 1 vmname-flat.vmdk # restore only the disk file
If you take this route then make sure you only do backup on host1 and restore on host2.
Just wanted to clarify… if I wanted to clone the original VM and create a new VM on another VM host, should I do the following:
Currently I have DB2 (a MySQL slave replicating from DB1) on HOST1, and my goal is to clone DB2 to HOST2 and have it become DB3.
Create DB3 on HOST2 with the same disk size and configuration as DB2.
On HOST2:
vertical init host1 s3://mybucket
vertical restore DB2 -r 1 DB2-flat.vmdk
I think I’m missing something here:
How should I power up the clone and change its network configuration to avoid IP conflicts?
Is there also a way to apply delta changes from the latest backup to the new clone after the initial restore?
The steps are correct except that the VM on HOST2 must be named DB2 too. If you don’t want to use the same name then you can restore the disk file to a different path and then copy over the restored disk file.
Also, you need to specify the full path of the disk file, like this (the path below is only an example; take the exact one from vertical list -f):
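vertical restore DB2 -r 1 vmfs/volumes/datastore1/DB2/DB2-flat.vmdk # hypothetical datastore and path; substitute the path reported by vertical list -f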
The network configurations do not matter – you can also change them after the new VM boots up.
Once you have the two machines running side by side you can still back up the original and then apply the latest backup to the clone (which must be shut down for the restore operation).
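As a rough sketch of that refresh cycle (the backup command form, revision number, and disk path are assumptions based on the commands above, not taken from a verified run):
vertical backup DB2 # on HOST1, while the original keeps running
vertical restore DB2 -r 2 vmfs/volumes/datastore1/DB2/DB2-flat.vmdk # on HOST2, with the clone shut down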
So I went the route of keeping the same name on HOST2 as I had set it up on HOST1.
This restore command did not work for me:
vertical restore MYDB@vm2 -r 2 MYDB-flat.vmdk
This worked, but with an error:
vertical restore -r 2 -f MYDB@vm2 MYDB file MYDB-flat.vmdk
ERROR:
Removing all snapshots of MYDB
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/8e/07/8b735c03990ff577b1b14d5b38bd666bd729a5a2a0b0c6d8d3dfc426aaa5.23547951’: No space left on device
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/63/b0/371bc149ee5cc734ac84b38af2ab80d0d17efbf01eb3f0bf589ce5700799.166d3f85’: No space left on device
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/bf/ad/5c6b9efda2910d1a16f196deea04b1f581ec4bc575821a20428c7a303e36.286f05ad’: No space left on device
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/e5/44/d852c8e3cf33eb7b3b9aa5b822bdd7817d7d70a07b59ff605e3ca2fdda10.ba35847b’: No space left on device
Backup MYDB@vm2 at revision 1 has been successfully restored to /vmfs/volumes/DatastoreHDD/MYDB
Total 0 chunks, 0 bytes; 0 new, 0 bytes, 0 downloaded
Total restore time: 00:00:06
What's weird is that I get this "No space left on device" message, but storage space isn't an issue anywhere; there is plenty of disk space on all hosts.
Those messages were just warnings. The real problem was that the full path of the disk file wasn't specified. Run ./vertical list -f to list all files in the backup, and then pass the full path to the restore command. In my test case it is vmfs/volumes/datastore1/vm-test/vm-test-flat.vmdk.
It ran out of space because /opt was on a ramdisk with very limited space. This command lists info about all ramdisks:
esxcli system visorfs ramdisk list
You can use the --tmp-dir option of the init command to move the cache directory to a directory under the datastore.
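For example (the cache path here is made up; keep your own host id and storage URL):
vertical init host1 s3://mybucket --tmp-dir /vmfs/volumes/DatastoreHDD/vertical-cache # cache now lives on the datastore, not the ramdisk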
That being said, these messages are indeed very annoying and I’ll fix it in the next release.
Looks like that did the trick. It seems to be working now. I get the following message:
The destination directory ‘/vmfs/volumes/VM2-SSD/MYDB’ does not exist; use ‘/vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD/MYDB’ instead
The volumes are different. I don’t think that would be an issue?
Also, the restore (this is the first one) of roughly 800GB could take hours. Will it be faster if, after the initial restore is done, I do a second restore from a newer backup?
That could be an issue. Is it possible to create a symlink ‘/vmfs/volumes/VM2-SSD/MYDB’ to make it point to the directory where the disk file for the new VM is? Otherwise, you’re not writing directly to the VM but instead creating a new disk file which must be added to the new VM manually.
Yes, the second restore should be much faster than the initial one.
The standard way would be to create a symlink VM2-SSD that points to DatastoreHDD:
cd /vmfs/volumes
ln -s DatastoreHDD VM2-SSD
However, that doesn't seem to work on ESXi. The workaround is to make /vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD/MYDB point to the actual VM directory:
mkdir -p /vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD
cd /vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD
ln -s /vmfs/volumes/DatastoreHDD/MYDB MYDB
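As a sanity check, you can confirm that the nested path now resolves to the VM directory:
ls -ld /vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD/MYDB # should show a symlink to /vmfs/volumes/DatastoreHDD/MYDB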
The backed-up DB is around 800GB. We have a volume that is 1.1TB, so we created MYDB for HOST2 on that volume, which should leave roughly 300GB to spare.
The issue is that the restore did not finish and failed with "No space left on device". It filled up the 1.1TB volume while still performing the restore.
Here is where it was at before failing:
The destination directory ‘/vmfs/volumes/VM2-SSD/MYDB’ does not exist; use ‘/vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD/MYDB’ instead
Downloading ***********************----------------------------------------------------------------------------------------------------------- 12.29MB/s 13:19:18 17.8%
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/2c/23/fab2390f0591ab89c96e8037eba04f91e9c7f109096b5ae4afbb817fa727.b78d39e3’: No space left on device
Downloading 12.29MB/s 13:19:17 17.8%
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/24/45/acb03695995c87f3487c5bb54849ca97bb19d0b96ef8447afcc027a3bcb5.e12b5a17’: No space left on device
Downloading 12.29MB/s 13:19:17 17.8%
Failed to save ‘/opt/verticalbackup/.verticalbackup/cache/chunks/12/3f/fd29242c5fda0f41363ab8f7f4c453a15945f9ddd438c57d77fd086c71db.53753ddc’: No space left on device
Downloading 12.29MB/s 13:19:17 17.8%
Downloading 12.29MB/s 13:19:17 17.8%
Downloading 12.29MB/s 13:19:16 17.8%
Downloading 12.29MB/s 13:19:16 17.8%
Downloading 12.29MB/s 13:19:15 17.8%
Downloading 12.29MB/s 13:19:15 17.8%
Downloading 12.29MB/s 13:19:16 17.8%
…
Downloading 12.65MB/s 10:06:13 35.8%
Downloading 12.65MB/s 10:06:13 35.8%
Downloading 12.65MB/s 10:06:13 35.8%
Downloading 12.65MB/s 10:06:13 35.8%
Failed to write to ‘/vmfs/volumes/DatastoreHDD/MYDB/vmfs/volumes/VM2-SSD/MYDB/MYDB_3-flat.vmdk’: No space left on device
Chunk 06a4e3a01f50ec33ff788069edf6dedcbd13a0d78d1e14aacedd22e5d24118d6 cannot be found in the storage
That was probably because restore wasn’t writing to the same disk file. It looks like the disk file for the new VM is called MYDB-flat.vmdk while it is MYDB_3-flat.vmdk for the original.
You can remove the incomplete MYDB_3-flat.vmdk and temporarily rename MYDB-flat.vmdk to MYDB_3-flat.vmdk before the restore, and change back to MYDB-flat.vmdk after it is done.
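As concrete steps (assuming the new VM's files live in /vmfs/volumes/DatastoreHDD/MYDB, per the restore output above):
cd /vmfs/volumes/DatastoreHDD/MYDB
rm MYDB_3-flat.vmdk # drop the incomplete file left by the failed restore
mv MYDB-flat.vmdk MYDB_3-flat.vmdk # temporarily match the name stored in the backup
# run the restore again, then rename back:
mv MYDB_3-flat.vmdk MYDB-flat.vmdk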
Or you can take the other approach of restoring the vm to a directory under the datastore:
vertical init host2 s3://mybucket # now you can use host2 as the host id
mkdir -p /vmfs/volumes/DatastoreHDD/restore
vertical restore /vmfs/volumes/DatastoreHDD/restore -r 1 --restore-from MYDB@host1
You can then create the new VM from the disk file under /vmfs/volumes/DatastoreHDD/restore.
Of course, to go this route you need to first delete the MYDB vm on host2 to free up space.
The restore ran for a while. If I want to refresh the restored data from a newer backup, how can I do that without starting the process over, i.e. only apply the delta changes?
You can run the vim-cmd vmsvc/snapshot.create command manually to see if it gives you the same error. If so, do a snapshot consolidation from the vSphere Client. The vmware.log file under the VM folder may have more information.
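For example (use the vmid that getallvms reports for your VM; the snapshot name is arbitrary):
vim-cmd vmsvc/getallvms # find the vmid of the VM
vim-cmd vmsvc/snapshot.create <vmid> test-snapshot # try creating a snapshot by hand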
When you run the second restore, just make sure it writes to the same file as the first restore. It will just download delta changes if it is the same file.
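For example (revision 2 stands in for whatever the newest revision is), rerun the earlier restore with the same destination so only changed chunks are downloaded:
vertical restore /vmfs/volumes/DatastoreHDD/restore -r 2 --restore-from MYDB@host1 # same target directory as the first restore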
Don't delete any files there. Those delta.vmdk files are snapshot files which should go away after you do a snapshot consolidation. You can try running vim-cmd vmsvc/snapshot.removeall, but that doesn't seem as reliable as doing it from the vSphere Client.