restore thick vs thin size and location

hi again -

once my backup completed, I tried a restore to a remote ESXi box over a 100 Mbps connection. The restore got about halfway:

Downloading **************************************************---------------------------------------- 24.36MB/s 02:35:54 55.8%
Failed to write to '/vmfs/volumes/datastore1/vmfs/volumes/datastore1/webuzo1/owncloudConv-flat.vmdk': No space left on device

So I looked at the destination, which started with 500 GB free - and this VM takes less than 100 GB (thin provisioned). Sure enough, the datastore is now completely full and the VM was only halfway restored:

VMFS-5     1016028200960 1016028200960          0 100% /vmfs/volumes/datastore1

So, using the datastore browser in the ESXi client interface, it is possible to see the actual on-disk size of thin-provisioned drives.

Here’s the original:

http://privt.s3.amazonaws.com/webuzo-orig.png

Here’s the one being restored (that was interrupted before completion):

http://privt.s3.amazonaws.com/webuzo-restored.png

You can see the difference: the filesystem reports the first correctly at a total size of 48.09 GB, while the second is taking far more disk space because the restore is no longer respecting the thin provisioning.
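The distinction at play here is apparent size vs. actual blocks allocated, which is what the first column of `ls -ls` shows. A quick demo on an ordinary Linux filesystem (not ESXi/VMFS; paths are made up, and `truncate` is GNU coreutils rather than busybox):

```shell
# A sparse file has a large apparent size but allocates almost no blocks.
truncate -s 1G /tmp/sparse-demo.img
# A file of explicitly written zeros allocates every block.
dd if=/dev/zero of=/tmp/full-demo.img bs=1M count=16 status=none

ls -l /tmp/sparse-demo.img    # size field: 1073741824 (1 GiB apparent)
du -k /tmp/sparse-demo.img    # blocks actually used: close to 0
du -k /tmp/full-demo.img      # blocks actually used: ~16384 KB

rm -f /tmp/sparse-demo.img /tmp/full-demo.img
```

This is why a restore that writes every byte, zeros included, inflates a thin disk to its full provisioned size.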

Anyway, I thought I’d let you know. For this project I must find another solution, as I’ve spent too much time on what should be a simple transfer of a VM from one machine to another.

However, if you can get this and some of the other issues that we’ve discussed fixed, I may try VB again for regular backups.

Thanks

If the vmdk file already exists at the destination, then Vertical Backup will respect the thin provisioning. So the workaround is to create a dummy disk as a sparse file beforehand:

[root@esxi55:/opt/vertical] dd if=/dev/zero of=/vmfs/volumes/datastore1/vm-test/vm-test-flat.vmdk seek=16G bs=1 count=1
1+0 records in
1+0 records out
[root@esxi55:/opt/vertical] ls -lsh /vmfs/volumes/datastore1/vm-test/
total 1024
  1024 -rw-r--r--    1 root     root       16.0G Jul 11 18:06 vm-test-flat.vmdk

Now Vertical Backup will not write unnecessary zeros to the existing file:

...
Backup vm-test@esxi55 at revision 1 has been successfully restored to /vmfs/volumes/datastore1/vm-test/
Total 16387 chunks, 16384.00M bytes; 1974 new, 1974.00M bytes, 921.99M downloaded
Total restore time: 00:01:05
[root@esxi55:/opt/vertical] ls -lsh /vmfs/volumes/datastore1/vm-test/
total 2124800
2124800 -rw-r--r--    1 root     root       16.0G Jul 11 17:53 vm-test-flat.vmdk

The disk file now occupies only about 2 GB of actual space instead of 16 GB.

The fix will work the same way: if the destination file doesn't exist, create a sparse file of the same size before restoring into it.
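That pre-create-then-write approach can be sketched in plain shell. Everything here is illustrative - the path and disk size are made up, and this runs on a regular Linux filesystem rather than VMFS - but it shows the two steps: seek-and-write one byte to create a sparse file of the right size, then write chunks at their offsets with `conv=notrunc` so the file size and sparseness are preserved:

```shell
#!/bin/sh
# Hypothetical destination and disk size, for illustration only.
DEST=/tmp/demo-flat.vmdk
SIZE=$((64 * 1024 * 1024))   # pretend the virtual disk is 64 MiB

if [ ! -f "$DEST" ]; then
    # Seek to the last byte and write it once: a SIZE-byte sparse file.
    dd if=/dev/zero of="$DEST" bs=1 count=1 seek=$((SIZE - 1)) status=none
fi

# "Restore" a single 1 MiB chunk at offset 32 MiB without truncating;
# zero chunks are simply skipped, never written.
dd if=/dev/urandom of="$DEST" bs=1M count=1 seek=32 conv=notrunc status=none

ls -l "$DEST"    # apparent size: 67108864 bytes
du -k "$DEST"    # actual usage: roughly the 1 MiB that was written
```

Only the non-zero chunks ever touch the disk, so the restored file stays thin.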

This and the sftp retrying will be the next two things I’ll be working on for Vertical Backup.

Hi - sounds good. If you let me know when you've got those fixes incorporated, I'll try it again. For now I've come up with another solution for this task, but if it works easily and reliably for backup (without too many workarounds), I'm still interested.

Thanks

Version 1.1.0 now supports restoring thin-provisioned disks and retrying on SFTP errors.