Local & Offsite Backup Solution

Hello,

We were hoping to get some pointers from you @gchen. We’re attempting to set up a local and offsite backup solution using Vertical, but we seem to be running into multiple issues.

I’ll explain how the setup currently works and then what we propose to do; perhaps you could then comment on the best possible solution.

Current Setup:

  • Daily Vertical backup (using 4 threads) to a local Synology NAS via NFS. This is executed via cron through a .sh script which first checks whether Vertical is already running (it would be good if there were some built-in functionality to prevent multiple backups colliding); a rough sketch of such a wrapper follows this list.
  • Weekly Vertical backup (using 4 threads) to a remote Synology NAS via SFTP. This is also executed via cron.
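Something along these lines is what we mean by the wrapper; this is only a minimal sketch, and the lock path, log path and the actual Vertical Backup command line are placeholders rather than our exact script:

    #!/bin/sh
    # Refuse to start a backup while another Vertical run is still in progress.
    # mkdir is atomic, so an un-removed directory doubles as a simple lock.
    LOCKDIR=/tmp/vertical-backup.lock

    if ! mkdir "$LOCKDIR" 2>/dev/null; then
        echo "$(date) another Vertical backup appears to be running, skipping" >> /var/log/vertical-cron.log
        exit 0
    fi
    trap 'rmdir "$LOCKDIR"' EXIT

    # Placeholder: substitute the real Vertical Backup invocation here.
    /path/to/vertical backup >> /var/log/vertical-cron.log 2>&1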

Local:
The local backup seems to be working fine, other than every now and again we have to consolidate the snapshots, or it hits a random failure as mentioned here: https://verticalbackup.com/issue?id=5769928858664960

Remote:
The remote backup is where we are having the most issues. First of all, the upload speed is only 1 Mbps (ADSL). Backups have taken between 1 and 5 days to complete and often fail, sometimes without any notification. I think it has only failed once without a notification, and a few times with a notification when a) a local backup killed it or b) I dropped the internet connection.
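For context on why the remote runs take so long: 1 Mbps is roughly 0.125 MB/s, or about 10-11 GB per day even with the line fully saturated, so any run that has to push more than a couple of tens of gigabytes of new chunks will inevitably span multiple days.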

2018-06-08 16:00:02.398612 INFO STORAGE_CREATE Storage set to /vmfs/volumes/syn-01/
2018-06-08 16:00:09.150488 INFO SNAPSHOT_GETALLVM Listing all virtual machines
2018-06-08 16:00:09.409175 INFO BACKUP_VM Backing up dc1, id: 1, vmx path: /vmfs/volumes/SSD/dc1/dc1.vmx, guest os: windows9Server64Guest
2018-06-08 16:00:09.471941 INFO BACKUP_PREV Last backup at revision 29 found
2018-06-08 16:00:12.394991 INFO SNAPSHOT_POWER Virtual machine dc1 is powered on
2018-06-08 16:00:12.395180 INFO SNAPSHOT_REMOVE Removing all snapshots of dc1
2018-06-08 16:00:13.662907 INFO SNAPSHOT_CREATE Creating a new virtual machine snapshot for dc1
2018-06-08 16:00:20.625723 INFO BACKUP_UPLOAD Uploaded file /vmfs/volumes/SSD/dc1/dc1.vmdk
2018-06-08 16:00:20.639202 INFO BACKUP_UPLOAD Uploaded file /vmfs/volumes/SSD/dc1/dc1_1.vmdk
2018-06-08 16:00:20.639992 INFO BACKUP_UPLOAD Uploading file dc1-flat.vmdk
2018-06-08 16:00:20.640865 INFO RESTORE_THREAD Using 4 uploading threads
2018-06-08 16:06:12.112722 INFO BACKUP_UPLOAD Uploaded file dc1-flat.vmdk 174.81MB/s 00:05:51
2018-06-08 16:06:12.112836 INFO BACKUP_UPLOAD Uploading file dc1_1-flat.vmdk
2018-06-08 16:06:12.117137 INFO RESTORE_THREAD Using 4 uploading threads
2018-06-08 16:24:35.217890 INFO BACKUP_UPLOAD Uploaded file dc1_1-flat.vmdk 232.07MB/s 00:18:23
2018-06-08 16:24:35.220733 INFO BACKUP_UPLOAD Uploaded file dc1.vmx
2018-06-08 16:24:35.222065 INFO BACKUP_UPLOAD Uploaded file dc1.vmxf
2018-06-08 16:24:42.675069 INFO BACKUP_DONE Backup dc1@hv1 at revision 30 has been successfully completed
2018-06-08 16:24:42.675378 INFO BACKUP_STATS Total 317469 chunks, 317463.32M bytes; 3082 new, 3079.00M bytes, 1688.89M uploaded
2018-06-08 16:24:42.675434 INFO BACKUP_TIME Total backup time: 00:24:22
2018-06-08 16:24:42.704356 INFO SNAPSHOT_REMOVE Removing all snapshots of dc1
2018-06-08 16:24:45.002324 INFO BACKUP_VM Backing up ts1, id: 2, vmx path: /vmfs/volumes/SSD/ts1/ts1.vmx, guest os: windows9_64Guest
2018-06-08 16:24:45.100051 INFO BACKUP_PREV Last backup at revision 25 found
2018-06-08 16:24:46.095895 INFO SNAPSHOT_POWER Virtual machine ts1 is powered on
2018-06-08 16:24:46.096164 INFO SNAPSHOT_REMOVE Removing all snapshots of ts1
2018-06-08 16:24:47.356885 INFO SNAPSHOT_CREATE Creating a new virtual machine snapshot for ts1
2018-06-08 16:25:03.881751 INFO COMMAND_OUTPUT Create Snapshot:
2018-06-08 16:25:03.882028 INFO COMMAND_OUTPUT Create snapshot failed
2018-06-08 16:25:03.882095 ERROR COMMAND_RUN Command '/bin/vim-cmd vmsvc/snapshot.create 2 2018-06-08-16-24-47 'Created by Vertical Backup 1.1.5' 0 1' returned 1
2018-06-08 16:25:03.882196 INFO SNAPSHOT_REMOVE Removing all snapshots of ts1
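When snapshot.create fails like this, the snapshot state can be checked by hand on the ESXi host before the next run; a rough sketch using standard vim-cmd calls (vmid 2 is the id shown for ts1 in the log above):

    # List registered VMs and their vmids to confirm which id belongs to ts1:
    vim-cmd vmsvc/getallvms
    # Show the current snapshot tree for vmid 2:
    vim-cmd vmsvc/snapshot.get 2
    # Check whether a snapshot or consolidation task is still pending on that VM:
    vim-cmd vmsvc/get.tasklist 2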

I noticed you mentioned that SFTP retry is already built into Vertical, although I’m not sure how this works. Is it just that once a chunk/file fails to upload it will retry that file? What if the connection drops for, say, 30 seconds to 1 minute?
I also wonder whether, when the remote backup fails, it’s causing our snapshots to get messed up, which is why we then have to consolidate the drives/snapshots to fix it.
As I’ve also mentioned on the ‘Issues’ page, purging doesn’t work, and I know you mentioned Duplicacy can do multi-threaded purging.

Next/Proposed Solution:
As the above currently isn’t working, we were thinking of doing the following:

Keep the local backup running as is using Vertical, because that tends to work fine (hopefully we can isolate the issue that requires manual consolidation of snapshots).

We’d then have another weekly backup going to the local NAS. From the local NAS we’d run Duplicacy to push the files from the ‘Weekly’ backup to our remote NAS over SFTP.
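For concreteness, the Duplicacy side of that plan would presumably look something like this on the NAS; the paths, the snapshot id "weekly" and the SFTP URL are placeholders, not a tested configuration:

    # Initialise a Duplicacy repository over the folder that holds the weekly Vertical backup:
    cd /volume1/vertical-weekly
    duplicacy init weekly sftp://backup@remote-nas/duplicacy
    # Push it to the remote NAS over SFTP (run this from the NAS scheduler or cron):
    duplicacy backup -threads 4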

If the Duplicacy backup via SFTP failed:

  • Could we get this to retry if the connection dropped?
  • If the backup failed, would it pick up from where it left off or have to start over etc?

What are your thoughts on the above? Perhaps you can give us some pointers on the best possible solution for what we’re trying to achieve.

I think using Duplicacy’s copy command to copy selected backups from the local NAS to the remote NAS is the best option. You don’t need to run a separate weekly backup – instead, just figure out the revision numbers of the backups that you want to copy to the remote NAS and then pass those revision numbers to the copy command.

This wiki page explains how to back up to multiple storages with the copy command: https://github.com/gilbertchen/duplicacy/wiki/Back-up-to-multiple-storages
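As a concrete sketch of that, assuming the two storages are named "default" (local NAS) and "offsite" (remote NAS), and using made-up revision numbers:

    # Copy only revisions 30 and 37 from the local storage to the remote one:
    duplicacy copy -r 30 -r 37 -from default -to offsite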

Hi @gchen,

We’re happy to use Duplicacy if it works for us, but this is just taking too much time. Please could you answer the questions I raised above?

What you’ve suggested about finding out the revision numbers sounds like a very manual process. Why would we not just set up a separate weekly share? That way we wouldn’t need to check the revision numbers every time we want to copy something to the remote NAS, as we’d just set up a Duplicacy job to run every week and copy everything from that share automatically.
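In other words, something like this on the NAS scheduler would be the idea; the binary path, repository path and log file are placeholders:

    # Every Sunday at 02:00, back up the weekly share to the remote storage:
    0 2 * * 0 cd /volume1/vertical-weekly && /usr/local/bin/duplicacy -log backup >> /var/log/duplicacy-weekly.log 2>&1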

Kind Regards

What might be useful in this circumstance is if the duplicacy copy command supported tags; then you could use a single storage and copy out your weekly ‘tagged’ backups to remote storage.
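In the meantime, tags can at least make finding the right revisions less manual, since the backup and list commands both accept a -t option; the tag name below is only an example, and the revision numbers still have to be read off the list output before being passed to the copy command as shown earlier:

    # Tag the weekly run when it is created:
    duplicacy backup -t weekly
    # List only the revisions carrying that tag, and note their revision numbers:
    duplicacy list -t weekly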

Could we get this to retry if the connection dropped?

Currently Duplicacy doesn’t support retry in the SFTP backend. However, unlike the backup command, the copy command can be run multiple times without side effect. So it will be fairly easy to add retry to your script – just call the copy command several times, without checking the return code.
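A minimal version of that script could look like the following; the storage names and revision number are placeholders, and because already-copied chunks are skipped, each rerun simply resumes the previous attempt:

    #!/bin/sh
    # Re-run the copy a few times without checking the exit code; a run that
    # failed part-way is picked up again because existing chunks are skipped.
    for attempt in 1 2 3; do
        duplicacy copy -from default -to offsite -r 30
    done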

We’ll get retry into the SFTP backend in Duplicacy 2.1.1.

If the backup failed, would it pick up from where it left off or have to start over etc?

Yes, the copy command is effectively resumable: it skips chunks that were previously uploaded, so a rerun picks up where the failed run left off.