Is it possible for Vertical Backup to back up to multiple destinations?
Currently, Vertical Backup's built-in cron management only creates a single cron entry; there is no way to specify different command lines to run at different times, which would allow backups to different destinations to be scheduled separately. I presume this will work if we specify the cron entries manually (and, for now, avoid using the built-in cron management)?
Equally, it would be great if the cron management were improved to allow multiple cron entries, so that we could offset the backup times of specific VMs even when they go to the same destination.
You can create multiple directories, such as /opt/vertical1, /opt/vertical2, and so on, and initialize each one to back up to a different storage. However, as you pointed out, the built-in cron management won't handle this, so you'll have to create your own cron entries.
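If you go that route, the manual root crontab entries on the ESXi host might look something like the sketch below. The executable paths, backup command form, and VM name are placeholders, so adapt them to your actual installation:

    # /var/spool/cron/crontabs/root on the ESXi host (placeholder commands and VM name)
    0 1 * * * /opt/vertical1/vertical backup example-vm    # first storage at 01:00
    0 4 * * * /opt/vertical2/vertical backup example-vm    # second storage at 04:00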
However, the recommended way to back up to multiple destinations is to run Duplicacy on a spare computer and use its copy command to copy backups from one storage to another. This not only saves computing resources on the ESXi host, but also produces identical backups on the different storages. In addition, Duplicacy supports more cloud storages than Vertical Backup.
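As a rough sketch, the copy itself is a single command on the spare computer; the storage names below are just examples of whatever names you registered the storages under:

    # copy every backup from the storage named "default" to the one named "offsite"
    duplicacy copy -from default -to offsite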
You don’t need a license to run the CLI version of Duplicacy if you only use it to manage backups created by Vertical Backup. Let me know if you need help setting up Duplicacy.
I read this response with interest, as I need the same thing: a backup to a secondary storage attached to my ESXi host, and an offsite copy on the RAID at an office location (which I’ll refer to as “local” from now on).
So I have been playing with Duplicacy to get this to work. It was more difficult than I expected, because the instructions for the “duplicacy add -copy …” command were really not clear to me.
I think I figured it out by trial and error. On the office machine where I want the secondary storage, I first initialized a repository pointing at the remote storage on my ESXi machine, then ran the “duplicacy add -copy X Y Z” command to add the local storage.
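In case it helps anyone else, the commands were roughly along these lines (the storage names, repository id, and URLs here are illustrative placeholders, not my actual setup):

    # in an empty directory on the office machine:
    # point the "default" storage at the storage the ESXi host backs up to
    duplicacy init office-copy sftp://user@esxi-host/path/to/storage
    # add the office RAID as a second, copy-compatible storage named "local"
    duplicacy add -copy default local office-copy /path/to/local/storage
    # then copy everything from the remote storage to the local one
    duplicacy copy -from default -to local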
Now, this seems to have worked, BUT…
I initiated a copy, and it is extremely slow. I have a 40 Mbit/s download on the incoming line. These 125 GB have already taken over 24 hours, and it is reporting 14149/112760 chunks so far. That’s only about 13%, meaning the full copy will take around 8 days.
Furthermore, I had previously made a backup directly from the ESXi machine to the same local office storage, so there should already be many duplicate chunks in the local storage. But it seems Duplicacy is not taking advantage of this.
Can you let me know whether I am doing this right, and if so, how to speed it up? If copies are this slow even with only incremental changes, I have many VMs and I don’t think it will work out.
It would be very helpful to have a guide from you on how to do this right, since it is not very obvious.
Are you running Duplicacy 2.0.9? There were a few optimizations to the copy command in 2.0.9 that should make a big difference if you’re running an older version.
I was on 2.0.3. I just grabbed 2.0.9 and am running it now, and it is recognizing chunks that already exist. So that’s progress! It still seems on the slow side, but it is definitely faster than before. It takes 1-3 seconds per chunk, even when the chunk already exists on the destination (40 Mbit/s connection here, 1 Gbit/s at the remote site, so it seems like it should be faster).
Do you think the new post-2.0.9 optimization would help? This hasn’t completed yet; it’s only at 81 out of 114930 chunks so far.
I suspect that the previous backup to the copy destination storage didn’t complete, so it left many chunks there but no backups. If that is the case, then the post-2.0.9 optimization would definitely help, since it now simply lists all chunks on the destination storage instead of finding them through existing backups.
I am not very familiar with Go, but by trial and error I managed to update Duplicacy to the post-2.0.9 version, and it is working FAST now!
In under an hour it’s almost 20% done; it now skips duplicate chunks very quickly.
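In case it helps anyone else, the update was basically the standard Go build workflow at the time; this may differ from whatever the project README currently recommends:

    # fetch and build the latest source; the resulting duplicacy binary lands in $GOPATH/bin
    go get -u github.com/gilbertchen/duplicacy/...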
Were these remote storages initialized separately using the init command? If so, you may not be able to copy them to a single destination storage, because their config files are not compatible with each other.
To create storages with compatible config files, use the add command with the -copy option, or simply copy the same config file by hand to the different storages.
Once you’ve done that, you can use the copy command to copy from multiple storages to one destination storage. Just remember that the repository ids should be different, otherwise there will be conflicts.
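As a sketch of the mechanics (all storage names, repository ids, and URLs below are placeholders), creating the storages as copy-compatible and then copying them into one destination looks roughly like this:

    # initialize the first storage; it is registered under the name "default"
    duplicacy init esxi1-vms sftp://user@esxi1/backups
    # add the other storages with -copy so their configs are compatible with "default"
    duplicacy add -copy default esxi2 esxi2-vms sftp://user@esxi2/backups
    duplicacy add -copy default combined all-vms /mnt/raid/backups
    # copy from each source storage into the shared destination;
    # the repository ids (esxi1-vms, esxi2-vms) must differ to avoid conflicts
    duplicacy copy -from default -to combined
    duplicacy copy -from esxi2 -to combined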
Sorry about the lack of better documentation. I’ll start working on a wiki guide for Duplicacy.