Backup to multiple destinations

Is it possible for Vertical Backup to back up to multiple destinations?

Currently, vb’s cron management only creates a single cron entry; it can’t schedule different command lines at different times, so backups to different destinations can’t run on separate schedules. I presume this will work if we specify the cron entries manually (and, for now, avoid using vb’s built-in cron management)?

It would also be great if vb’s cron management were improved to allow multiple cron entries, so that we could offset the backup times of specific VMs even when they go to the same destination.

Thanks

You can create multiple directories, like /opt/vertical1, /opt/vertical2, …, and initialize each one to back up to a different storage. However, as you pointed out, the built-in cron management won’t handle this, so you’ll have to create your own cron entries.
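For example, the manual entries might look something like this (a sketch only - the crontab path and the exact backup command line are assumptions; use whatever command you normally run by hand from each directory):

# entries for /var/spool/cron/crontabs/root on the ESXi host
0 1 * * * cd /opt/vertical1 && ./vertical backup    # destination 1 at 01:00
0 3 * * * cd /opt/vertical2 && ./vertical backup    # destination 2 at 03:00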

But the recommended way to back up to multiple destinations is to run Duplicacy on a spare computer and use the copy command to copy backups from one storage to another. This not only saves computing resources on ESXi, but also creates identical backups on the different storages. Besides, Duplicacy supports more cloud storages than Vertical Backup.
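A rough sketch of that setup on the spare computer (the storage names, snapshot id, and URLs below are placeholders - substitute your own):

cd /path/to/repository
# 'default' points at the storage that Vertical Backup writes to:
duplicacy init my-vms sftp://root@esxi-host/path/to/backups
# add a second storage whose config is copy-compatible with 'default':
duplicacy add -copy default offsite my-vms sftp://user@office-host/path/to/duplicacy
# copy every backup from the first storage to the second:
duplicacy copy -from default -to offsite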

You don’t need a license to run the CLI version of Duplicacy if the use case is to manage backups created by Vertical Backup. Let me know if you need help with setting up Duplicacy.

Hi -

I read this response with interest, as I need the same thing - a local backup to secondary storage attached to my ESXi host, and offsite storage on the RAID at an office location (which I’ll refer to as “local” from now on).

So I have played with duplicacy to get this to work. It was more difficult than I expected, because the instructions for the “duplicacy add -copy …” command were really not clear to me.

I think I figured it out by trial and error. On my office machine, where I want the secondary storage, I first added the remote storage on my ESXi machine, then ran the “duplicacy add -copy X Y Z” command to add the local storage.

Now, this seems to have worked, BUT…

I initiated a copy, and it is super duper slow. I have a 40 Mbit download on the incoming line. This 125 GB has already taken over 24 hours, and it is reporting 14149/112760 chunks so far. That’s only about 13%, meaning it will take 8 days to finish.

Furthermore, I previously made a backup to this same office storage directly from the ESXi machine, so there should be many duplicate chunks already in the local storage. But it seems Duplicacy is not taking advantage of this.

Can you let me know if I am doing this right, and if so, how to speed this up? If backups are this slow even with only incremental changes, I don’t think it will work out - I have many VMs.

It would be very very helpful to have a guide from you on how to do this right, since it is not very obvious.

Thanks
Morgan

Are you running Duplicacy version 2.0.9? There were a few optimizations to the copy command in 2.0.9 that should make a big difference if you’re running an older version.

There was another optimization after 2.0.9: https://github.com/gilbertchen/duplicacy/commit/0bf66168fb7b53b72bf93afd14a90cc9508998bf, but I don’t think this one matters if your previous backup was completed.

Hi -

I was on 2.0.3. I just grabbed 2.0.9 and am running it - it is now recognizing chunks that already exist. So that’s progress! It still seems on the slow side, but definitely faster than it was. It is taking 1-3 seconds for each chunk, even if they already exist on the destination (40 Mbit connection here, 1 Gbit at the remote site, so it seems like it should be faster).

Do you think the new post-2.0.9 optimization would help? This run hasn’t completed yet - it’s only at 81 out of 114930 chunks so far :)

It is taking 1-3 seconds for each chunk, even if they already exist on the destination

This is strange. Can you run duplicacy -d -log copy ... and post a few lines of the log here?

Here are some logs:

2017-09-28 19:32:48.344 INFO SNAPSHOT_COPY Chunk d92e8d1c98bc43b9a143bf63265bb4bf0e4a16ae0d44f45486394e85dc731811 (4/114930) copied to the destination
2017-09-28 19:32:51.408 DEBUG CHUNK_DOWNLOAD Chunk 41c502d0a422eb72c0fcd21946b15398fc374650b0c63daa8c9dd85f9d0bf3b5 has been downloaded
2017-09-28 19:32:51.409 DEBUG SNAPSHOT_COPY Copying chunk cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1 to cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1
2017-09-28 19:32:51.409 DEBUG DOWNLOAD_FETCH Fetching chunk cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1
2017-09-28 19:32:51.470 DEBUG CHUNK_DUPLICATE Chunk 41c502d0a422eb72c0fcd21946b15398fc374650b0c63daa8c9dd85f9d0bf3b5 already exists
2017-09-28 19:32:51.470 INFO SNAPSHOT_COPY Chunk 41c502d0a422eb72c0fcd21946b15398fc374650b0c63daa8c9dd85f9d0bf3b5 (5/114930) exists at the destination
2017-09-28 19:32:55.242 DEBUG CHUNK_DOWNLOAD Chunk cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1 has been downloaded
2017-09-28 19:32:55.243 DEBUG SNAPSHOT_COPY Copying chunk 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e to 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e
2017-09-28 19:32:55.243 DEBUG DOWNLOAD_FETCH Fetching chunk 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e
2017-09-28 19:32:55.310 DEBUG CHUNK_DUPLICATE Chunk cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1 already exists
2017-09-28 19:32:55.310 INFO SNAPSHOT_COPY Chunk cee0145363a2e95b33acb0175bdf6fde3a6bcf3de55b4c8d21541cb25dcc4ed1 (6/114930) exists at the destination
2017-09-28 19:32:57.776 DEBUG CHUNK_DOWNLOAD Chunk 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e has been downloaded
2017-09-28 19:32:57.777 DEBUG SNAPSHOT_COPY Copying chunk 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4 to 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4
2017-09-28 19:32:57.777 DEBUG DOWNLOAD_FETCH Fetching chunk 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4
2017-09-28 19:32:57.836 DEBUG CHUNK_DUPLICATE Chunk 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e already exists
2017-09-28 19:32:57.836 INFO SNAPSHOT_COPY Chunk 1d2434bd1134bb97fd509a9b87c0946de8e4afc4dfd33435944828c9ce31e24e (7/114930) exists at the destination
2017-09-28 19:33:01.741 DEBUG CHUNK_DOWNLOAD Chunk 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4 has been downloaded
2017-09-28 19:33:01.742 DEBUG SNAPSHOT_COPY Copying chunk 2a9c845af5276fcb8a0b91f2989e1446e2a61ec70363e81aae7029825ceb60c6 to 2a9c845af5276fcb8a0b91f2989e1446e2a61ec70363e81aae7029825ceb60c6
2017-09-28 19:33:01.742 DEBUG DOWNLOAD_FETCH Fetching chunk 2a9c845af5276fcb8a0b91f2989e1446e2a61ec70363e81aae7029825ceb60c6
2017-09-28 19:33:01.802 DEBUG CHUNK_DUPLICATE Chunk 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4 already exists
2017-09-28 19:33:01.802 INFO SNAPSHOT_COPY Chunk 1a851d960260e26fef09ef2a5dc46c6c8b93c0cabc85155e11d0fa19390191f4 (8/114930) exists at the destination

I suspect that the previous backup to the copy destination storage didn’t complete, so it left many chunks there but no backups. If that is the case, then the post-2.0.9 optimization would definitely help, since it now just lists all chunks on the destination storage instead of finding them through existing backups.

okay, yay!

I am not very familiar with Go, but somehow by trial and error I managed to get Duplicacy updated to the post-2.0.9 version, and it is working FAST now!
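(In case it helps anyone else: what eventually worked for me was fetching and building the latest source with the Go toolchain, roughly like this - assuming Go is installed and GOPATH is set, the binary ends up in $GOPATH/bin:)

go get -u github.com/gilbertchen/duplicacy/...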

In under an hour it’s almost 20% done - it now skips duplicate chunks very quickly.

Thanks!

One more question now that this is working:

Is there any special procedure when I want to copy from multiple different remote storages to my local office storage?

Or do I just run several more “duplicacy add” commands to get it working?

A clear step-by-step guide would be very helpful, as the trial-and-error approach is very time consuming.

Thanks

Were these remote storages initialized separately using the init command? If so, then you may not be able to copy them to a single destination storage, because their config files are not compatible with each other.

To create storages with compatible config files, use the add command with the -copy option. Or you can simply copy the same config file to the different storages by hand.

Once you’ve done that, you can use the copy command to copy from multiple storages into one destination storage. Just remember that the repository ids should be different, otherwise there will be conflicts.
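For example, on the machine holding the destination storage (all names, ids, and URLs below are placeholders):

cd /path/to/repository
# 'default' is the destination storage:
duplicacy init office-vms /backups/duplicacy
# create each remote storage with a config compatible with 'default'
# (if the storages already share an identical config, add them without -copy):
duplicacy add -copy default esxi1 esxi1-vms sftp://root@esxi1/path/to/backup
duplicacy add -copy default esxi2 esxi2-vms sftp://root@esxi2/path/to/backup
# copy from each remote storage into the destination; note the distinct
# repository ids (esxi1-vms, esxi2-vms) to avoid conflicts:
duplicacy copy -from esxi1 -to default
duplicacy copy -from esxi2 -to default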

Sorry about the lack of better documentation. I’ll start working on a wiki guide for Duplicacy.

Hi @gchen,

I got this set up and it’s working fine.

I didn’t have to copy the config files, since they were all created with the default Vertical Backup settings and appeared to be identical.

Thanks for the help, and I look forward to the guide.

Right, if you don’t enable encryption then the init command will create identical config files with the default settings.

Hi

Now I am onto the next related problem.

I’m trying to set up duplicacy via a launchd job on MacOS to run this daily.

I created a script run-duplicacy.sh that is run by launchd upon load and then once nightly.

When I run this script locally, it finds the .duplicacy preferences file.

When I run this script via launchd (launchctl load), it fails.

So I tried specifying the location of the prefs file, just like this:
duplicacy copy -pref-dir /Users/morgan/.duplicacy/ -from …

However, duplicacy chokes on this.

How can I set up the environment to point Duplicacy at the preferences file that lists all my repositories?

Thanks!

I usually run Duplicacy in a script this way:

cd /path/to/repository && duplicacy copy -from ...
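
If you’d rather have launchd run Duplicacy directly, a minimal plist along these lines should also work - the label, binary path, storage names, and schedule below are assumptions for illustration; the important part is that WorkingDirectory points at the repository directory containing .duplicacy:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.duplicacy-copy</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/duplicacy</string>
        <string>copy</string>
        <string>-from</string>
        <string>default</string>
        <string>-to</string>
        <string>offsite</string>
    </array>
    <!-- run from the repository directory so Duplicacy finds .duplicacy -->
    <key>WorkingDirectory</key>
    <string>/Users/morgan</string>
    <!-- run nightly at 02:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>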