
Tl;dr: I'm using Duplicacy with the new Web UI. It is hosted in a docker image and currently pushes data to an Azure storage account.

Also, wow, I just had a slight heart attack while writing this, as I removed Docker from my NAS, which blew away a whole share of my Docker data (14 different containers, including all my NextCloud personal files!). They were all backed up with Duplicacy, and while I had tested it before with a few files, you never know. The recovery wasn't as painless as I'd like – partially my fault for having the drives mounted into the container read-only, partially because the GUI isn't super great yet, and mostly because the Azure connections kept getting reset and the underlying CLI doesn't account for that – but it's all back and humming along again.
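Roughly, this is what Duplicacy is doing under the hood – the Web UI drives the same duplicacy CLI – shown here as a minimal sketch with a made-up snapshot ID, storage account, and container:

```
# Run from the directory tree being protected; every name here is a placeholder.
cd /path/to/share

# Initialize an encrypted (-e) repository backed by Azure blob storage;
# duplicacy prompts for the storage account access key and a passphrase.
duplicacy init -e nas-docs azure://mystorageaccount/backups

# Incremental, block-deduplicated upload of the current state.
duplicacy backup -stats

# List stored revisions, then restore files from one of them.
duplicacy list
duplicacy restore -r 1
```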

I've only included the main contenders below. In particular, I was interested in using non-proprietary storage backends that allowed me multiple options (B2, AWS, Azure, etc.). The ones that were quickly removed and not tested:

Glacier Backup (Synology) – snapshot-only backups; I wanted something more incremental to keep costs in check.
Cloud Sync (Synology) – this is just mirroring, which isn't really a backup.
iDrive – I'm not 100% sure why anymore; possibly because there was no way to test it and you have to use their storage.
CloudBerry Synology – not supported anymore.

CrashPlan

CrashPlan has served me great for a large number of years. I have used it from two different continents successfully. There are definitely some good things about it: continuous backup, dedupe at the block level, compression, and you can provide your own encryption key. However, with the changes a while ago (and the continual changes I get emailed about), I knew it was time to look for other options. Plus, even with one device, it was going to jump from $50/year to $120 – while not horrible, definitely a motivator.

Hyper Backup (Synology)

I store most of my data on my Synology NAS, and it comes with some built-in tools (Glacier Backup, Hyper Backup, and Cloud Sync). I was actually running CrashPlan in a docker image on the NAS prior to doing this assessment. Of the three tools, Hyper Backup was really the only one I considered, as Glacier is for snapshots and Cloud Sync isn't really a backup product. With Hyper Backup you can back up to multiple different storage providers, including Azure, which was my preferred. Like CrashPlan, it can do dedupe at the block level and compression, and it allows you to specify your own encryption.

Duplicati

With Duplicati, I ran it from a docker image on my NUC. The software is free; you only pay for the storage you use. Unlike CrashPlan, it isn't continuous (it can do hourly), it will send failure emails, and it won't automatically include new folders in a root if only some of the subfolders are selected.
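For reference, a sketch of one way to run it in Docker – the linuxserver image, port, and paths below are assumptions for illustration, not necessarily what I used:

```
# Hedged sketch: run Duplicati in a container, exposing its web UI on 8200.
# /config holds Duplicati's own settings; /source is the data to back up,
# mounted read-only so a backup job can never modify it.
docker run -d --name duplicati \
  -p 8200:8200 \
  -v /path/to/duplicati-config:/config \
  -v /path/to/data:/source:ro \
  --restart unless-stopped \
  lscr.io/linuxserver/duplicati:latest

# Backup jobs, the hourly schedule, and the storage backend (B2, Azure, etc.)
# are then configured through the web UI at http://<host>:8200.
```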
I can describe my method that has worked well for a few decades, but it might not be for everyone. Before planning a backup, I first ensure all the files I care about are isolated into unique directories not shared by anything I don't care about: /data/something_unique, /opt/something_unique, /home/username/something_unique, and so on – something_unique just being a unique directory that contains anything or everything I care about.
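To illustrate the payoff (these paths are made up), the entire backup selection then becomes one short, explicit list that can be handed to whatever tool does the copying:

```
# Hypothetical paths - the point is that the list is short and explicit.
BACKUP_DIRS="/data/something_unique /opt/something_unique /home/username/something_unique"

# Sanity-check what would be protected; the same list becomes the backup
# points for rsnapshot (next step) or any other tool.
du -sh $BACKUP_DIRS
```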

I then have rsnapshot installed, and it keeps a local snapshot of the files I consider important enough that I want a few versions of. rsnapshot is just a Perl script that uses hardlinks to reduce the disk space used by duplicate files. One could also define other shared directories in rsnapshot, like /home/username/.config, or the entire home directory if you have the disk space for it.
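As a rough sketch of the relevant lines in /etc/rsnapshot.conf (the snapshot root and retention counts are placeholders, and rsnapshot requires fields to be separated by tabs, not spaces):

```
# /etc/rsnapshot.conf (excerpt) - fields must be tab-separated.
config_version	1.2
snapshot_root	/snapshots/

# How many snapshots of each level to keep (the names are arbitrary labels).
retain	hourly	6
retain	daily	7
retain	weekly	4

# Backup points: the isolated directories, plus any shared ones worth keeping.
backup	/data/something_unique/	localhost/
backup	/home/username/.config/	localhost/

# cron then triggers the rotations, e.g.:
#   0 */4 * * *   /usr/bin/rsnapshot hourly
#   30 3 * * *    /usr/bin/rsnapshot daily
#   0 4 * * 1     /usr/bin/rsnapshot weekly
```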

If files are very important, I will also copy them to a portable NAS and put it in my vehicle.
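That last copy is just a plain mirror; here is a sketch with rsync, assuming the portable NAS is mounted at a made-up /mnt/portable-nas (any copy tool would do):

```
# -a preserves permissions/times, -H preserves hardlinks (useful if the
# rsnapshot tree is copied too), --delete keeps the mirror exact.
rsync -aHv --delete /data/something_unique/ /mnt/portable-nas/something_unique/

# Optionally carry the versioned snapshots as well.
rsync -aHv --delete /snapshots/ /mnt/portable-nas/snapshots/
```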
