Where to send Sympl Backups?

Ahoy there,

I don’t have an issue per se; my Sympl installation is running perfectly. I’m just wondering how other users are backing up machines running Sympl?

Sympl’s default backup locations are /var/backups/localhost and /var/backups/MySQL.
The support docs recommend synchronising all local backups to another server on another network.

I have a machine hosting one small website plus 250 mailboxes. In terms of resources the server is coping; however, I’m always wary of running out of system disc space due to the amount of mail being stored on the server.

Is it a Sympl system requirement to back up to /var/backups/localhost and /var/backups/MySQL, or can I send the backups to some archive grade storage discs on another machine? What are the pros and cons of doing so?

How are you guys out there maintaining backups?

All the best - Pete

You can simply mount /var/backups to the archive grade disk. That works well for me as it means my server does not crash when the root disk becomes full.
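
For example, assuming the archive disk shows up as /dev/sdb1 and is formatted ext4 (both placeholders; adjust for your hardware), a line like this in /etc/fstab mounts it over /var/backups at boot:

```
# /etc/fstab entry (device name and filesystem type are assumptions)
/dev/sdb1  /var/backups  ext4  defaults,nofail  0  2
```

The nofail option means the server still boots even if the archive disk is missing.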

That does assume the archive grade disk is mountable from your machine. That really depends on your hosting.

I have added an rsync command to the backup procedure that sends the backup to my NAS drive at home. There is a thread here, Testing backup.name, about how to do this.
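
For anyone wanting a starting point, a crontab entry along these lines does the job; the SSH user, host name and destination path are all placeholders for wherever your NAS lives:

```
# Mirror the local Sympl backups to a NAS over SSH, nightly at 03:30
# (backup@nas.example.com and /volume1/server-backups are placeholders)
30 3 * * * rsync -az --delete /var/backups/ backup@nas.example.com:/volume1/server-backups/
```

Note that --delete keeps the NAS copy an exact mirror of the server, so consider whether you want that or a snapshotting scheme on the NAS side.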

I have uninstalled the Sympl backup module.
Instead, I have these daily backup processes:

  • Dump all databases to .SQL files (I might have copied the Symbiosis code for this)
  • Daily rsync snapshot to an archive grade disk.
  • Borg backup to the same archive disk
  • Identical-spec Borg backup to a remote server
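
As a rough sketch of the first step (the database dump), something like this works; the schema exclusion list and the assumption that mysql/mysqldump authenticate via /etc/mysql/debian.cnf or ~/.my.cnf are mine, not the exact Symbiosis code:

```shell
# dump_all_dbs DIR: write one .sql file per database into DIR
# (assumes mysql/mysqldump can authenticate without a password prompt)
dump_all_dbs() {
    dir="$1"
    mkdir -p "$dir" || return 1
    mysql --batch --skip-column-names -e 'SHOW DATABASES' |
        grep -Ev '^(information_schema|performance_schema|sys)$' |
        while read -r db; do
            mysqldump --single-transaction "$db" > "$dir/$db.sql"
        done
}
```

Call it nightly from cron, e.g. dump_all_dbs /var/backups/mysql, before the rsync and Borg runs pick the files up.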

Most of the few times I’ve had to restore anything, the daily rsync copy sufficed as the problem was less than a day old, and that’s the easiest to do.
The Borg backups give me daily, weekly and monthly snapshots in a similar way to backup2l in Symbiosis and Sympl.
I have a remote server for other reasons, but if you don’t, backing up to a home network like @fogma does is a good alternative, which I’ve also done in the past.

I’m not sure exactly why I chose Borg; I also used rsnapshot for a while before switching. Borg includes deduplication, compression, encryption and seamless working over a network. It takes a bit of work to set up, but there’s helpful documentation.

Another vote for borg here. I do hourly backups of my servers using borg to https://www.borgbase.com/ (no connection other than being a happy customer). That means I can restore file changes with an hourly ‘resolution’ should that ever be necessary. Borg is de-duplicating, so it doesn’t consume loads of disk space just by backing up the same files every hour. I have borg set to automatically clean up old backups, keeping something like the last 24 hourly backups, the last 7 daily backups, the last 4 weekly backups and the last 12 monthly backups.

I think I also have borg set to backup the contents of /var/backups so I could get back to one of them too.

Andy

I had a quick look at borg, but I’m not sure that I understand the difference between borg and backup2l.
Perhaps if you move a file, borg notices, but backup2l treats the moved file as a new one.
But otherwise, what’s the difference?
Backup2l only backs up changed files in each new backup. Does borg do something different, such as only saving diffs of individual files? Sounds risky to me.

From https://borgbackup.readthedocs.io/en/stable/

BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data. The data deduplication technique used makes Borg suitable for daily backups since only changes are stored. The authenticated encryption technique makes it suitable for backups to not fully trusted targets.

The useful thing about Borg for me is that you can ‘mount’ any of your backup sets, and just browse the state of the files when that backup was taken. This means I can backup all files every hour, but the only thing that’s stored on the remote is the differences. However, each backup set is treated as a ‘full’ backup, so restoring a file at any point in time is easy.
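
Concretely, the browsing works via borg mount; the repository path and archive name below are placeholders:

```
# Mount one archive read-only and browse it like an ordinary directory
borg mount /path/to/repo::2024-01-15 /mnt/restore
ls /mnt/restore
# ...copy out whatever you need, then:
borg umount /mnt/restore
```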

I just have it set to backup every hour, and then run the prune process immediately afterwards to keep the last 24 hourly backups, last 7 daily backups, last 4 weekly backups, and last 12 monthly ones. I can then restore a file at any of those points if I need to.
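
For reference, that retention policy maps directly onto borg prune options, something like this (the repository path is a placeholder):

```
borg prune --keep-hourly=24 --keep-daily=7 --keep-weekly=4 --keep-monthly=12 /path/to/repo
```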

Andy

Sorry, the difference still passes me by. Not saying there isn’t a difference: I just don’t understand what it is.

Backup2l also only stores what is changed since the last backup in each incremental backup.

OK, you have to search for versions of a file and can’t “browse” but this is a backup: I’ve never browsed a backup, or wanted to.

With backup2l, each night* a backup is done, and it only stores what changed since the last backup.
Does BorgBackup do something different? If so, what?

*Once a day is ample for me: it’s several years since I needed to restore a file from any backup.

[Later]

Aha! Now I get it. backup2l backs up file by file, whereas Borg backs up chunk by chunk. So if a bit of a file changes, only that bit is backed up in the incremental. That would save a lot of space, at the cost of losing redundancy.

Also, I’m fairly sure that backup2l doesn’t do deduplication, so if your server has many copies of the same file (for example, mine has several WordPress installations on it), Borg’s backup uses barely more storage than a single copy would. The deduplication is also done at the block level, not the file level.
There’s also a problem with backup2l malfunctioning if filenames are a bit weird: some of it is done by shell scripting, and it’s subject to some of the pitfalls of file names containing spaces or quote characters. I don’t remember the exact details, and I don’t know if it was ever fixed.

So if you have a lot of similar files, it only keeps track of how many there are, where they are and what the differences are?
That sounds quite promising, I must say.

Can you share the scripts, or would they only apply to your setup?

It certainly isn’t a sympl requirement to use those locations:
On one machine I have it backing up to archive grade storage, and that seems to work.
I simply changed the backup2l config and the pre-backup sql dump script.

I hear your pain about mailboxes. Google and the other email services have encouraged users to keep all manner of garbage. After a while it’s too much to clear out. But no storage is infinite.

I can’t remember where I got the ideas for this: it might be based on the examples on the Borg web site.
The details will vary, but this is what I do for a local backup, run from cron.

  • My archive volume is mounted on /arch

  • /arch/backup/mysql contains previously created database dumps.

  • The remote backup is very similar, but $BORGREPO is an SSH style URL in the format root@example.com:/path/to/repo

  • The “break-lock” line is there to enable graceful recovery if something crashes during a backup; otherwise the next backup would never run.

  • The prune line keeps every backup from the last 10 days, weekly backups for about a month, and monthly backups for 6 months. Obviously you can tune this to suit your needs.

    DUMPDIR=/arch/backup/mysql
    BORGREPO=/arch/borg

    export BORG_PASSPHRASE="this is not my real passphrase"
    borg break-lock "$BORGREPO"
    borg create "$BORGREPO::{now:%Y-%m-%d}" /srv /etc /var /home /usr/local "$DUMPDIR"

    borg prune --keep-within=10d --keep-weekly=4 --keep-monthly=6 "$BORGREPO"

I hope that helps :slight_smile:


Hairy Dog Sir,
Could you be more specific about which lines to change within the backup2l config and the pre-backup SQL dump script? Your working config example would be a bonus!
Rgds Pete

This is the set of scripts I use:

The example settings file may well be out of date in the repo, check the actual backup script to see if I’ve added anything and not documented it properly!

Andy

The change to the MySQL dump was to add a new file called /etc/sympl-sqldump.config containing the lines:

DUMP_DIR=/mnt/data/backups/mysql
KEEP_MAX_COPIES="1"

and I altered
/etc/sympl/backup.d/conf.d/10-directories.conf
to say
#BACKUP_DIR="/var/backups/localhost"
BACKUP_DIR="/mnt/data/backups/localhost"

Thank you Hairy Dog you’re a real star :+1:

I’m trying to get the offsite backups working again on Bytemark, but I can’t remember the correct address, and Bytemark support are utterly unwilling to offer any help at all.

It’s what they call a “legacy DH” but I can’t work out the precise format to put into the pre-backup and post-backup scripts.

If anyone has a working Symbiosis setup, can they tell me what is in symbiosis/host and what is fileutils, both of which are referred to in the pre-backup script?

I might be able to assist. Can you be more specific regarding the location of the files?

The files I have are in etc/(symbiosis or sympl)/backup.d/pre-backup and post-backup.
They refer to a file called symbiosis/host but I can only guess where it is and what’s in it.
I’m particularly keen to know the precise format of the host address so that I can replicate it on my system, because the files seem to read that to identify that off-site backup location.

I would try writing the location of your external host to /etc/symbiosis/dns.d/backup.name