Dumping a MySQL or MongoDB database for offsite backups

If you’re doing offsite backups of your file system, AS YOU SHOULD BE, you should also dump your databases periodically and include those dumps in your offsite backups.

Here is a simple bash script that creates a timestamped, gzipped SQL file of all your databases using mysqldump. You can import this file back into MySQL should you need to restore later.

First you need to install the MySQL client tools (Debian):
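The original command isn’t shown here, but on a Debian-based system it would be along these lines (the package name varies by release, which is an assumption to check against your distro):

```shell
# Debian/Ubuntu: the mysqldump binary ships with the client package.
# Older Debian releases call it mysql-client; newer ones use
# default-mysql-client (or mariadb-client).
sudo apt-get update
sudo apt-get install mysql-client
```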

…this should provide the mysqldump binary we will be using in the script below:
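The script itself isn’t reproduced here, so the following is a minimal sketch of what it might look like; the ~/data path, the filename format, and the credentials are all placeholders to adapt:

```shell
#!/bin/sh
# Sketch of a backup script: dump all databases into a timestamped,
# gzipped SQL file under ~/data. User and password are placeholders.

BACKUP_DIR="$HOME/data"
STAMP=$(date +%Y-%m-%d_%H%M)
OUTFILE="$BACKUP_DIR/all-databases-$STAMP.sql.gz"

mkdir -p "$BACKUP_DIR"

# --single-transaction gives a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump.
mysqldump --all-databases --single-transaction -u root -p'yourpassword' \
  | gzip > "$OUTFILE"
```

To restore, pipe the file back into the mysql client: `gunzip < all-databases-TIMESTAMP.sql.gz | mysql -u root -p`.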

I create a directory in my home called ~/data and run the script from there. Ideally, you should just write a cronjob that does this for you every few hours or days, depending on how much data you have and how often it changes.
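For example, a crontab entry (added with `crontab -e`) that runs a hypothetical ~/data/backup-mysql.sh every night at 2:30 might look like:

```
# m  h  dom mon dow  command
30 2 * * * /home/username/data/backup-mysql.sh
```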

Here is an example MongoDB backup script that does basically the same thing, creating a timestamped .tgz file of the ./dump directory (mongodump outputs a directory of data rather than a single file):
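Again, the original script isn’t reproduced here; a minimal sketch, with assumed paths and the same timestamp format as above:

```shell
#!/bin/sh
# Sketch of a MongoDB backup: mongodump writes its output to a
# ./dump directory, which we archive as a timestamped .tgz file.

BACKUP_DIR="$HOME/data"
STAMP=$(date +%Y-%m-%d_%H%M)

mkdir -p "$BACKUP_DIR"
cd "$BACKUP_DIR" || exit 1

# Only archive and clean up if the dump actually succeeded.
if mongodump; then
  tar -czf "mongodb-$STAMP.tgz" dump
  rm -rf dump
fi
```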

I then use rsync to back up my /home/username/data directory offsite to another server; it will transfer only the files that have changed on the filesystem.

Be careful with the --delete flag, as it will delete files off your target server that don’t exist on the local/source server. Also, you can use --dry-run to see a preview of what will happen before you actually delete or transfer anything.
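The exact invocation isn’t shown in the original; a typical command along these lines (the hostname and remote path are placeholders) would be:

```shell
# Mirror ~/data to a remote server over SSH: -a preserves permissions
# and timestamps, -z compresses in transit. --delete removes remote
# files that no longer exist locally; --dry-run only previews the run.
rsync -avz --delete --dry-run \
  /home/username/data/ \
  username@backup.example.com:/home/username/data/
```

Drop --dry-run once the preview looks right.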

Keep in mind that if your files somehow become corrupt, you will not have a snapshot from a previous date. You can use rsnapshot for incremental backups. You can also drop the --delete flag, which will keep the older backup files on the remote server intact.
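As a rough illustration, an rsnapshot.conf fragment for this setup (paths are examples, and note that rsnapshot requires tab characters, not spaces, between fields) might look like:

```
# keep 7 daily and 4 weekly snapshots
retain	daily	7
retain	weekly	4

# what to back up, and where it lands under snapshot_root
backup	/home/username/data/	localhost/
```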