Automated remote backup

For my WordPress websites, I use VaultPress. For a scant $15 per month, it gives me an automatic continuous live backup of the whole website. The backup goes to servers in three separate locations. And I don't need to do anything.

Is there anything analogous for Concrete5?

I am aware that I can back up certain parts of a Concrete5 website to my desktop. However, our website will be far too massive for such a thing.

Perhaps there is a hosting entity that does regular backups? A third-party backup service of some kind or other?

TRV
 
Job replied on at Permalink Reply
TRV

We've got an automated database backup that gets e-mailed daily. There's an open-source library out there that does this sort of thing and it's easy to maintain.
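
Roughly, the idea looks like this (a rough Python sketch, not that library; the database name, credentials, and addresses are all placeholders):

```python
#!/usr/bin/env python
"""Rough sketch only: dump a MySQL database and e-mail it as an attachment.
Database name, credentials, and addresses are placeholders."""
import datetime
import smtplib
import subprocess
from email.message import EmailMessage

DB_NAME = "c5_site"                # placeholder
DB_USER = "backup_user"            # placeholder
DB_PASS = "secret"                 # placeholder
MAIL_TO = "you@example.com"        # placeholder
MAIL_FROM = "backups@example.com"  # placeholder

today = datetime.date.today().isoformat()

# Run mysqldump and capture the SQL as bytes.
dump = subprocess.run(
    ["mysqldump", "-u", DB_USER, "-p" + DB_PASS, DB_NAME],
    check=True, capture_output=True,
).stdout

# Attach the dump to a plain e-mail and hand it to the local mail server.
msg = EmailMessage()
msg["Subject"] = "Database backup " + today
msg["From"] = MAIL_FROM
msg["To"] = MAIL_TO
msg.set_content("Nightly dump attached.")
msg.add_attachment(dump, maintype="application", subtype="sql",
                   filename="%s-%s.sql" % (DB_NAME, today))

with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is listening
    smtp.send_message(msg)
```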

Alternatively, it wouldn't be too hard to write a script that runs on a cron job and uploads a database dump to a remote server (I'll assume you have one).
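
Something along these lines would do it (again just a rough sketch; the remote host, user, and paths are made up, and it assumes mysqldump and scp are installed with key-based SSH auth already set up):

```python
#!/usr/bin/env python
"""Rough sketch only: dump the database and copy it to a remote server
over SSH. Host, user, and paths are placeholders."""
import datetime
import subprocess

DB_NAME = "c5_site"                           # placeholder
DB_USER = "backup_user"                       # placeholder
REMOTE = "backups@backup.example.com:dumps/"  # placeholder

dump_file = "/tmp/%s-%s.sql" % (DB_NAME, datetime.date.today())

# Write the dump to a local file first. The password comes from ~/.my.cnf
# so it doesn't show up in the process list.
with open(dump_file, "wb") as out:
    subprocess.run(["mysqldump", "-u", DB_USER, DB_NAME],
                   check=True, stdout=out)

# Push it to the remote box.
subprocess.run(["scp", dump_file, REMOTE], check=True)
```

Point a crontab entry at it, e.g. 0 3 * * * python /path/to/db_backup.py (the path being wherever you save it), and you get a nightly copy off the box.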

Hope this helps.

Job.
MrHyde replied on at Permalink Reply
This may sound like a stupid question, but any time you need to back something up, you need to ask yourself why.

If your answer is to guard against catastrophic data loss, then a cron job as mentioned above is a good idea... then the question becomes how many copies you want to save and how much space they can take up (btw, I HIGHLY suggest bzipping your SQL files for maximum compression). The following tutorial seems pretty on point if you want to accomplish that:
http://dbperf.wordpress.com/2010/06/11/automate-mysql-dumps-using-l...
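
A rough sketch of that kind of cron job (this isn't from the tutorial; the directory, database name, and retention count are placeholders, and MySQL credentials are assumed to live in ~/.my.cnf):

```python
#!/usr/bin/env python
"""Rough sketch only: bzipped nightly dump with simple rotation.
Directory, database name, and retention count are placeholders."""
import datetime
import glob
import os
import subprocess

BACKUP_DIR = "/var/backups/mysql"   # placeholder
DB_NAME = "c5_site"                 # placeholder
KEEP = 7                            # how many copies to keep

os.makedirs(BACKUP_DIR, exist_ok=True)
target = os.path.join(BACKUP_DIR,
                      "%s-%s.sql.bz2" % (DB_NAME, datetime.date.today()))

# Pipe mysqldump straight through bzip2 so the uncompressed .sql never
# touches the disk.
with open(target, "wb") as out:
    dump = subprocess.Popen(["mysqldump", DB_NAME], stdout=subprocess.PIPE)
    subprocess.run(["bzip2", "-9"], stdin=dump.stdout, stdout=out, check=True)
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mysqldump failed")

# Rotation: the date in the filename sorts lexicographically, so the last
# KEEP entries are the newest dumps; delete everything older.
dumps = sorted(glob.glob(os.path.join(BACKUP_DIR, DB_NAME + "-*.sql.bz2")))
for old in dumps[:-KEEP]:
    os.remove(old)
```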

Now I would caution you that using mysqldump will, temporarily, lock your tables while it dumps your data... it does that so nothing else is inserted in the middle of your backup to preserve data integrity.
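
If your tables are InnoDB, you can sidestep most of that with mysqldump's --single-transaction flag, which dumps from a consistent snapshot instead of holding the locks (MyISAM tables don't get that benefit). Roughly:

```python
import subprocess

# --single-transaction takes a consistent snapshot inside a transaction
# instead of locking the tables, but it only helps for transactional
# engines like InnoDB; MyISAM tables still get locked.
with open("/tmp/c5_site.sql", "wb") as out:  # placeholder path and db name
    subprocess.run(["mysqldump", "--single-transaction", "c5_site"],
                   stdout=out, check=True)
```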

If you want to be able to "roll back changes" to your database (perhaps you did something REALLY dumb)... then there are some systems set up similar to Apple's Time Machine functionality for MySQL.

My backup of choice is ALWAYS clustering the database (so I run it in two places at once)... note, this isn't replication; MySQL's built-in replication system sucks pretty hard (it's very fault-intolerant: if it fails, it does so silently and won't alert you, and it requires you to lock your database every time you want to start a new instance or resolve a problem... etc.). I use Galera, which is an open-source build of MySQL that supports clustering... and it's free:
http://codership.com/products/mysql_galera...

This gives me the ability to run backups on one of the nodes, and if anything happens during the backup, it is AUTOMATICALLY switched to slave and resynced... If a nuclear weapon hits one of my data centers, it's no big deal; I've got a copy of MySQL already up and running to restore my data from.

Galera is also pretty good at syncing across long distances without bogging down the server, so you COULD run an instance of it on your server, run another instance of it at home, and have your home desktop join the cluster at night, sync up, and then leave.