Backup is the motherhood and apple pie of system administration. Everyone agrees it's a Good Thing and you can never have too much of it. Nor does it take a national disaster - fire, flood, hacker attack, terrorist bomb or a sys admin typing "rm -r .*" while logged in as the superuser - before the backups come into play. The vast majority of restores from backup follow mundane user errors - simple mistakes like deleting the wrong file or accidentally overwriting the current version with an old one.
There are plenty of excellent proprietary backup packages, but for those with other things to spend their money on Unix provides all the tools needed to create reliable backups and restore from them when the need arises, as it inevitably will.
Full system backups
The baseline for any backup policy is the full system backup. Any of the basic Unix I/O utilities - dd, dump, cpio or tar - could be used, but dd has an important advantage. The command:
dd if=/dev/hda2 of=/dev/nst0
creates an image of the disk hda2 on tape (assuming your tape drive is /dev/nst0). This may not seem particularly useful, since you can't restore an individual file from the tape. However, it's not as worthless as it looks; if the system is wiped out completely the disk can be restored with a single command while running a mini-system booted from CD or floppy.
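That single-command restore is just the dd run in reverse. Booted into the rescue system, with the target partition unmounted, it would look like this (device names assume the same disk and tape drive as above):

```shell
# Restore the raw partition image from tape back onto the disk.
# /dev/hda2 must not be mounted while this runs, or the freshly
# written filesystem will be corrupted underneath the kernel.
dd if=/dev/nst0 of=/dev/hda2
```

Because dd copies raw blocks, the restored partition comes back bit-for-bit identical, filesystem metadata and all.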
Thus it's probably worth making an initial backup of the system disk this way, to use as the starting point should it become necessary, God forbid, to rebuild everything from scratch.
However, for routine use you want a backup that allows selective restores, and there, for simplicity, reliability and portability, it's hard to beat tar. To be fair, dump has many attractive features and may be better suited to use as a backup tool, but the universality of the tar format means backups made on any Unix or Linux system can be restored on any other, anywhere, ever.
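A selective restore from such a tape is straightforward (the file path below is purely an example; note that GNU tar strips the leading "/" from member names at backup time, so the path is given without it):

```shell
# List the archive first to find the exact member name wanted...
tar tvf /dev/nst0
# ...then extract just that one file (illustrative path).
tar xvf /dev/nst0 home/fred/report.txt
```

The file is recreated relative to the current directory, so restoring from / puts it straight back where it came from.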
So, tar it is then. Be warned, though, that life is never quite as easy as it should be, particularly if you're a Unix sys admin! The obvious command for a full system backup would be:
tar cvf /dev/nst0 /
Unfortunately it's not quite that simple. Every Unix version has files that can't be backed up, such as the /proc pseudo-filesystem under Linux. Equally, there are other directories that could be backed up but probably shouldn't be, such as /tmp and /dev. So the right command will be something like:
tar cvf /dev/nst0 -X xclude.list /
where the file "xclude.list" defines a series of filemasks for files to exclude - the likes of /proc, /tmp and /dev discussed above. You may also want to add NFS-mounted filesystems to the list, as performing full backups over the network is one surefire way to get yourself noticed by the users, and not in a nice way.
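Exclude patterns are easy to get subtly wrong, so it's worth a rehearsal on a scratch directory before trusting the list on a real run. A minimal sanity check (all the paths here are made up for the test) might be:

```shell
# Build a tiny tree, exclude its "tmp" subdirectory via -X, and
# confirm the archive listing omits it.
set -e
rm -rf /tmp/xtest
mkdir -p /tmp/xtest/keep /tmp/xtest/tmp
echo a > /tmp/xtest/keep/f1
echo b > /tmp/xtest/tmp/f2
echo 'xtest/tmp' > /tmp/xclude.list
cd /tmp
tar cf /tmp/xtest.tar -X /tmp/xclude.list xtest
tar tf /tmp/xtest.tar    # xtest/keep/f1 is listed; xtest/tmp is absent
```

GNU tar matches exclusion patterns against archive member names, so the pattern is written relative to the tree being archived rather than as an absolute path.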
Next, you'll need a list of the files backed up. And you probably want it all to happen automatically. So wrap the tar command in a shell script (this example is for csh or tcsh) along the lines of:
set BACKUPHOME=/home/sysadmin/backup_logs
set BACKUPLOG=$BACKUPHOME"/weekly."`date -I`".log"
tar cvf /dev/nst0 -X $BACKUPHOME"/xclude.list" / >& $BACKUPLOG
and run it via cron at 2am every Sunday morning. This will store the backup logs in files with names like "weekly.2004-02-25.log" in the directory /home/sysadmin/backup_logs. One neat advantage of using the date in international format ("yyyy-mm-dd") within the log file name is that a normal alphabetic sort will list the logs in time order, oldest first.
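The cron entry in root's crontab would look something like this (the script path is an assumption); the fifth field, day of week, is 0 for Sunday:

```
# min hour day-of-month month day-of-week  command
0 2 * * 0  /home/sysadmin/weekly_backup.csh
```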
The last command in this sequence - the tar itself - also gives us a file, the log, whose date-stamp coincides with the end of the last full backup. It will be used to control subsequent incremental backups.