Double-Take is a server mirroring/replication application that runs on Windows NT 4.0 SP4+, Windows 2000 Server and Windows Server 2003. As you'd expect from this type of application, the purpose is to provide a fail-over capability, so that in the event of a server going "pop" there is a backup with an up-to-date copy of its content and configuration that can automatically take over the workload.

Double-Take is not an all-or-nothing clustering product: it can be used just for file mirroring and replication if that's all you need. By that, we mean pointing it at file sets and letting it replicate them from their source to a safe place.

Setting up file replication is a simple wizard-based task (the control panel applications run under MMC, by the way, and can be run from a client workstation, not just a server). You point it at the source and target locations (which are made available through normal file sharing techniques – so you can replicate a number of sources to a single mirror if you like) and that's about it. It'll make an initial copy of the files from source to destination, then watch the source for updates and copy the changes as needed. There's a simple progress indicator (not unlike the kind of thing you see when formatting a disk under MMC, actually) which gives a straightforward view of the progress of the file-copying activity, and a verification tool lets you compare the source and target if you so wish.
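To make the mechanism concrete, here's a toy sketch of that mirror-then-replicate pattern in Python. It's purely our illustration: the paths and the five-second poll are invented, and the real product captures changes as they happen rather than crudely polling the filesystem like this.

```python
# Toy sketch of the mirror-then-replicate pattern: an initial full copy,
# then a watch loop that re-copies anything that has changed at the source.
# Paths and the poll interval are assumptions for illustration only.
import os
import shutil
import time

SOURCE = r"C:\data"        # hypothetical source file set
TARGET = r"\\backup\data"  # hypothetical target share

def mirror_once(src: str, dst: str) -> None:
    """Copy any file that is missing from, or newer than, the target."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        dst_dir = os.path.join(dst, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst_dir, name)
            # Re-copy when the target copy is absent or older than the source.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)

if __name__ == "__main__":
    mirror_once(SOURCE, TARGET)   # the initial mirror pass...
    while True:                   # ...then watch the source for updates
        time.sleep(5)
        mirror_once(SOURCE, TARGET)
```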

Fail-over is more than mirroring
Obviously many users don't want just to mirror files – they need the backup to be able to take over the entire server's personality and keep things running. This is where the fail-over component comes in. Fail-over works by having the target server keep a frequent watch on the source server, and reconfigure itself to look like the source in the event that communication is lost. You get to configure the monitoring criteria – so you could say: "Monitor every three seconds, and fail over if you've had five consecutive failed attempts".

Rather cleverly, the system can watch multiple IP addresses on the source server, and you can choose whether to fail over when communication is lost to just one IP address, or only when all of them vanish (the latter lets you keep running on the master server if one NIC has failed, so long as the others are working OK).
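Here's a minimal sketch of that heartbeat logic, covering both the "monitor every N seconds, fail over after M consecutive misses" criteria and the one-IP-versus-all-IPs policy. The addresses, the TCP-connect probe and the policy knob are all our assumptions for illustration; they're not Double-Take's actual mechanism.

```python
# A toy heartbeat monitor: probe each of the source server's addresses every
# INTERVAL seconds and declare a fail-over condition after MAX_MISSES
# consecutive misses, honouring an "any"/"all" policy. Addresses, the port
# and the probe method are assumptions, not Double-Take's real mechanism.
import socket
import time

SOURCE_IPS = ["192.168.1.10", "192.168.1.11"]  # hypothetical addresses
INTERVAL = 3      # monitor every three seconds...
MAX_MISSES = 5    # ...and trigger after five consecutive failed attempts
POLICY = "all"    # "any": one dead address triggers; "all": they all must die

def alive(ip: str, port: int = 139, timeout: float = 1.0) -> bool:
    """Crude liveness probe: can we open a TCP connection (NetBIOS port)?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor() -> None:
    misses = {ip: 0 for ip in SOURCE_IPS}
    while True:
        for ip in SOURCE_IPS:
            misses[ip] = 0 if alive(ip) else misses[ip] + 1
        down = [ip for ip in SOURCE_IPS if misses[ip] >= MAX_MISSES]
        if down and (POLICY == "any" or len(down) == len(SOURCE_IPS)):
            print("fail-over condition met; dead addresses:", down)
            break
        time.sleep(INTERVAL)

if __name__ == "__main__":
    monitor()
```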

There's a decent level of configurability to the fail-over itself, too. First, you can choose whether the target server adds the source server's identity to its own, or (presumably where performance matters) replaces its own identity completely with that of the master. You can also choose whether to fail over just the IP addresses, or to add the server name and/or selected file shares too. This kind of configurability is important because it's often the case that the target server is less powerful than the master, and also because in the event of a server failure, you may wish to fail over some business-critical applications but not bother with other, less important stuff that runs on that machine.
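As a hypothetical sketch of how those choices might be captured, here's the option set expressed as a configuration record. The field names are ours, not the product's; the point is simply that identity takeover is piecemeal rather than all-or-nothing.

```python
# A hypothetical configuration record for the fail-over options described
# above. Field names are our own invention, not Double-Take's.
from dataclasses import dataclass, field

@dataclass
class FailoverConfig:
    add_identity: bool = True        # True: add the source's identity to the
                                     # target's own; False: replace it outright
    take_ip_addresses: bool = True   # assume the source's IP addresses
    take_server_name: bool = False   # optionally assume its server name too
    shares_to_fail_over: list[str] = field(default_factory=list)

# e.g. fail over only the business-critical share and leave the rest behind:
cfg = FailoverConfig(take_server_name=True,
                     shares_to_fail_over=["ACCOUNTS"])
```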

You can add scripts to the fail-over configuration screen, and even tell it that manual intervention will be required (let's face it, there may be home-made apps that you simply can't fail over automatically). Of course, you can monitor the fail-over status of everything in real time via the GUI, and when a fail-over occurs you'll get a wad of stuff in the system event log to tell you what went on.
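For a flavour of what such a script might do, here's a hypothetical post-fail-over hook: once the target has assumed the failed server's identity, it restarts a service and logs the event. The service name and log path are made up; "net start" is the standard Windows service-start command of the era.

```python
# A hypothetical post-fail-over script: restart the application service the
# failed server was hosting and record the event. Service name and log path
# are invented for illustration.
import subprocess
import time

def post_failover() -> None:
    # "net start" starts a Windows service by name.
    subprocess.run(["net", "start", "MyLineOfBusinessApp"], check=True)
    with open(r"C:\failover.log", "a") as log:
        log.write(f"{time.ctime()}: assumed source identity, app restarted\n")

if __name__ == "__main__":
    post_failover()
```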

Fail-back needs management too
Recovering from fail-over is always going to be a little fiddly, simply by the nature of what you've done: if your master server goes "pop", you may well end up with a slave server holding the current version of part of your data and a master server holding the current version of the rest. Ninety percent of the task of making recovery sensible, then, is to keep the data for fail-over-enabled functions separate from the data for functions that aren't to be failed over. Do that, and you simply use the product's "Fail-back" function and, if required, run the data restoration widget to get the data stores back in sync.
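To see why that separation matters, consider what the restoration step has to do: find everything that changed on the slave while the master was down, so it can be ferried back before fail-back completes. A rough sketch of that detection pass (the paths are assumptions, and the product's own tool handles all of this for you):

```python
# A toy sketch of the resynchronisation problem: list the files that are
# newer on the slave than on the recovered master. Paths are hypothetical.
import os

MASTER = r"\\master\data"
SLAVE = r"\\slave\data"

def newer_on_slave(master: str, slave: str) -> list[str]:
    """List files the slave updated (or created) while the master was down."""
    changed = []
    for root, _dirs, files in os.walk(slave):
        rel = os.path.relpath(root, slave)
        for name in files:
            s = os.path.join(root, name)
            m = os.path.join(master, rel, name)
            if not os.path.exists(m) or os.path.getmtime(s) > os.path.getmtime(m):
                changed.append(os.path.join(rel, name))
    return changed

if __name__ == "__main__":
    print("\n".join(newer_on_slave(MASTER, SLAVE)))
```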

A final word of warning, then. Don't think for a moment that products like this give you instant, hiccup-free application performance: there will inevitably be some lag between the master server blowing up and your desktop users' systems starting to see the world again. This is common to all replication products, though, because (a) you have to let the two sides spend a few seconds making sure the outage isn't a temporary hiccup; and (b) if you have a server taking on another's IP address and/or share names, your network infrastructure and the workstations themselves will take time to flush their ARP caches and NetBIOS name caches.
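For the curious, that client-side flush amounts to something like the following, using the standard Windows tools of the day (the server address is hypothetical; in practice you usually just wait for the caches to age out on their own):

```python
# What the cache flush amounts to on a Windows client of the era: drop the
# stale ARP entry for the failed server's address and purge the NetBIOS name
# cache, so the client re-resolves to the stand-in server. The address is
# hypothetical.
import subprocess

FAILED_SERVER_IP = "192.168.1.10"  # hypothetical address taken over by the target

subprocess.run(["arp", "-d", FAILED_SERVER_IP], check=False)  # drop stale ARP entry
subprocess.run(["nbtstat", "-R"], check=False)                # purge NetBIOS name cache
```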

Double-Take is, then, a replication and fail-over tool that is sensibly priced and isn't hard to use (though don't forget to think hard about exactly how you structure your filestores and shares).

OUR VERDICT

The product is affordable and usable; the main cost will (and should) be the time you spend planning your fail-over strategy so that (a) the backup service achieves the desired performance, and (b) you don't inadvertently make the fail-back process a nightmare.