On the face of it, computer storage sounds like the easiest thing in the world. After all, most of us don’t even think about the technology of our PCs’ internal hard disks – we just put stuff on them and expect it to stay there. If we’re particularly sensible we make the occasional copy of our stuff – perhaps on a CD-R – because hard disks sometimes break, but that’s just part of life.

In anything but the most trivial installation, of course, storage becomes more involved – to the level of obsession if you have a huge data centre that has to keep humming 99.9996 percent of the time. In this feature, then, we’ll look at the practicalities of storage from the ground up.

Disks
In most implementations, we have the choice of IDE (also known as ATA) and SCSI disks. IDE disks have been around since the IBM AT (of which Noah took a pair onto the Ark), and have their controller logic built into the drive itself. Early IDE hard disks used the computer’s processor to handle the transfer of data between computer and disk, which became a bottleneck, but newer disks use DMA, which relieves the load on the CPU by not involving it in data transfers at all. Although IDE disks are considered slow, modern versions such as “Ultra DMA” permit data transfers of up to 100Mbyte/sec, so don’t be fooled. Their main drawback for larger installations is that you can only connect two drives to each IDE channel, and most motherboards provide just two channels.

SCSI disks are more expensive because they’re aimed at high-end applications, and they have a number of benefits over IDE. The main bonus with SCSI is that, depending on which flavour you have, you can connect anything between eight (“Narrow SCSI”) and 32 (“Very Wide SCSI”) devices to a single SCSI connector. SCSI has always used DMA-style access (and as such has been more expensive, because you’ve had to buy a specialist controller to do the work). Then there’s SCSI’s ability to queue commands – the designers recognised that in big installations with many disks you’ll probably have many commands arriving at once, and catered for this from day one. Finally, there’s the build-quality aspect: because big installations usually have to be reliable, manufacturers generally build SCSI kit to much tighter tolerances, which means increased manufacturing costs.

RAID
Although disk technology is fast and reliable, it’s generally not regarded as being fast enough or, more importantly, reliable enough. Anyone who’s ever experienced a disk failure will have regarded it as a pain in the backside (if they had a backup) or a complete disaster (if they didn’t). RAID is a way of alleviating the hassle of failed disks.

There are six classic RAID levels (0 to 5), though people only really use three of them, as the others are a bit esoteric.

RAID 0
… is also known as data striping. The idea is to multiply throughput by having multiple disks and writing a portion of each data item onto each one. Moving the read/write head of a disk takes many hundreds of times longer than actually writing the data, so if you have two disks whose heads move simultaneously, you can write data twice as fast as with one disk (for each head move you write twice the data). You can, of course, have any number of disks, and the speedup multiplies accordingly. The downside of RAID 0 is that if one disk fails, it takes with it a little bit of each file you wrote, so when you’ve replaced the disk you have to recover the entire data set, not just what was on the dead disk.
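
If you like to see the mechanics spelled out, here’s a minimal sketch in Python of how a striped array might decide where each chunk of data lives. The chunk size, disk count and function name are all made up for illustration – real striping is done by controller firmware or the operating system.

# Hypothetical illustration of RAID 0 block placement, assuming a
# fixed chunk ("stripe unit") size and two disks.
STRIPE_SIZE = 64 * 1024   # 64KB chunks
NUM_DISKS = 2

def locate(logical_offset):
    """Map a logical byte offset to (disk number, offset on that disk)."""
    chunk, within = divmod(logical_offset, STRIPE_SIZE)
    disk = chunk % NUM_DISKS                              # chunks rotate round-robin
    disk_offset = (chunk // NUM_DISKS) * STRIPE_SIZE + within
    return disk, disk_offset

# A 256KB write lands as four 64KB chunks alternating between disk 0
# and disk 1, so both heads can be moving (and writing) at once.
for offset in range(0, 256 * 1024, STRIPE_SIZE):
    print(offset, locate(offset))

It also makes the failure mode plain: chunks of every sizeable file end up on every disk, so there’s no such thing as losing “just one disk’s worth” of files.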

RAID 1
… takes the opposite approach, and says: “My disks are lightning fast, but what if one breaks?”. Every item of data is written simultaneously to two disks – so there’s no performance penalty, but if a disk fails you don’t lose the data. The downside here is that you need twice as many disks (and hence twice the ££) as you would in a single-disk setup.

RAID 5
… is a compromise between RAID levels 0 and 1. You have a number of disks, and both the data and “parity” information are spread across all of them. The upshot is that you get some of the performance bonus of striping, but if a disk turns up its toes no data is lost – whatever was on the dead disk can be rebuilt from the data and parity held on the survivors.
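
The “parity” trick sounds mysterious, but it’s just exclusive-OR arithmetic. The following is a toy Python sketch – purely illustrative, nothing like a real controller – of why one missing disk costs no data:

# Toy example: the parity block is the XOR of the data blocks in the
# same stripe, so any single missing block can be rebuilt by XORing
# everything that survives. (In RAID 5 the parity block for each
# stripe lives on a different disk, so no one disk is a bottleneck.)

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]        # one stripe across three data disks
parity = xor_blocks(data)                 # ...plus a parity block on a fourth

# Disk 1 dies, taking b"BBBB" with it; rebuild it from the survivors.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"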

RAID 2, 3 and 4
… do exist but aren’t widely used. RAID 2 is a bit like RAID 0 but stores parity instead of data on some of the disks; it’s slow, because the calculation of these parity “error” codes is complex. RAID 3 is again like RAID 0, but with parity information stored on a dedicated disk (which is fine until that disk dies, at which point your error-correction ability goes out of the window). RAID 4 is a derivative of RAID 3 that improves read performance, but at the expense of write performance.

Software vs. hardware RAID
Obviously, if you want to use RAID, you’ll need some technology that knows how to write to the disks (and rebuild them when you unplug a dead one and stuff in a replacement). There are two ways: either you buy a RAID adaptor – a piece of hardware that does it all for you – or you use a standard SCSI card and let the operating system do the job of RAID management. The usual compromise rules apply: if you do it in hardware, it’s more expensive but faster; if you do it in software, the computer’s processor spends some of its time controlling disks instead of doing other work. That said, if you don’t particularly mind the server’s processor doing the RAID work, software RAID is often an economical yet perfectly respectable way to go.
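
To make the trade-off concrete, here’s a deliberately simplified Python sketch of what “letting the operating system do the job” means for a mirrored (RAID 1) pair. The classes are hypothetical – real software RAID lives inside the OS – but the point stands: every duplicate write is work the host CPU does itself, which is exactly what a hardware RAID adaptor would otherwise take off its hands.

class Disk:
    """Stand-in for a physical disk; real I/O goes through the driver stack."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_no, data):
        self.blocks[block_no] = data

class SoftwareMirror:
    """A RAID 1 pair managed entirely in host software."""
    def __init__(self, disk_a, disk_b):
        self.disks = [disk_a, disk_b]

    def write(self, block_no, data):
        # The host CPU runs this loop for every write it mirrors;
        # a hardware adaptor would accept one write and duplicate it itself.
        for disk in self.disks:
            disk.write(block_no, data)

mirror = SoftwareMirror(Disk(), Disk())
mirror.write(0, b"payroll records")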

Hot-plug
One thing worthy of a mention is that if you’re using anything but RAID 0, you’ll want to use hot-plug disks – ones that you can pull out and replace without turning the machine off. If you’re trying to maximise uptime, it’s a bit pointless having to turn off the machine and remove the lid to change a disk in a RAID 1/5 setup that could easily have kept on humming.

Internal or external?
Although in the past people generally had a pile of disks physically inside the computer, we now have the option of using external disk arrays instead. These are really just big metal boxes with connections for the disks and power supplies (usually more than one, to allow for failures), which connect to the main host via SCSI or Fibre Channel links. SCSI is cheaper than Fibre Channel, but the latter has advantages: you can have the disk units up to 30m from the server (considerably further over optical fibre); it’s less susceptible to electrical interference; and you can connect more than 32 devices.

The decision of whether to use internal or external disks is really one of: can I fit them all in the box? Most servers run out of space with eight or 10 disks, but you can stack up external arrays almost until you’ve filled the room.

Storage Area Networks
Once we have split the disk storage from the computer itself, we have another question to consider – namely, can we be clever about how we handle the communication between computer and disk unit? The answer is “yes”, and we do it with SAN technology.

Because we’ve split the disks from the computer, we have the opportunity to create a whole new network dedicated to managing the storage of data. At the most basic level, you can connect a set of storage units to a Fibre Channel switch and let the switch do the work of getting commands between the computer and the disk units. This is a particularly attractive approach when you have many disk units – you’d otherwise be limited by the number of Fibre Channel cards you could cram into the server. It’s also a useful way to cater for server failures: with both the primary server and a dormant backup machine connected into the network of storage devices, the backup can wake up after a failure of the primary and be guaranteed immediate access to all the storage units, without anyone needing to think about re-plugging.

At the top end of SAN concepts, technologies such as FCIP and iSCSI allow us to decouple the disk and processing facilities completely, potentially even putting them on different sites and linking them through private or VPN WAN connections. So, with the growth in high-speed WAN (or, more probably for this type of application, MAN) links, it’s perfectly straightforward to maintain an off-site storage farm as a replica of the on-site one, to cater for disasters within the office building itself.

Summary
Storage is as complicated or as simple as you want to make it. As with most technologies in the networking realm, concepts such as Fibre Channel, SCSI and external disk arrays have become popular enough that network managers are growing comfortable employing something that was rocket science five years ago. But don’t be too tempted to over-design your storage implementation for the sake of it. The storage manufacturers will try to sell you two of everything (for resilience) and the fastest of everything (for performance), so think carefully before you sign the order for a million-pound data centre installation when all you wanted was software-based RAID 5 for your Exchange server.