Some RAID implementations have no resilience at all – RAID 0, or "data striping", is the classic example. Others have absolute resilience – RAID 1, or "data mirroring", keeps an exact copy of the data on disk A on a second disk, B, in case A should turn up its toes.

By far the most popular general-purpose RAID implementation, RAID 5, uses a hybrid technique to provide resilience (a set of disks can keep humming even if one fails) but without the need to have a spare for every disk in the array. It does this by storing "parity" information over the disk set.

But have you ever wondered just what's meant by this "parity" information? In this RTFM, we'll tell you.

Boolean logic
Let’s start with a quick reminder of some basic Boolean logic – all that AND, OR and NOT stuff. We're interested in the Exclusive OR, or XOR function, which works like this:

Inputs              Output
Input 1   Input 2   Result
0         0         0
0         1         1
1         0         1
1         1         0


The XOR function returns 1 if the two input bits are different, or 0 if not. The “truth table” above shows this for individual binary digits 0 and 1, so let’s now take it to whole bytes. Let's assume we have the values 192 and 128, and we XOR them:

                    Decimal   Binary
Input A             192       11000000
Input B             128       10000000
Result (A XOR B)    64        01000000

We can see here that the result of 192 XOR 128 came out as 64. The neat trick with the XOR function is that it’s self-reversing. For example, let's do 64 XOR 128:

                    Decimal   Binary
Input A             64        01000000
Input B             128       10000000
Result (A XOR B)    192       11000000

An even neater trick is that this self-reversing property works across any sequence of numbers. Imagine we have five numbers – A, B, C, D and E – and that the result of A XOR B XOR C XOR D XOR E is P. We can swap P with any of the numbers in the sequence, and the result of the XOR sum will be the number we swapped out – so, for instance, A XOR B XOR P XOR D XOR E = C.
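
If you want to try this for yourself, here's a quick sketch in Python, where ^ is the XOR operator (the values are arbitrary):

```python
# XOR is self-reversing: XOR the "parity" of a sequence with all but
# one of its members and you get the missing member back.
a, b, c, d, e = 11, 22, 33, 44, 55   # arbitrary example values

p = a ^ b ^ c ^ d ^ e                # the parity of the whole sequence

assert a ^ b ^ p ^ d ^ e == c        # swap p in for c, get c back

# And the byte examples from the tables above:
assert 192 ^ 128 == 64
assert 64 ^ 128 == 192
```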

Moving back to disks
Okay, this is all very well, but how does it apply to disks? It's probably obvious – instead of our numbers A, B, C, etc, let's assume that we have disks A, B, C, etc, and that each disk contains a (large) number of bytes at locations numbered from 0 upwards. And let's use the terminology that byte x on disk Y is referred to as Y[x]. A RAID algorithm calculates its parity as P[x] = A[x] XOR B[x] XOR C[x] … and so on for however many disks are in the system. It stores this parity somewhere, so that should one disk fail, its contents can be recalculated by XOR-ing the parity with the corresponding bytes on the surviving disks.
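
As a sketch of the idea – a toy model rather than a real RAID implementation, which works on whole blocks rather than Python byte strings – the calculation and the rebuild look like this:

```python
from functools import reduce

# Toy model: each "disk" is a bytes object; the parity is the
# byte-by-byte XOR across all of them: P[x] = A[x] XOR B[x] XOR C[x].
disk_a = bytes([192, 10, 7, 255])
disk_b = bytes([128, 20, 9, 0])
disk_c = bytes([64, 30, 11, 128])

def xor_bytes(*disks):
    """XOR corresponding bytes across any number of same-sized disks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*disks))

parity = xor_bytes(disk_a, disk_b, disk_c)

# Disk B fails: drop the parity in alongside the survivors to rebuild it.
assert xor_bytes(disk_a, disk_c, parity) == disk_b
```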

In RAID 3 systems, the parity information is stored on a dedicated parity disk:

Disk A    Disk B    Disk C
Data      Data      Parity
Data      Data      Parity
Data      Data      Parity
Data      Data      Parity

You have as many disks as you wish, plus a dedicated unit that only holds the parity information. It's a nice easy algorithm, but there are performance issues – simply because any time data is written to any data disk in the system, the parity disk has to be updated too, so it's going to get a hammering and becomes the bottleneck.
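
Incidentally, the self-reversing property means the controller doesn't even have to re-read every disk to update the parity – but every data write still implies a parity write. A sketch of that read-modify-write update (the function name is ours, not any real controller's API):

```python
# When one byte of data changes, the parity can be patched using only
# the old parity, the old data and the new data:
#   new_parity = old_parity XOR old_data XOR new_data
def update_parity(old_parity: int, old_data: int, new_data: int) -> int:
    return old_parity ^ old_data ^ new_data

# Example: the parity of 192, 128 and 64, after 128 is rewritten as 130.
old_parity = 192 ^ 128 ^ 64
assert update_parity(old_parity, 128, 130) == 192 ^ 130 ^ 64
```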

RAID 5 solves this problem by simply spreading the parity data across all disks in a logical manner. So our three-disk example would look something like this:

Disk A    Disk B    Disk C
Data      Data      Parity
Data      Parity    Data
Parity    Data      Data
Data      Data      Parity

In this case, then, the parity data for the first block is written to disk C, the next to disk B, the next to disk A, then it starts again at disk C. This gives a much more balanced loading for the disks themselves. There's a term for the illustrated pattern, incidentally – it’s called “left-symmetric”. It'll come as no surprise that right-symmetric parity storage is similar except the sequence is A-B-C-A-B-C and so on, instead of C-B-A-C-B-A.
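
If you fancy seeing that rotation as code, here's one illustrative way to express which disk holds the parity for a given stripe – simplified, of course, from what a real RAID 5 implementation does:

```python
# Parity placement for an n-disk RAID 5 array (disks numbered 0=A, 1=B...).
# Left-symmetric rotates C, B, A, C...; right-symmetric goes A, B, C, A...
def parity_disk(stripe: int, n_disks: int = 3, left_symmetric: bool = True) -> int:
    if left_symmetric:
        return (n_disks - 1) - (stripe % n_disks)
    return stripe % n_disks

# The four stripes in the table above put their parity on C, B, A, C:
assert [parity_disk(s) for s in range(4)] == [2, 1, 0, 2]
```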

RAID 5, as you probably know, protects against the failure of at most one disk. There are more complicated algorithms that store more parity data, calculated in a more complex manner than our simple XOR, and which allow for up to two disks failing at once – this is how RAID 6 operates. To cater for more than two failed disks, though, you need to start pairing disks directly in RAID 1 style, because the overhead of calculating the parity becomes prohibitive to performance once you go past two redundant disks. (There's a flavour of RAID called RAID 10 that does just that: it takes a number of RAID 1 disk pairs and then stripes data across the pairs in RAID 0 style.)