The benefits of SANs (storage area networks) sound tantalizing: efficient centralized storage, high performance, and rock-solid reliability. But SANs are too expensive if you're a small business. Or are they? Let's do the numbers -- you may be surprised.

Take a typical, good-sized small business running 20 servers with an average investment of approximately $5,000 per server. The ever-popular HP ProLiant DL380 G6 or Dell PowerEdge R710 falls in that price range. The flexibility to jam two quad-core processors, tons of RAM, and eight or more hard disks into a neat little space in your rack makes these servers great building blocks for an enterprise of just about any size.

For the purposes of our example, let's say each server has a pair of 300GB SAS disks in a RAID1 set for redundancy. That delivers about 278GB of usable, redundant capacity per server and will kick out around 300 IOPS of transactional performance -- enough to run a Microsoft Exchange implementation for a couple hundred people without bogging the disks down too much.
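As a quick sanity check on those figures, here's the arithmetic in Python. The ~175 IOPS per 15,000-rpm spindle is a rule-of-thumb assumption on my part, not a vendor specification:

```python
# Back-of-the-envelope check of the RAID1 numbers above.
def raid1_usable_gib(disk_gb):
    """A RAID1 pair stores one copy of the data; convert the marketing
    gigabytes (10^9 bytes) of a single disk to binary GiB (2^30 bytes)."""
    return disk_gb * 10**9 / 2**30

usable = raid1_usable_gib(300)   # one 300GB disk's worth of usable space

# Mirrored reads can be served from either disk; writes must hit both,
# so a blended workload lands somewhere under 2x a single spindle.
per_spindle_iops = 175           # assumption: typical 15,000-rpm SAS drive
pair_read_iops = 2 * per_spindle_iops

print(round(usable))             # -> 279 (GiB, i.e. roughly the 278GB cited)
print(pair_read_iops)            # -> 350 read IOPS; blended loads sit nearer 300
```

The gap between "300GB" on the box and ~278GB usable is just the decimal-to-binary conversion, not RAID overhead.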

Of course, some servers will need more capacity or performance, and some will need less. That's precisely the problem that plagues DAS (direct-attached storage) environments. Using DAS, each server is a storage island unto itself. If Server A is a domain controller and needs only 20GB of usable capacity and Server B is a file server that needs 600GB of usable capacity, they can't natively borrow capacity from one another. Moreover, if I have a 50GB SQL database that backs a critical line-of-business application, I might need five or six disks just to provide enough transactional performance to run it -- and end up buying maybe 10 times the capacity I actually need. The result: massive amounts of inefficiency and over-specification, both hallmarks of DAS environments.
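The spindle math behind that over-buying claim can be sketched as follows. The 1,000 IOPS target and the 175 IOPS per 15,000-rpm disk are illustrative assumptions chosen to mirror the example, not figures from a benchmark:

```python
import math

# Illustrates the DAS over-buying problem: spindle count is driven by
# IOPS, but every spindle drags its full capacity along with it.
def disks_for_iops(required_iops, per_disk_iops=175):
    """Disks needed to hit an IOPS target, treating the load as read-heavy
    so each spindle contributes its full per-disk figure."""
    return math.ceil(required_iops / per_disk_iops)

db_capacity_needed_gb = 50      # the critical 50GB SQL database
required_iops = 1000            # assumed transactional load
spindles = disks_for_iops(required_iops)

# Six 146GB disks in a RAID10 layout form three mirrored pairs of capacity.
usable_gb = (spindles // 2) * 146
overbuy = usable_gb / db_capacity_needed_gb

print(spindles, usable_gb, round(overbuy, 1))   # -> 6 disks, 438GB, 8.8x
```

Six spindles to run a 50GB database leaves you holding roughly nine times the capacity you set out to buy -- the "maybe 10 times" figure above.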

If it's true that we're overbuying, why can't we just purchase less? That's easier said than done. First off, there's a floor to the size of SAS disk that we can buy; the smallest commonly available is 146GB (73GB disks are still around, but not for long). Thus, our domain controller is massively overspecified from the get-go. Sure, you could buy SATA disks -- inexpensive options in both HP and Dell's product lines -- but neither manufacturer will warrant these disks for more than one year. Add the fact that 7,200-rpm SATA disks provide less than half the transactional performance of a 15,000-rpm SAS drive and you can see why SATA disks are rarely a great choice.

In fact, we're spending a pretty penny on all this DAS. Each server's pair of redundant disks is around $1,300 of the $5,000 overall hardware cost, or $26,000 across the entire group. That's more than a quarter of the cost of the whole room of servers. Even if you find ways to shave this by using smaller disks and substituting 10,000-rpm SAS disks for 15,000-rpm models, you're still going to spend around $20,000 for 20 servers' worth of storage. With RAID1 mirror sets, we're talking between $3.60 and $4.70 per gigabyte of usable storage, much of which we've already determined we're not going to be able to use. Factor in that stranded capacity, and the effective cost per usable gigabyte ends up being perhaps two to three times higher.
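Those per-gigabyte figures fall straight out of simple division; this sketch reruns the arithmetic using the article's own numbers:

```python
# Reproduces the DAS cost-per-gigabyte arithmetic above. All figures come
# from the article's example; nothing here is a quoted vendor price.
servers = 20
usable_gb_per_server = 278           # one 300GB disk's worth, per RAID1 pair

for total_cost in (26_000, 20_000):  # full price vs. the trimmed-down estimate
    per_gb = total_cost / (servers * usable_gb_per_server)
    print(f"${total_cost:,} -> ${per_gb:.2f}/GB")
# -> $26,000 -> $4.68/GB
# -> $20,000 -> $3.60/GB
```

And because much of that capacity sits idle on overspecified islands, dividing by the capacity actually consumed rather than the capacity purchased pushes the effective figure two to three times higher still.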

It just so happens that number is roughly equivalent to the cost per gigabyte of a solid SAN. This brings us to a typical SAN example: a dual-controller Dell EqualLogic PS4000E with 16 SATA disks, at 500GB and 7,200 rpm each. That will yield about 5.4TB of usable space and provide somewhere in the neighborhood of 1,200 IOPS of transactional performance.
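The article states only the end results. One plausible way to arrive at them -- a RAID50 layout with two hot spares and roughly 85 IOPS per 7,200-rpm SATA spindle, both assumptions on my part rather than stated configuration -- works out like this:

```python
# Sanity-checks the EqualLogic figures above under an ASSUMED layout:
# RAID50 with two hot spares (i.e. two 7-disk RAID5 sets).
disks, disk_gb = 16, 500
spares = 2                       # assumed hot spares
parity_disks = 2                 # one parity disk per RAID5 set
data_disks = disks - spares - parity_disks

raw_usable_tb = data_disks * disk_gb / 1000
active_spindles = disks - spares
est_iops = active_spindles * 85  # assumed ~85 IOPS per 7,200-rpm SATA drive

print(raw_usable_tb, est_iops)   # -> 6.0 1190
```

Six terabytes raw shrinks to roughly the quoted 5.4TB after formatting and reserve overhead, and ~1,190 spindle IOPS lands right at the article's 1,200 figure.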

Sure, those specs seem worse than the total feeds and speeds of the DAS scenario. But the bottom line is that we had to buy all of that SAS performance to cover the individual, transient performance and capacity requirements of 20 little islands. If we combine them into a single pool, the provisioning inefficiency drains away and we're left buying what the organization as a whole actually needs in terms of capacity and performance. Need 20GB for a server? No problem -- give it 20GB. In fact, if it needs only 10GB right this second, give it that and grow it to 20GB on the fly four months from now when the extra space is actually required.

Better still, we're able to take advantage of all of the cool features SANs bring to the table: application-aware snapshots, direct-from-SAN backups, highly reliable dual controllers, lots of high-speed cache, usage trending, and the prospect of easy, block-level, site-to-site replication down the road, not to mention that shared storage boasts tremendous high-availability features, which just come along for the ride. In the end, we're trading wasted performance and capacity for a feature set that is impossible to find using DAS.

Sounds like a pretty good deal to me. Before you dismiss SAN technology as an expensive indulgence, do the math and see what it would really cost. It may fit your environment better than you think.