In the beginning there was Fibre Channel (FC), and it was good. If you wanted a true SAN -- versus shared direct-attached SCSI storage -- FC was what you got. But FC was terribly expensive, requiring dedicated switches and host bus adapters, and it was difficult to support in geographically distributed environments. Then, around six or seven years ago, iSCSI hit the SMB market in a big way and slowly began its climb into the enterprise.

The intervening time has seen a lot of ill-informed wrangling about which one is better. Sometimes, the iSCSI-vs.-FC debate has reached the level of a religious war.

This battle has been the result of two main factors. First, the storage market was split between big incumbent storage vendors with heavy investments in FC and younger vendors pushing low-cost, iSCSI-only offerings. Second, admins tend to like what they know and distrust what they don't. If you've run FC SANs for years, you are likely to believe that iSCSI is a slow, unreliable architecture and would sooner die than run a critical service on it. If you've run iSCSI SANs, you probably think FC SANs are massively expensive and a bear to set up and manage. Neither view is entirely true.

Now that we're about a year down the pike after the ratification of the FCoE (FC over Ethernet) standard, things aren't much better. Many buyers still don't understand the differences between the iSCSI and Fibre Channel standards. Though the topic could easily fill a book, here's a quick rundown.

The fundamentals of FC

FC is a dedicated storage networking architecture that was standardised in 1994. Today, it is generally implemented with dedicated HBAs (host bus adapters) and switches - which is the main reason FC is considered more expensive than other storage networking technologies.

As for performance, it's hard to beat the low latency and high throughput of FC, because FC was built from the ground up to handle storage traffic. The processing cycles required to generate and interpret FCP (Fibre Channel Protocol) frames are offloaded entirely to dedicated low-latency HBAs. This frees the server's CPU to handle applications rather than talk to storage.

FC is available in 1Gbps, 2Gbps, 4Gbps, 8Gbps, 10Gbps, and 20Gbps speeds. Switches and devices that support 1Gbps, 2Gbps, 4Gbps, and 8Gbps speeds are generally backward compatible with their slower brethren, while the 10Gbps and 20Gbps devices are not, because they use a different frame encoding scheme (these two speeds are generally reserved for interswitch links).
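
To put those raw speeds in perspective, here's a rough back-of-the-envelope calculation of usable payload bandwidth from the nominal line rates. It's only a sketch: it assumes 8b/10b encoding for the 1Gbps through 8Gbps speeds and 64b/66b for 10Gbps FC, and it ignores frame and protocol overhead, so treat the output as ballpark figures.

# Rough usable-bandwidth estimate for common FC link speeds.
# Line rates are the nominal published values; real-world throughput
# will be somewhat lower once framing and protocol overhead are counted.
fc_links = {
    "1GFC":  (1.0625,   8 / 10),   # 8b/10b encoding
    "2GFC":  (2.125,    8 / 10),
    "4GFC":  (4.25,     8 / 10),
    "8GFC":  (8.5,      8 / 10),
    "10GFC": (10.51875, 64 / 66),  # 64b/66b encoding
}

for name, (gbaud, efficiency) in fc_links.items():
    mb_per_sec = gbaud * 1e9 * efficiency / 8 / 1e6
    print(f"{name}: roughly {mb_per_sec:.0f} MB/s of payload bandwidth")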

In addition, FCP is optimised to handle storage traffic. Unlike protocols that run on top of TCP/IP, FCP is a significantly thinner, single-purpose protocol that generally results in lower switching latency. It also includes a built-in, credit-based flow control mechanism that ensures data isn't sent to a device (either storage or server) that isn't ready to accept it. In my experience, you can't achieve the same low interconnect latency with any other storage protocol in existence today.
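
To illustrate what credit-based flow control means in practice, here's a toy model in Python -- purely conceptual, with made-up names; real FC buffer-to-buffer credits are negotiated at login and handled in hardware by the HBAs and switches.

# Toy model of credit-based flow control, loosely analogous to FC's
# buffer-to-buffer credit scheme. A frame is transmitted only when the
# receiver has advertised a free buffer, which is why FC links don't
# drop frames under congestion.
class Link:
    def __init__(self, credits):
        self.credits = credits    # receive buffers the far end has granted

    def send(self, frame):
        if self.credits == 0:
            return False          # hold the frame; never send without a credit
        self.credits -= 1         # one credit spent per frame
        return True

    def receiver_ready(self):
        self.credits += 1         # far end freed a buffer (an R_RDY primitive)


link = Link(credits=4)
for i in range(6):
    sent = link.send(f"frame {i}")
    print(f"frame {i}: {'sent' if sent else 'held, waiting for credits'}")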

Yet FC and FCP have drawbacks - and not just the high up-front cost. One is that supporting storage interconnectivity over long distances can be expensive. If you want to configure replication to a secondary array at a remote site, either you're lucky enough to afford dark fibre (if it's available) or you'll need to purchase expensive FCIP (FC over IP) distance gateways.

In addition, managing an FC infrastructure requires a specialised skill set, which can make administrator experience an issue. For example, FC zoning makes heavy use of long hexadecimal World Wide Node and Port Names (similar in role to MAC addresses in Ethernet), which can be a pain to manage if frequent changes are made to the fabric.
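
To give a sense of what zoning boils down to conceptually, here's an illustrative Python model -- not a real switch configuration, and the WWPNs and zone names below are invented -- in which a zone is simply a named set of port WWNs allowed to talk to each other.

# Illustrative model of WWN-based zoning (invented WWPNs and zone names).
# On a real fabric this lives in the switch's active zone configuration;
# two ports can communicate only if they share at least one zone.
zones = {
    "zone_dbhost1_array1": {
        "21:00:00:e0:8b:12:34:56",   # hypothetical server HBA port
        "50:06:01:60:41:e0:aa:bb",   # hypothetical array target port
    },
    "zone_mailhost_array1": {
        "21:00:00:e0:8b:ab:cd:ef",
        "50:06:01:60:41:e0:aa:bb",
    },
}

def can_communicate(wwpn_a, wwpn_b):
    """True if the two ports share at least one zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

print(can_communicate("21:00:00:e0:8b:12:34:56", "50:06:01:60:41:e0:aa:bb"))  # True
print(can_communicate("21:00:00:e0:8b:12:34:56", "21:00:00:e0:8b:ab:cd:ef"))  # False

Multiply that by dozens of hosts, and every change to the fabric means touching those identifiers -- which is where the management pain comes in.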