Emulex has demonstrated Fibre Channel running over Ethernet (FCoE) at Storage Networking World in Frankfurt.

This is an opening public salvo in a revolution that seems to have no enemies at all. HBA vendors, SAN switch vendors, server vendors, storage array vendors: they all welcome FCoE.

New converged enhanced Ethernet

The new way of carrying Fibre Channel has come about because a datacentre-class converged enhanced Ethernet (CEE) standard is being developed. It will operate at 10 Gbit/s and be more reliable than current Ethernet: CEE will not drop packets or frames of data (it is termed lossless) and will have better flow control.

Currently the way of using Ethernet for storage networking is to layer TCP/IP onto the Ethernet protocols, as iSCSI does, and to compensate for Ethernet's 'lossy' characteristic by having TCP/IP retransmit packets that fail to travel from source to target. With CEE there is no need for this, and the Fibre Channel protocol can be mapped directly onto Ethernet for the first time.
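That direct mapping can be pictured as a simple wrapping step. The sketch below is purely illustrative (the helper name and the 0x8906 EtherType, the value later assigned to FCoE, are assumptions rather than anything Emulex has published): a raw FC frame becomes the payload of an ordinary Ethernet frame, with no TCP/IP headers in between.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # assumed here for illustration; identifies the payload as FCoE


def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame directly in an Ethernet frame.

    No TCP/IP headers are involved: the FC frame rides as the Ethernet
    payload, which is only safe on a lossless (CEE-class) link.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame


frame = encapsulate_fc_frame(b"\x01" * 6, b"\x02" * 6, b"FC-FRAME-BYTES")
assert frame[12:14] == b"\x89\x06"  # EtherType field marks the frame as FCoE
```

The contrast with iSCSI is that there is nothing to retransmit and no TCP state to maintain; the lossless link underneath does the work.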

The flow control can be viewed as being analogous to flow control on a motorway (highway) with traffic lights controlling traffic on the on-ramps.
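That traffic-light picture can be modelled in a few lines. The toy class below is a hedged sketch, not real CEE (which uses per-priority pause signalling on the wire): the receiver raises a 'pause' before its buffer can overflow, so the sender waits at the on-ramp rather than the link dropping frames.

```python
from collections import deque


class LosslessLink:
    """Toy model of CEE-style flow control: the receiver signals 'pause'
    before its buffer can overflow, so no frame is ever dropped."""

    def __init__(self, buffer_limit: int):
        self.buffer = deque()
        self.buffer_limit = buffer_limit

    @property
    def paused(self) -> bool:
        # The 'red light' on the on-ramp: the sender must wait.
        return len(self.buffer) >= self.buffer_limit

    def send(self, frame) -> bool:
        if self.paused:
            return False  # sender holds the frame; nothing is lost
        self.buffer.append(frame)
        return True

    def drain(self):
        # Receiver processes one buffered frame, easing the backpressure.
        return self.buffer.popleft() if self.buffer else None


link = LosslessLink(buffer_limit=2)
assert link.send("f1") and link.send("f2")
assert not link.send("f3")  # paused, not dropped
link.drain()
assert link.send("f3")      # light turns green again
```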

Joe Gervais, a senior marketing director at Emulex, expects an FCoE standard to come into being in 12 to 18 months time: "It's moving very, very quickly." Both Cisco and Brocade were invited to submit proposals to IEEE for a standard, 802.1au. Tom Hammond-Doel, vice chair of the Fibre Channel Industry Association (FCIA) said there was a 90 percent overlap between them. The standard is being created by working out the other ten percent, much of it relating to the precise frame format.

Hammond-Doel said that the FCIA and the T11 standards committee reached a consensus on the frame format in August this year. The FCIA expects a stable FCoE standard in the first half of 2008 with OEM seed qualifications starting in the second half of the year. Shippable FCoE product can be expected in 2009.

He points out that FCoE is not routable: there is no TCP/IP layer to do the routing, so FCoE is restricted to Ethernet LANs. To send Fibre Channel over long distances you would need FCIP (Fibre Channel over IP), which complements FCoE.

Emulex demo

At SNW in Frankfurt Emulex showed FCoE working by sending Fibre Channel commands from a host server to a storage array across an Ethernet link. Currently a user wishing to have a server send out both Fibre Channel and Ethernet messages would need two interfaces: a host bus adapter (HBA) for Fibre Channel and a NIC (network interface card) for Ethernet.

Emulex fitted the server with a converged card called a Converged Network Adapter (CNA) which connects to 10 Gigabit Ethernet. It is based on technology from Emulex's Arohi acquisition of a year or so ago, with additional technology from a private company, Nuova Systems, interfacing the Arohi part of the CNA to Ethernet.

Cisco has an 80 percent stake in Nuova Systems, which has yet to announce a product. The CNA carries both Fibre Channel traffic to a storage target and general Ethernet traffic for the LAN. To do this it has to filter incoming and outgoing traffic and send it through the card to the appropriate external or internal port.
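That filtering step amounts to demultiplexing on the Ethernet frame's EtherType field. The sketch below is a guess at the logic rather than Emulex's implementation; the function name and the EtherType constants (0x8906 for FCoE, 0x0800 for IPv4) are assumptions used only to illustrate the split between the storage path and the general LAN path.

```python
FCOE_ETHERTYPE = 0x8906  # assumed value later assigned to FCoE
IPV4_ETHERTYPE = 0x0800  # ordinary IP traffic for the LAN


def dispatch(frame: bytes) -> str:
    """Route an incoming Ethernet frame to the CNA's FC engine or its
    ordinary NIC path, based on the EtherType at bytes 12-13."""
    ethertype = int.from_bytes(frame[12:14], "big")
    return "storage" if ethertype == FCOE_ETHERTYPE else "lan"


fcoe_frame = b"\x00" * 12 + FCOE_ETHERTYPE.to_bytes(2, "big") + b"fc-payload"
ip_frame = b"\x00" * 12 + IPV4_ETHERTYPE.to_bytes(2, "big") + b"ip-payload"
assert dispatch(fcoe_frame) == "storage"
assert dispatch(ip_frame) == "lan"
```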

Gervais points out that FCoE is not a computationally intensive protocol: "unlike iSCSI with its TCP/IP overhead." With FCoE there is a direct mapping from FC to Ethernet, so the CNA does not do much processing, nor is there a need for the equivalent of a TCP/IP offload engine (TOE). He adds: "FCoE is storage-driven. There is no general networking need for it."

For VMware and Emulex users, Emulex's virtual HBA technology will be carried through to the CNA.

Gervais also points out that CEE could be used as a cluster inter-connect in datacentres.


Gervais says that, contrary to the traditional Ethernet evolutionary step of increasing speed in tenfold jumps, the next Ethernet speed standard being worked on is 40 Gbit/s. This is the next Ethernet speed step for server connects. He expects the next Ethernet backbone step to be 100 Gbit/s, which over the long term would come to servers too.

In the Fibre Channel world we are in the midst of an 8 Gbit/s adoption at the network edge, with 10 Gbit/s being used for some inter-switch links (ISLs). Hammond-Doel expects the next FC speed standard to be 16 Gbit/s, perhaps around 2011, followed by 32 Gbit/s. Storage network speed steps double, he says, unlike general networking speed steps, which have traditionally jumped tenfold, as we have seen.

ISL FC speeds could double up from 10 Gbit/s to 20 and then 40 Gbit/s instead of falling away and having aggregated edge links used for ISLs.

For FCoE the actual FC speed standard does not matter, the link speed being dependent upon the Ethernet carrier.

FCoE use and adoption

It is likely that FCoE will be a strategic data centre purchase, according to Gervais. We might expect the use of iSCSI for storage networking in small and medium enterprises to continue until CNAs become affordable. The point is that, with FCoE, Fibre Channel exists only as a protocol layered on Ethernet, so it is possible to have a seemingly Fibre Channel-based storage area network (SAN) with no actual SAN fabric (the infrastructure of FC cables and switches) at all.

In this case there would be no need for FC-based management products to control and monitor FC ports in the fabric: the complex and costly FC infrastructure and its management simply goes away. The SAN management tools would remain, as would the use of Fibre Channel disks and possibly Fibre Channel networks inside storage devices.

This suggests that FCoE will initially be used in datacentres to radically increase the number of servers capable of connecting to a SAN. It will do there what iSCSI has generally failed to do and expand the FC SAN's server reach. The cost of the CNA needs to be affordable for this to happen as, conceivably, hundreds of blade servers could need connecting via FCoE to existing SANs. In theory, SAN connectivity could be extended to the entire server estate in a datacentre by standardising on 10 Gbit/s CEE.

From the point of view of power and cooling we might note that an FCoE-based SAN fabric doesn't need to involve FC switches and there might be a consequent decrease in datacentre power and cooling costs.

If and when FCoE and CEE costs decrease, FCoE could replace iSCSI, freeing the processing cycles iSCSI spends on TCP/IP for application use.

The beauty of FCoE is that it enables the seamless extension of existing FC SANs. It also helps encourage the gradual transition to an all-Ethernet world in the datacentre.

It should be pointed out that Brocade is basing its datacentre fabric and datacentre backbone switch ideas heavily on FCoE. QLogic, the other leading HBA vendor, is also aboard the FCoE train. There don't appear to be any supplier hold-outs.

FCoE adoption does not require radical steps. However, the combination of the underlying Ethernet upgrade and the ability to seamlessly extend an FC SAN to vastly more servers, without the expense of additional FC ports and switches on the one hand or iSCSI processing cycles on the other, is revolutionary. This is one revolution, though, where all the affected parties are saying: "Bring it on."