A Fibre Channel (FC) link relies on buffers at each end, and data transmission is paced by the availability of those buffers. If the latency of the link gets too high, the link ceases to operate. Over normal data centre or campus-type distances this is not a problem. But when the link stretches to hundreds of kilometres, the added latency becomes an increasing problem, and eventually it stops the protocol working and the link breaks.

In effect, FC links are restricted in the distance over which they can operate. With today’s heightened awareness of disaster recovery and business continuity there is a perceived and growing need for links of 1,500 kilometres or more – even cross-continent distances – and high-speed data replication across them. An East coast US SAN needs to talk to a West coast US SAN. But Fibre Channel can’t do this.

Instead a bridging technology such as Fibre Channel over IP (FCIP) has to be used. Why can’t Fibre Channel itself be extended? To answer that we have to look at the notion of buffer credits.

Buffer credits
When Fibre Channel transmits data it is put into a frame. A frame can carry between 512 bytes and 2KB of data, depending upon the HBA and the target FC device. Frames can be joined together into sequences for large block transfers, and by combining sequences up to 128MB of data can be sent across the link using just one command – at a 2KB frame size, that is some 65,536 frames.

At the HBA the data to be sent is built up into a frame, or a collection of frames, and sent across the link, where the target FC device receives it into a buffer area. The target then tells the upstream application – storage device or server application – that the data is ready. Meanwhile the FC sender transmits another data block, and then another. There has to be a method of preventing the target’s buffer area from overflowing, and that is the idea behind buffer credits.

When the target’s buffer is full it extends no buffer credits to the sending HBA; when there is space for one more frame it extends one buffer credit. Sending and target FC devices keep track of the target’s buffer space by means of these credits. The mechanism governs the maximum amount of data that can be ‘in flight’ at any one time, throttling the transmission rate so that it adapts to the target’s ability to handle the received data.
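As a rough illustration of the credit mechanism, here is a toy model in Python – a sketch, not real FC driver code; the class names and the drain_one() step are invented for the example. The sender spends one credit per frame and stalls at zero; the target hands a credit back each time it frees a buffer:

class Target:
    def __init__(self, slots):
        self.slots = slots        # receive buffers, one frame each
        self.buffers = []
    def receive(self, frame):
        # Credits guarantee this can never overflow the buffer area.
        assert len(self.buffers) < self.slots
        self.buffers.append(frame)
    def drain_one(self):
        # Upstream application consumes a frame, freeing one buffer;
        # a credit (the 'receiver-ready' in real FC) goes back to the sender.
        self.buffers.pop(0)
        return 1

class Sender:
    def __init__(self, credits):
        self.credits = credits    # granted by the target at fabric login
    def send(self, target, frame):
        if self.credits == 0:
            return False          # no credit, no send: transmission throttled
        self.credits -= 1         # one credit spent per frame
        target.receive(frame)
        return True

target, sender = Target(slots=4), Sender(credits=4)
sent = sum(sender.send(target, i) for i in range(6))
print(sent)                       # 4 – the last two frames must wait
sender.credits += target.drain_one()
print(sender.send(target, 99))    # True – a freed buffer released one credit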

Buffer credit amounts are communicated at fabric login. This is a hardware mechanism that ensures guaranteed delivery and avoids the need for software error correction. One buffer credit enables the sending HBA to transmit one frame – generally 1 or 2KB – before a ‘receiver-ready’ acknowledgement signal must come back. If the target’s buffer has space sufficient for 100 buffer credits then the sender can transmit 100 frames before having to wait for a receiver-ready ack.

Buffer credits also have an influence on the link distance. Take a 1Gbit/s optical fibre. Light travelling through it has a latency of around 5 nanoseconds per metre – the time the signal takes to cross that metre. So it takes data 50 microseconds to traverse a 10km optical cable and another 50 microseconds for the acknowledgement to come back; that’s 100 microseconds for the round trip.

A 2KB FC frame takes only about 20 microseconds to transmit at 1Gbit/s – just a fifth of the 100 microsecond round trip, so a single credit would leave the link running at a fifth of its bandwidth. Five 2KB frames must therefore be in flight to keep the 10km link full, meaning five buffer credits and a 10KB buffer at a 2KB frame size. A longer 100km cable needs 50 buffer credits to fill it (a 100KB buffer). A 1000km cable needs 500 buffer credits (a 1MB buffer).
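Those figures can be reproduced with a back-of-the-envelope calculation. The Python sketch below assumes 5 nanoseconds per metre of fibre, the rounded 20 microsecond serialization time for a 2KB frame at 1Gbit/s used above, and one credit per frame; credits_to_fill() is our own illustrative name, not a standard formula:

import math

NS_PER_METRE = 5                            # signal latency in optical fibre

def credits_to_fill(distance_km, rate_gbps=1.0, frame_bytes=2048):
    # Round trip: frame out, 'receiver-ready' back.
    rtt_us = 2 * distance_km * 1000 * NS_PER_METRE / 1000
    # Serialization time, using the text's rounding: 2KB ~ 20us at 1Gbit/s.
    frame_us = (frame_bytes / 2048) * 20 / rate_gbps
    return math.ceil(rtt_us / frame_us)

for km in (10, 100, 1000):
    n = credits_to_fill(km)
    print(f"{km}km at 1Gbit/s: {n} credits, {n * 2}KB of buffer")
# 10km: 5 credits (10KB); 100km: 50 credits (100KB); 1000km: 500 credits (1MB)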

Normally a Fibre Channel port has up to 100 buffer credits. A common rule of thumb at 2Gbit/s is that one credit enables one kilometre of link distance. So 100 credits means a maximum link distance of around 100km.

Doubling the bandwidth from, say, 1Gbit/s to 2Gbit/s halves the frame’s transit time, so twice as many buffer credits are needed to fill the link. Thus a 100km 2Gbit/s FC link needs 100 buffer credits where a 1Gbit/s link needs 50. A 1000km 4Gbit/s link needs 2,000 buffer credits and a 4MB buffer.
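Plugged into the credits_to_fill() sketch above, the scaling falls straight out:

print(credits_to_fill(100, rate_gbps=2.0))    # 100 credits for 100km at 2Gbit/s
print(credits_to_fill(1000, rate_gbps=4.0))   # 2000 credits (a 4MB buffer) for 1000km at 4Gbit/s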

So we need bigger and bigger buffers at either end of the link to sustain high bandwidth rates and bigger buffers again to overcome distance limitations.

Write Acceleration
Cisco engineers have devised a technique they call ‘FCIP Write Acceleration’ which, combined with the company’s Extended Credits feature, effectively increases the buffer credit total from Cisco’s standard 256 to 3,500. The company states: “Cisco MDS 9000 Family full line-rate ports provide 256 buffer credits standard. With Extended Credits, up to 3500 credits can be assigned to a single Fibre Channel port within a group of 4 Fibre Channel ports on the Multiprotocol Services Module.” The FCIP Write Acceleration functionality uses internal timers to free up resources.

Using our rule of thumb above, that translates to a 3,500km link distance – around 2,200 miles. It’s not quite cross-continent in scope but it is a darn sight further than 100km.

How does it work? The sending and receiving FC devices don’t actually have larger buffers. Instead, with Write Acceleration enabled, the Cisco MDS 9000 IP Storage Services Module increases WAN throughput by reducing the latency between the write command and the Transfer Ready acknowledgement.

From the MDS 9000 configuration notes we learn that with Write Acceleration, “some data sent by the host is queued on the target before the target issues a Transfer Ready. This way the actual write operation may be done in less time than the write operation without the write acceleration feature being enabled.”

Effectively the round-trip latency of each write is halved: instead of the remote FC device sending the Transfer Ready back across the link to the sending HBA, the local MDS 9000 issues it by proxy.
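To see why proxying the Transfer Ready matters, the model below assumes – a simplification of ours, not Cisco’s description – that a write over FCIP normally costs two WAN round trips (command out and Transfer Ready back, then data out and status back), and that Write Acceleration removes the first of them:

MS_PER_KM_ONE_WAY = 0.005                  # 5ns per metre = 5 microseconds per km

def write_latency_ms(distance_km, write_accel=False):
    rtt = 2 * distance_km * MS_PER_KM_ONE_WAY
    # Normally: command/Transfer Ready, then data/status = 2 round trips.
    # With Write Acceleration the local switch issues Transfer Ready by proxy.
    return rtt * (1 if write_accel else 2)

print(write_latency_ms(3500))              # 70.0ms per write without acceleration
print(write_latency_ms(3500, True))        # 35.0ms with it – the round trip latency halved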

One of the issues with SANs is that standards in this area don’t yet exist, and that HBAs on the one hand, and switches and directors on the other, are produced by different suppliers. Life might be simpler, and possibly better, if one supplier produced end-to-end FC gear.