The DCX (datacentre switch), announced today, is step two on Brocade's road to re-invent itself as far more than a SAN switch and director vendor.

It's been a smooth ride so far. Brocade has said that SAN directors and its file area networks (FANs) need consolidating and extending so that datacentres can handle the new era of hundreds of virtual servers and Ethernet everywhere.

In October, early stories came out describing Neptune, the codename for what has become the DCX. Last year also saw the introduction of Brocade's DCF - datacentre fabric - architecture.

In this architecture the DCX is a hub at the centre of the datacentre, using datacentre-class Ethernet as the carrier for multiple protocols. Of course it supports existing Fibre Channel and mainframe FICON, but the future is Ethernet. The fabric extends across metropolitan and continental distances and supports virtualised file access as well as storage area network (SAN) block access.

Now the DCX has arrived, and it is bigger than we thought. The point is that virtual servers have dramatically increased the number of servers accessing services across a datacentre fabric. Where there might have been 75 physical servers before, there could be 750 now and 7,500 in a couple of years' time.

A tenfold and then hundredfold increase in the number of servers wanting resource access through the fabric means its bandwidth has to be scaled up and its reliability increased. That is why the DCX has switching capability in the terabits-per-second range.
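A rough back-of-the-envelope sketch makes the point. The per-server bandwidth figure below is our own illustrative assumption, not a Brocade number; the server counts are the ones above:

    # Back-of-the-envelope fabric demand: aggregate bandwidth scales with server count.
    # The 1 Gbit/s per-server figure is a hypothetical assumption, for illustration only.
    per_server_gbit = 1.0  # assumed average I/O per (virtual) server, in Gbit/s

    for servers in (75, 750, 7500):
        aggregate_tbit = servers * per_server_gbit / 1000
        print(f"{servers:>5} servers -> roughly {aggregate_tbit:g} Tbit/s of aggregate demand")

On those assumed figures, 7,500 servers would already be asking for several terabits per second from the fabric, which is the scale the DCX is pitched at.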

The 48000 went past five nines reliability, and Brocade is hoping the DCX might achieve six nines - 99.9999 percent uptime reliability - once real customer experience can be measured.
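For context, here is the simple arithmetic behind those figures (a generic uptime calculation, not Brocade data):

    # Convert uptime percentages into allowable downtime per year.
    MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

    for label, uptime_pct in (("five nines", 99.999), ("six nines", 99.9999)):
        downtime_min = (1 - uptime_pct / 100) * MINUTES_PER_YEAR
        print(f"{label}: {uptime_pct}% uptime allows about {downtime_min:.2f} minutes of downtime a year")

That works out at roughly five minutes of downtime a year for five nines, and about half a minute for six nines.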

Another aspect of this is electricity use. On a per-port basis the DCX appears to be far more power-efficient than any other director or datacentre-class switch.

Misapprehensions

We described Neptune as having 768 ports and 10Gbit/s Fibre Channel (FC). Wrong! Too few ports: it has 768 universal ports plus 128 inter-chassis links (ICLs), and 8Gbit/s FC rather than 10. That's a total of 896 8Gbit/s ports with a massive 12 terabits of bandwidth.

We also thought the logic of DCF meant Brocade sees the DCX acting as a hub for general network access to and from the datacentre as well. This logic still holds, although the three customer scenarios in which Brocade pictures the DCX make no mention of it.

We also suggested it could be used for server clustering. That is not mentioned either, and the use of 8Gbit/s Ethernet doesn't support the idea. The multi-protocol support encompasses 1, 2, 4, 8 and 10Gbit/s Fibre Channel and FICON; Gigabit Ethernet with FCIP and iSCSI; and 10GbE, meaning DCE, FCoE, HPC, and iSCSI and FCIP again.