The DCX (datacentre switch), announced today, is step two on Brocade's road to re-inventing itself as far more than a SAN switch and director vendor.
It's been a smooth ride so far. Brocade has said that SAN directors and its file area networks (FANs) need consolidating and extending so that datacentres can handle the new era of hundreds of virtual servers and Ethernet everywhere.
In this vision the DCX is a hub in the middle of a datacentre, using datacentre-class Ethernet as the carrier for multiple protocols. Of course it supports existing Fibre Channel and mainframe FICON, but the future is Ethernet. The fabric extends across metropolitan and continental distances and supports virtualised file access as well as storage area network (SAN) block access.
Now the DCX has arrived and it is bigger than we thought. The thing is that virtual servers have dramatically increased the number of servers accessing services across a datacentre fabric. Where there might have been 75 physical servers before, there could be 750 now and 7,500 in a couple of years' time.
The effect of a tenfold and then hundredfold increase in the number of servers wanting resource access through the fabric is that its bandwidth has to be scaled up and its reliability increased. That is why the DCX has switching capability in the terabits-per-second range.
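To see why that pushes the fabric into terabit territory, here is a back-of-envelope sketch. The 2Gbit/s average per-server figure is purely an illustrative assumption, not a Brocade number; only the 75/750/7,500 server counts come from the scenario above.

```python
# Illustrative aggregate fabric-bandwidth demand as server counts grow tenfold.
# PER_SERVER_GBITS is an assumed average for illustration only.
PER_SERVER_GBITS = 2

for servers in (75, 750, 7500):
    demand_tbits = servers * PER_SERVER_GBITS / 1000
    print(f"{servers:>5} servers -> ~{demand_tbits:.2f} Tbit/s aggregate demand")
```

At the hundredfold point even a modest per-server average lands the aggregate demand in the terabits per second, which is the class of switching the DCX is pitched at.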
The 48000 went past five nines reliability and Brocade is hoping that the DCX might achieve six nines - 99.9999 percent uptime reliability - when real customer experiences can be measured.
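For a sense of what the extra nine buys, the downtime implied by each availability figure is straightforward arithmetic:

```python
# Downtime per year implied by an availability percentage:
# five nines (99.999%) versus six nines (99.9999%).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_percent: float) -> float:
    """Minutes of downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

five_nines = downtime_minutes(99.999)    # about 5.3 minutes a year
six_nines = downtime_minutes(99.9999)    # about 32 seconds a year
print(f"five nines: {five_nines:.2f} min/yr; six nines: {six_nines:.2f} min/yr")
```

Going from five nines to six cuts the allowable downtime from roughly five minutes a year to roughly half a minute.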
Another aspect of this is electricity use. On a per port basis the DCX appears to be very much more power-efficient than any other director or datacentre-class switch.
We described Neptune as having 768 ports and 10Gbit/s Fibre Channel (FC). Wrong! Too few ports, as it has 768 universal ports and 128 inter-chassis links (ICLs), and 8Gbit/s FC instead of 10. That's a total of 896 8Gbit/s ports with a massive 12 terabits of bandwidth.
We also thought that the logic of Brocade's datacentre fabric (DCF) architecture means that Brocade sees the DCX acting as a hub for general network access to and from the datacentre. This logic still holds, although the three customer scenarios in which Brocade pictures the DCX make no mention of it.
We also suggested it could be used for server clustering. That is not mentioned, and nothing in the protocol line-up specifically supports the idea. The multi-protocol support encompasses 1, 2, 4, 8 and 10Gbit/s Fibre Channel and FICON; Gigabit Ethernet with FCIP and iSCSI; and 10GbE carrying DCE, FCoE, HPC traffic, and again iSCSI and FCIP.
Brocade gives three main use cases as examples of where the DCX can fit. One is a datacentre with a large number of virtual servers hooking up to a DCX-fronted remote datacentre over a dense wave division multiplexing (DWDM) link, which provides the VMs with access to storage resources. This is being tested by a North American bank, which is going to add FICON mainframe connections to the DCX backbone and start encrypting data at rest.
The ICL capability gives more scalability as DCX can be linked to DCX.
A second scenario involves extending a European retailer's SANs, with McDATA 6140 directors migrated to DCXs and two datacentres linked, again by DWDM, over a metropolitan area. There is bandwidth for expansion, and a similar possibility of data encryption carried out by the fabric rather than by storage array controllers or servers.
The third use case is one of network consolidation over DCE (datacentre Ethernet), with servers, both physical and virtual, given access to FC SAN resources, backup and disaster recovery, and a high-performance computing cluster. The only physical interconnect needed by the accessing servers is Ethernet. Fibre Channel recedes behind the DCX, as does the HPC node interconnect.
The point of storage management intelligence
The DCX continues the Brocade 48000 director's role as a locale for storage management intelligence. It supports data migration via Brocade's Data Migration Manager (DMM); virtualisation via EMC's Invista and also Fujitsu's ETERNUS; continuous replication with EMC's RecoverPoint; and encryption.
However, there is little active support from EMC in one key respect: both IBM, with its SVC product, and Hitachi Data Systems, with its USP, appear to have many more installations of storage management applications running on a fabric-attached box (IBM) or intelligent storage array controller (HDS) than on a fabric director.
Invista is just not receiving the technology development and marketing drive one might expect for a core EMC technology. This malaise affects both Brocade and its main competitor in this space, Cisco.
With the DCX, Brocade has provided a roadmap into the future for existing Silkworm 48000 and McDATA 6140 director customers. Where they currently see the possibility of SAN access congestion through the rise of virtual servers, they can now see a large increase in bandwidth headroom, as well as the means to extend FC SAN resources to servers using iSCSI and FCoE.
There is also the prospect of adding file access services through the DCX, and of consolidating all main datacentre traffic onto Ethernet, with other protocols such as FC and FICON reverting to relative niches, visible only to dependent fabric elements, such as mainframes and FC storage arrays, and to no one else.
With the DCX fulfilling more of a hub role, its use as a focal point for the execution of storage management services could be strengthened. Brocade, and Cisco, will need to preach against the active opposition of IBM and HDS and without much help from EMC to gain traction with this concept.
Lastly, the DCX is an example of what can be done with bladed architectures. When, for example, 16Gbit/s FC becomes available, new blades can likely be built and slotted in. The Fabric Application Engine blades that run applications such as encryption can readily be upgraded to support faster processors and more RAM.
The DCX is the first strike in a new datacentre marketing struggle. It would seem that only one other supplier has the resources and technology to answer it: Cisco. Cisco now knows what it needs to do to answer Brocade, and so we await its move in the datacentre fabric chess game.