It's not often that Ethernet is on the outside of an emerging network technology market. But in linking some data centre equipment to high-speed pipes, Ethernet sometimes has its nose pressed to the glass door, looking in.
For data centre network managers, server interconnect technology falls into two distinct camps. For most, Ethernet, the world standard for networked computers, is how Windows, Linux, Unix and mainframes are plugged in and accessed. But in the rarefied realm of high-performance data centre clustering, technologies such as Infiniband and some niche, proprietary interconnect technologies, such as Myricom's Myrinet, have a strong hold.
Over the past several years, Infiniband switches have emerged as an alternative for some users. Makers such as Voltaire and Infinicon came to market with high-speed clustering switches that connect servers through specialised host bus adapters (HBAs). These systems can provide as much as 30Gbit/s of throughput, with switch latency in the sub-200ns range. By comparison, latency in standard Ethernet gear is measured in milliseconds (thousandths of a second) rather than nanoseconds (billionths of a second). This server-to-switch technology was attractive enough that Cisco purchased Infiniband switch start-up TopSpin a little more than a year ago for $250 million.
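The scale of that gap is easy to gloss over. A quick back-of-the-envelope sketch (using the figures quoted above, not measured values) shows how many sub-200ns Infiniband hops fit into a single millisecond-scale Ethernet hop:

```python
# Illustrative only: the latency figures are the ones quoted in the
# article (sub-200ns Infiniband switch hops, millisecond-scale latency
# for the standard Ethernet gear of the time), not measurements.

NS_PER_MS = 1_000_000  # 1ms = one-thousandth of a second = 1,000,000ns

infiniband_hop_ns = 200            # sub-200ns range, per the article
ethernet_hop_ns = 1 * NS_PER_MS    # ~1ms, per the article

# How many Infiniband hops fit into the time of one Ethernet hop?
ratio = ethernet_hop_ns / infiniband_hop_ns
print(f"{ratio:,.0f} sub-200ns hops fit inside one millisecond-scale hop")
```

A difference of three to four orders of magnitude, in other words, rather than the incremental gap the raw numbers might suggest.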
A need for speed, and more
"Ethernet is a good, versatile technology that can handle almost anything," says Patrick Guay, vice president of marketing for Voltaire. "But Ethernet never had to address the levels of traffic efficiency and latency" required in clustered computer systems, storage networking and high-speed server interconnects, he adds.
"It's not that there is no place for 10G Ethernet in data centres," Guay says. "There is just a certain subset of customers who need more than what Ethernet and IP offer."
This was the case at Mississippi State University's Engineering Research Centre (ERC), which runs several large Linux clusters used in engineering simulations for defence, medical and automotive industry research, among other areas. The ERC's Maverick is a 384-processor Linux cluster connected by Voltaire Infiniband products. Voltaire's Intros 96-port Infiniband switch is used to connect the diskless processor nodes, which access storage - and even operating system boot images - over the Infiniband links.
This lets Roger Smith, network manager at the ERC, set up cluster configurations on the fly: however many processors are needed for a task can be called up quickly.
Such a set-up requires extremely low latency, as the processors are pulling Linux operating system images over the Infiniband links, instead of through a local hard drive. Also, processes shared in RAM among the Linux nodes all run through the Voltaire switch.
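To see why the link matters for a diskless node, a rough transfer-time model helps: total time is roughly link latency plus size divided by bandwidth. The 100MB image size below is a hypothetical figure for illustration; the 30Gbit/s and sub-200ns Infiniband numbers are the ones cited earlier:

```python
# A rough transfer-time model for a diskless node pulling its boot image:
# time = link latency + size / bandwidth. The 100MB image size is a
# hypothetical illustration; the Infiniband figures (30Gbit/s, ~200ns)
# are the ones cited in the article. Protocol overhead is ignored.

def transfer_time_s(size_bytes, bandwidth_bits_per_s, latency_s):
    """Naive model: one latency hit, then a steady stream at line rate."""
    return latency_s + size_bytes * 8 / bandwidth_bits_per_s

image_bytes = 100 * 10**6  # hypothetical 100MB boot image

ib_time = transfer_time_s(image_bytes, 30 * 10**9, 200e-9)  # Infiniband
ge_time = transfer_time_s(image_bytes, 1 * 10**9, 1e-3)     # Gigabit Ethernet

print(f"Infiniband: {ib_time * 1000:.1f}ms, Gigabit Ethernet: {ge_time * 1000:.1f}ms")
```

The bulk image pull is bandwidth-bound; it is the constant stream of small shared-memory operations among the Linux nodes where the nanosecond-scale latency figures actually dominate.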
"Ethernet was just not ready for prime time, to get to the low-latency needs in some data centres" over the last few years, says Steve Garrison, marketing director for Force10 Networks, which makes high-speed Ethernet data centre switches.
The latency of store-and-forward Ethernet technology - on the order of milliseconds - is imperceptible to most LAN users. But in data centres, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results.
"When you get into application-layer clustering, milliseconds of latency can have an impact on performance," Garrison says. This forced many data centre network designers to look beyond Ethernet for connectivity options.
"The good thing about Infiniband is that it has gotten Ethernet off its butt and forced the Ethernet market to rethink itself and make itself better," Garrison says.
Ethernet's virtual twins
It's become harder to tell standard Ethernet and high-speed interconnect technologies apart when comparing metrics such as data throughput, latency and fault tolerance, industry experts say. Several recent developments have led to this. One is the emergence of 10G Ethernet in server adapters; network interface card (NIC) makers such as Solarflare, Neterion and Chelsio have cards that can pump 8Gbit/s to 10Gbit/s of data into and out of a box with the latest PCI-X server bus technology.
Recent advances in Ethernet switching components and chipsets have narrowed the gap between Ethernet and Infiniband, with some LAN gear getting latency down to as little as 300ns. Also, work on Ethernet-based RDMA (Remote Direct Memory Access) - most notably, the iWARP effort - has produced Ethernet gear that can bypass network stacks and bus hardware and push data directly into server memory. Improvements in basic Gigabit Ethernet and 10G chipsets have also brought latency down to the microsecond level, nearly matching Infiniband and other proprietary HBA interconnect technologies.
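The zero-copy idea behind RDMA can be illustrated by analogy - this is not an RDMA API, just standard Python buffers. A conventional receive path hands the application a copy of the data, while an RDMA-capable NIC writes into registered application memory that the application reads directly:

```python
# An analogy for RDMA's zero-copy idea, not an RDMA API. A memoryview is
# a window onto a buffer with no copy, the way an RDMA-capable NIC places
# payloads directly into registered application memory; bytes() makes a
# detached copy, like a receive path that copies data through
# intermediate kernel buffers.

app_buffer = bytearray(b"stale stale stale")

copied = bytes(app_buffer)       # copy-through path: a detached snapshot
view = memoryview(app_buffer)    # zero-copy path: a window onto the buffer

app_buffer[0:5] = b"fresh"       # the "NIC" writes new data into the buffer

print(bytes(copied[0:5]))  # the copy still holds the old data
print(bytes(view[0:5]))    # the view sees the write immediately
```

Skipping the intermediate copy is exactly the saving iWARP chases: each avoided copy is CPU time and memory bandwidth handed back to the application.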
The type of high-performance computing application used at Mississippi State has also long been the purview of specialised interconnects. One brand synonymous with clustering - at least in the high-performance computing world - is Myricom. The company's proprietary fibre and copper interconnects have been in large supercomputers for years, connecting processors directly over the company's own protocols. This allows for around 2 microsec of latency in node-to-node communications, and up to 20Gbit/s of bandwidth - double that of 10G Ethernet. But even Myricom says Ethernet's move into the high-performance data centre is irresistible.
"A great majority of even HPC applications are not sensitive to the differences in latency between Myrinet connections on one side and Ethernet on the other side," says Charles Seitz, CEO of Myricom. The company has entered the 10G Ethernet NIC market, releasing fibre-based adapters that can run both the 10G Ethernet and Myrinet protocols. The shift was driven by customer demand for more interoperability with Myricom gear.
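Seitz's point about latency insensitivity can be sketched numerically. In the simple model below, message time is latency plus size over bandwidth; Myrinet's figures (2 microsec, 20Gbit/s) come from the article, while the 10 microsec latency assumed for 10G Ethernet is a hypothetical, era-plausible value, not a quoted one:

```python
# Why bulk transfers blunt a latency advantage. Message time is modelled
# as latency + size / bandwidth. Myrinet's figures (2 microsec, 20Gbit/s)
# are from the article; the 10 microsec latency assumed here for 10G
# Ethernet is hypothetical, for illustration only.

def message_time_us(size_bytes, bandwidth_gbits, latency_us):
    # 1Gbit/s carries 1,000 bits per microsecond
    return latency_us + size_bytes * 8 / (bandwidth_gbits * 1000)

for size in (64, 1_000_000):  # a tiny control message vs a 1MB bulk transfer
    myrinet = message_time_us(size, 20, 2)
    ethernet = message_time_us(size, 10, 10)
    print(f"{size:>9} bytes: Myrinet {myrinet:8.1f}us, 10G Ethernet {ethernet:8.1f}us")
```

For the 64-byte message the latency gap dominates; for the 1MB transfer the difference comes almost entirely from bandwidth - the regime where, as Seitz argues, most applications actually live.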
"What do people mean when they say interoperability? They mean its interoperability with Ethernet," Seitz says.
The widespread expertise in building and troubleshooting Ethernet networks, along with its universal interoperability, makes it the better data centre connectivity technology in the long run, others say.
"Ethernet does not require a certification," says Douglas Gourlay, director of product management for Cisco's Data Centre Business Unit, which includes products such as the Fibre Channel MDS storage switch, Infiniband (from the TopSpin acquisition), and Gigabit and 10G Ethernet products.
"With Fibre Channel, you had multiple vendors building multiple products, so the storage vendors took it upon themselves to create a certification standard for interoperability," Gourlay says. "Now users won't deploy products that are not certified by that specification."
Gourlay adds that the industry is not at the point where Infiniband technology, or other high-performance computing interconnect gear, has a common certification standard, or has proven interoperability among multivendor products.
"You can probably bet that you can't build a network today with one of the narrowly focused high-performance computer networking technologies from multiple vendors."