Voltaire has announced a component for its Grid Director switches that helps users of InfiniBand-connected servers link more easily to 10G Ethernet-based LANs. The company also claims that the mixed-media InfiniBand-10G Ethernet line cards are less expensive than tying together separate InfiniBand and 10G Ethernet switches.

The new line cards for Voltaire's 288-port and 96-port Grid Director switches have 22 10Gbit/s 4X InfiniBand ports and two 10G Ethernet ports. The company says that in a maximized InfiniBand-10G Ethernet configuration, the new line cards could connect up to 264 servers into an InfiniBand switch cluster (12 cards at 22 InfiniBand ports each in the 288-port chassis), with the cards' 24 10G Ethernet ports available for connecting to LAN backbone switches.

InfiniBand lets machines connect and share data with extremely low latency, in the sub-microsecond range, whereas Ethernet latencies are measured in the tens or hundreds of microseconds.

Voltaire's Grid Director switches provide high-speed interconnects for clusters of servers, often for high-performance computing in scientific applications. Oil and gas exploration and financial services companies also cluster large numbers of machines for complex computations or for sifting through large databases.

Voltaire says its InfiniBand-10G Ethernet module has a new chip design that builds 10G Ethernet and InfiniBand routing and switching into a single application-specific integrated circuit, letting data flow between the two network protocols with sub-microsecond latency. Hardware in the blade provides Layer 2 through Layer 4 switching, as well as TCP/IP offload technology for accelerating IP traffic flows, the company says.

Voltaire competes with InfiniBand switch maker InfiniCon, as well as with Cisco's InfiniBand switches, which are based on technology from TopSpin, a company Cisco acquired last year.

At Mississippi State University's Engineering Research Center (ERC), a Linux cluster of 384 processors is connected by a Voltaire Grid Director. This distributed supercomputer serves as a powerful calculator for research in computational fluid dynamics, which involves processing millions of computations. Bridging this InfiniBand cluster to a 10G Ethernet LAN could have some potential, says Roger Smith, senior computer specialist at ERC. In applications where the InfiniBand cluster must access data from an IP-based network, the InfiniBand-to-Ethernet blade could be helpful, he says.

"There are some applications where being able to move data in and out [of the InfiniBand cluster] is useful," Smith says. "I usually like to keep my InfiniBand and [IP/Ethernet] networks separate," he says, however. Infiniband and Ethernet have a specific purpose, Smith says. At the moment, he doesn't see a need to mix them: "Why waste my high-performance computing network on something that may not need such high performance?" he asks.