Infiniband was going to be the interconnect for all seasons, and for lots of reasons, mostly its high bandwidth. It was going to replace PC and server buses, function as a cluster interconnect, feature in SMP and parallel processing systems, and be used in storage systems too. Alas, a unifying data center interconnect it was not to be. Chip complexity, development delays, migration problems and expense deflated Infiniband's expectations, and the technology disappeared off our radar screens a couple of years ago.

However, it didn't completely go away and latterly, like a dormant volcano, it has shown signs of coming back to life.

Recent announcements include IBM agreeing to integrate and resell Topspin Communications' Infiniband gear across its server and storage product lines for the high-performance computing and database clustering markets.

Topspin asserts that InfiniBand is capable of remaking mainstream business computing by delivering on the promise of a single, unifying I/O fabric for the data centre. Well, yes, Topspin can always hope.

Clusters' Last Stand
Infiniband does look to have a role in cluster computing. MCNC is working with the IBM and Topspin kit. "MCNC is very pleased to be working with Topspin and IBM in building out our 64-node cluster. Our initial performance data indicates significant improvement using Topspin's Server Switch, much faster than Gig-E and most proprietary interconnects. Especially key is the low latency and high bandwidth that the solution delivers," said Steve Woods, an MCNC principal systems analyst.

IBM doesn't see Infiniband appearing in the general server interconnect market. GigE will do for that.

NEC has also selected Topspin as its InfiniBand supplier for high performance computing solutions and will resell Topspin InfiniBand switching and I/O gateway products. NEC and Topspin are working together on several large high performance cluster projects in Japan.

And clustering is it for Infinicon Systems; it announced in September that its InfinIO Switching Series has been selected by Fujitsu for use in building one of the world’s largest supercomputer clusters.

Infiniband Trade Association
The Infiniband Trade Association (IBTA) has been relatively quiet, making just five announcements in the whole of last year. It has 48 members, of which eleven are sponsoring members. Last year, Infiniband supplier Voltaire joined the other IBTA steering members, which include vendors such as Agilent Technologies, Dell, Hewlett-Packard, IBM, Infinicon Systems, Infiniswitch, Intel, Mellanox and Sun Microsystems. To have ten of your 48 members on a steering committee might strike one as a little top-heavy.

The first version of the specification for the technology was completed in October 2000. Since then, more than 70 companies have announced plans to bring InfiniBand products to market, with delivery expected to begin early next year.

That's been Infiniband's problem though. Delivery has always been about to ramp, only it hasn't.

The IBTA states, "The Internet is creating an increased demand for server computer I/O subsystem performance, scalability, reliability and flexibility. A shift to a switched fabric-based I/O architecture will enable industry participants to meet this increased demand. … (The) InfiniBand Trade Association represents the industry's choice for developing I/O technologies that will keep pace with the demands of the Internet age."

But we have Fibre Channel at 2Gbps, with 4Gbps due and 10Gbps following, and GigE with 10GigE coming. Historically, Ethernet has swept all before it. Can Infiniband's bandwidth and low latency enable it to prevail as the standard interconnect for cluster and grid computing in the data center?

“Infiniband has receded,” says Illuminata Inc. principal analyst Jonathan Eunice. “It’s still a player, it’s just not what it was originally thought to be. But it still has possibilities in HPC [high performance computing] and embedded systems. Infiniband makes a pretty good foundation for those types of applications.”

Embedded Computing
SBS Technologies sees embedded computing as the Infiniband sweet spot and is bringing out new products to exploit that market. It has added two new Infiniband products, with more on tap for next year.

SBS offers Fibre Channel and Gigabit Ethernet embedded computing products too, but the company sees demand for Infiniband, albeit primarily in the military and high-performance computing segments. "We're seeing a lot more requests for Infiniband over the last six months," SBS Infiniband technologist Steve Cook says. "Demand has been growing steadily."

Cook says roughly half of the company’s Infiniband sales are in embedded systems, with about 40 percent in HPC.

Good Future
Richard Ptak of Ptak Noel Associates thinks Infiniband is being given a second chance. "I'd say that Infiniband has its best days before it. It was hit hard by the slowdown as prospective champions and vendor-users focused on more pressing (read: revenue/survival) problems. But the promise and attraction of a very high-speed, scalable, robust interconnect has become even more attractive with the focus and interest on utility, grid, on demand, adaptive, whatever-you-call-it computing."

"Taking just IBM as an example, listening to their product strategies, research directions and implementation plans should remove any doubt of their plans to push intelligence and performance into the hardware. I wouldn't bet on any single technology as the sole survivor mechanism to meet a functional need but Inifiniband seems to me to have increasingly visible and active support as the connector technology of choice for realizing automated intelligent infrastructure.

Conclusion
SBS asserts that Infiniband isn't going away, no matter how many times pundits have kicked it to the curb after its disappointing start. But neither are GigE and Fibre Channel interconnects. It looks like it's going to be a mixed interconnect world and not, however much the IBTA might wish, one in which "a shift to a switched fabric-based I/O architecture ... represents the industry's choice for developing I/O technologies that will keep pace with the demands of the Internet age." Infiniband, though, could be the grid computing interconnect of choice.