Boosting throughput inside systems has become a priority in entry-level servers, and progress is evident in system designs from AMD, Apple, and Intel. But maximising throughput between servers -- and between servers and peripherals -- has been considered less critical.

Extremely high-speed external interconnects -- the kinds that make Fibre Channel and 10 Gigabit Ethernet seem like smoke signals -- do exist, but they suffer from several problems. They're expensive and poorly supported by applications and operating systems. But the biggest barrier is that IT mostly doesn't see the need. Isn't it wasteful to ship data around your server room faster than any of your equipment can consume it?

Nobody can convince me that bandwidth is ever wasted. Whether by accident or by design, telephone companies and cable operators built huge amounts of unused bandwidth into their networks. Across the country, you'll find plenty of areas where the penny-pinchers won out and scaled the telephone or cable network to deliver only the most basic services. That was a dopey move.

In my estimation, the only reason to have a land line is to run DSL, and the only reason to have cable is to get cable modem service. I can't see the point of creating an interconnected fabric of servers and storage that's limited to the roles assumed by the equipment and applications already in place. If you're going to lay new cable in your data centre, I say skip this order-of-magnitude business -- Ethernet's 10 to 100 to 1,000 to 10,000 megabits per second -- and go for the gusto.

For my money, that's InfiniBand. A four-conductor cable throws data at 2.5Gbit/sec in each direction -- send and receive. An eight-conductor cable gets you 10Gbit/sec in each direction. And the 24-conductor InfiniBand cable tops out at 30Gbit/sec each way. The Centronics printer cable has 25 conductors, and I'm pretty sure it falls a tad shy of the 30Gbit/sec mark. Fibre-optic cabling is also an option, but copper is cheaper.
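
Those three figures line up with InfiniBand's 1x, 4x, and 12x link widths, with each lane signalling at 2.5Gbit/sec per direction. The back-of-the-envelope sketch below, in C, is my own illustration rather than anything from a spec sheet; it simply multiplies the per-lane rate by the link width, and the 0.8 factor is an assumption reflecting the roughly 20 per cent that InfiniBand's 8b/10b line coding takes off the raw signalling rate.

    #include <stdio.h>

    int main(void)
    {
        const double gbits_per_lane = 2.5;    /* raw signalling rate per lane, one direction */
        const int widths[] = { 1, 4, 12 };    /* 1x, 4x, and 12x link widths */

        for (int i = 0; i < 3; i++) {
            double raw  = widths[i] * gbits_per_lane;
            double data = raw * 0.8;          /* assumed 8b/10b coding overhead */
            printf("%2dx link: %4.1f Gbit/sec raw, roughly %4.1f Gbit/sec of data, each way\n",
                   widths[i], raw, data);
        }
        return 0;
    }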

We're waiting for the day when external interconnects come close to the bandwidth of internal components. Alas, InfiniBand doesn't bring that fantasy to life; one Opteron HyperTransport system bus link has more than twice the bandwidth of a 24-conductor InfiniBand interconnect. But turn it around -- a 24-conductor InfiniBand external interconnect has nearly half the potential bandwidth of an internal HyperTransport channel -- and it sounds rather promising. You could conceivably get two systems talking at nearly half the speed at which two Opteron CPUs communicate within the same box.

"Conceivably" being the operative word; it might be a little early to be laying down InfiniBand cable or fibre. But InfiniBand's design meshes so well with grid computing that it's hard for me to imagine grids without InfiniBand. Each adapter implements a transactional message-passing scheme with 16 (15 usable) virtual I/O lanes and the capacity for more than 200 operating send/receive channels.

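For a flavour of what that scheme looks like to software, here is a minimal sketch using libibverbs, one common open-source way of programming InfiniBand's verbs interface; it is an illustration built on my own assumptions, not something drawn from the column above. It opens the first adapter it finds and creates a single reliable-connection queue pair, the send/receive channel the hardware works in terms of. The queue depths are arbitrary, both queues share one completion queue for simplicity, and connecting the queue pair to a remote peer is left out entirely.

    /* Minimal sketch: open an InfiniBand adapter and create one
     * reliable-connection queue pair (a send queue plus a receive queue)
     * through the libibverbs verbs API. Most error handling is trimmed,
     * and connecting the queue pair to a remote peer is a separate step. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "no InfiniBand adapters found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);        /* first adapter */
        if (!ctx) {
            fprintf(stderr, "could not open %s\n", ibv_get_device_name(devs[0]));
            return 1;
        }

        struct ibv_pd *pd = ibv_alloc_pd(ctx);                     /* protection domain */
        struct ibv_cq *cq = ibv_create_cq(ctx, 64, NULL, NULL, 0); /* completion queue */
        if (!pd || !cq) {
            fprintf(stderr, "adapter setup failed\n");
            return 1;
        }

        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .cap = {
                .max_send_wr  = 64,   /* outstanding send work requests */
                .max_recv_wr  = 64,   /* outstanding receive work requests */
                .max_send_sge = 1,
                .max_recv_sge = 1,
            },
            .qp_type = IBV_QPT_RC,    /* reliable connection */
        };

        struct ibv_qp *qp = ibv_create_qp(pd, &attr);
        if (!qp) {
            fprintf(stderr, "failed to create a queue pair\n");
            return 1;
        }
        printf("queue pair %u created on %s\n",
               qp->qp_num, ibv_get_device_name(devs[0]));

        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

Build it against the libibverbs library (link with -libverbs) on a host that has an InfiniBand adapter and driver installed.
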
InfiniBand is part of that convergence of high-performance computing (HPC) and business computing that I've long predicted. Opteron, Xserve, and now IBM's Power5 deliver on HPC concepts inside the box. Now it's time to focus on interconnects in general -- and on InfiniBand in particular.