As high-speed interconnects and virtualisation software make their way into enterprise data centres, IT managers need to take a close look at how their servers are keeping pace.

Experts say that in many cases corporate users will find that servers are not taking full advantage of new technologies because of limitations in the system's PCI and PCI-X buses, the I/O slots where servers hook into peripherals such as network and storage devices. The parallel architecture of today's PCI technology limits the bandwidth and throughput available for moving data in and out of servers.

Intel, Dell, HP, IBM, Microsoft and others have been working on a serial I/O technology called PCI-Express. It's designed to keep up with advances in interconnect bandwidth and speed by enabling data to shuttle more quickly in and out of servers. As a result, users will also be able to make better use of new architectures such as multi-core chips, in which more than one core resides on a single die, and virtualisation, in which multiple virtual servers reside and run on a single physical machine. The new server designs will have more processing power, and they will need an I/O gateway that can handle larger volumes of data moving more quickly.

"We're at the point where we can't take advantage of things like faster processors or faster graphics cards, or keep up with some of the [storage-area networks] and [network-attached storage] capabilities," says Vernon Turner, group vice president and general manager of enterprise computing at IDC. "For example, right now you could have iSCSI drives using 10Gig Ethernet and, if you're using PCI-X, you're the bottleneck. With PCI-Express, you finally have something that gets you to the speed of the I/O devices outside the box being fed by something that's fast enough inside the box."

Systems vendors including Dell, HP and IBM began shipping servers with PCI-Express slots this fall. Analysts say there are no real competitors to PCI-Express because efforts to enhance the parallel PCI standard have been abandoned.

"It's hard to imagine a vendor not on the PCI-Express bandwagon," says Gordon Haff, an analyst at Illuminata. "The only question is how quickly you need to make the transition."

Dell, which co-founded the PCI-Express IT Network with Intel last year, is a leading supporter. Dell executives say PCI-Express will help make their vision of a "scale-out" data centre a reality. Already, users are clustering standards-based x86 servers to run applications that previously relied on big symmetric multiprocessing systems.

"The next wave is how are you looking at SAP, Oracle, SQL Server - those kinds of applications. Today those are done typically on four-way, eight-way and 16-way systems," says Jose Tormo, director of business planning at Dell. "What we're finding with architectures like PCI-Express is we're enabling clusters of two-ways and four-ways to go displace bigger systems."

The new I/O architecture of PCI-Express uses serial links similar to those in Gigabit Ethernet and Fibre Channel to move data in and out of servers.

The original PCI standard uses a parallel shared-bus architecture in which chunks of data move side by side. The trouble is that the signals have to be co-ordinated, and as you increase bandwidth by adding signal paths and increase speed by raising the signal frequency, it becomes increasingly difficult, and expensive, to keep those signals synchronised. With traditional PCI and PCI-X, every I/O device must also share the single bus, so each adapter added to a server dilutes the bandwidth available to the others, as the rough sketch below illustrates.
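A back-of-the-envelope model makes the contention problem concrete. The sketch below is hypothetical, using the 32Gbit/sec peak cited further on for the fastest PCI-X configuration; real-world numbers also depend on arbitration overhead, burst sizes and chipset design.

    # Hypothetical model of a shared parallel bus: every adapter divides
    # a fixed peak rate, so adding devices shrinks each one's share.
    PCI_X_PEAK_GBPS = 32.0  # fastest PCI-X configuration cited below

    def per_device_share(num_devices: int) -> float:
        """Best-case bandwidth per adapter when the bus is shared equally."""
        return PCI_X_PEAK_GBPS / num_devices

    for n in (1, 2, 4, 8):
        print(f"{n} device(s): ~{per_device_share(n):.1f} Gbit/sec each")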

But PCI-Express gives each device its own bus, called a link. Each link is made up of one or more lanes, and each lane contains a pair of signal paths, one for transmitting data and one for receiving it, each operating at 2.5GHz. PCI-Express increases throughput by adding lanes, and today it is available in one-, four-, eight- and 16-lane configurations, says Jim Pappas, director of enterprise initiatives at Intel.

That gives servers throughput capabilities of up to 80Gbit/sec with a 16-lane PCI-Express slot. By contrast, PCI-X in its fastest configuration moves 32Gbit/sec.
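The arithmetic behind those figures is straightforward: each lane signals at 2.5Gbit/sec per direction, and every lane carries traffic both ways at once. A minimal sketch using the article's raw signalling rates (the 8b/10b encoding PCI-Express employs means usable data throughput is roughly 20 per cent lower):

    # Raw PCI-Express slot capacity from the per-lane signalling rate.
    LANE_RATE_GBPS = 2.5   # per lane, per direction
    DIRECTIONS = 2         # each lane transmits and receives simultaneously

    def slot_capacity_gbps(lanes: int) -> float:
        """Aggregate raw bandwidth of a PCI-Express slot with this many lanes."""
        return lanes * LANE_RATE_GBPS * DIRECTIONS

    for lanes in (1, 4, 8, 16):
        print(f"x{lanes}: {slot_capacity_gbps(lanes):.0f} Gbit/sec raw")
    # x16 gives 80 Gbit/sec, matching the figure cited above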

Being able to move data quickly and economically is important as faster interconnects such as InfiniBand and 10Gig Ethernet become more widespread, and as servers become more powerful with multi-core chips and virtualisation capabilities.

Formerly known as 3GIO (third-generation I/O), the technology was renamed PCI-Express when it was introduced in 2002. Intel released its first server chipset supporting PCI-Express in July, letting systems vendors ship boxes with PCI-Express slots over the past few months.

Dell, for example, includes a PCI-Express slot on several servers, including its new PowerEdge 1855 blade server. IBM includes PCI-Express slots on all new dual-processor Xeon boxes. And HP has added PCI-Express support to its Xeon servers.

Boxes with PCI-Express slots also support PCI and PCI-X, so users don't have to throw out legacy cards. PCI-Express is also backward-compatible with most software, meaning that drivers do not have to be rewritten.

PCI-Express has made its initial inroads in industries where users need more I/O power for graphics design, video and gaming applications. Analysts say the technology will start to take root in the enterprise data centre over the next year or so, improving server I/O performance for storage and clustering, where low latency and high-speed connections are important.