The PCI Express bus has emerged as an efficient and cost-effective platform for network applications.

Created to address the performance, scalability and configuration limitations of older parallel computer bus architectures, this general-purpose serial I/O interconnect has been widely adopted in enterprise, desktop, mobile, communications and embedded applications.

Despite its widespread deployment, however, there is a common perception that the bus cannot meet the unique I/O demands of high-performance storage and networking. New work on extensions to the PCIe standard is revising that notion.

A PCI-SIG working group is developing a specification that adds I/O virtualisation capability to PCIe. This functionality lets network administrators virtualise or share peripherals and endpoints across different CPUs or CPU complexes.

Base PCIe topologies have dedicated endpoints mapped to specific root complexes. In this environment, each physical endpoint in the network is associated with one system image and cannot be shared.
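
To make that baseline concrete, the sketch below shows the one-to-one relationship from software's point of view: the single system image that owns the hierarchy talks to an endpoint's configuration space directly. It is a minimal, Linux-specific illustration, and the device address 0000:01:00.0 is a placeholder, not a real device.

```c
/* Minimal sketch: in a base PCIe hierarchy, each endpoint function is a
 * single bus/device/function address owned by exactly one root complex,
 * so one system image can read its configuration space directly. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 0000:01:00.0 is a placeholder address for illustration only. */
    FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/config", "rb");
    if (!f) { perror("open config space"); return 1; }

    uint8_t hdr[4];
    if (fread(hdr, 1, sizeof hdr, f) == sizeof hdr) {
        /* Config space begins with the little-endian vendor and device IDs. */
        uint16_t vendor = hdr[0] | (uint16_t)hdr[1] << 8;
        uint16_t device = hdr[2] | (uint16_t)hdr[3] << 8;
        printf("vendor %04x device %04x\n", vendor, device);
    }
    fclose(f);
    return 0;
}
```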

The new specification defines two levels of I/O virtualisation for root complex topologies. In the first level, called single-root I/O virtualisation (IOV), the virtualisation capability is provided by the physical endpoint itself.

The physical endpoint supports one or more virtual endpoints. Each virtual endpoint can directly sink I/O and memory operations from its assigned system image, and can source direct memory access, completion and interrupt operations to that system image, without run-time intervention from a virtualisation intermediary such as a hypervisor.
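
The virtual endpoints become visible through a new configuration-space structure on the physical endpoint. The C struct below is a reading aid that follows the field layout of the SR-IOV extended capability (capability ID 0x0010) as eventually published; the struct and field names are mine, and the sketch is illustrative rather than normative.

```c
#include <stdint.h>

/* Sketch of the single-root IOV extended capability layout. */
struct sriov_capability {
    uint32_t cap_header;           /* extended capability ID, version, next ptr */
    uint32_t sriov_caps;           /* capability flags (e.g. VF migration) */
    uint16_t sriov_control;        /* VF Enable bit lives here */
    uint16_t sriov_status;
    uint16_t initial_vfs;          /* virtual functions exposed at power-on */
    uint16_t total_vfs;            /* hardware maximum */
    uint16_t num_vfs;              /* virtual functions enabled by software */
    uint8_t  func_dep_link;
    uint8_t  reserved0;
    uint16_t first_vf_offset;      /* routing-ID offset of the first VF */
    uint16_t vf_stride;            /* routing-ID spacing between VFs */
    uint16_t reserved1;
    uint16_t vf_device_id;         /* device ID presented by every VF */
    uint32_t supported_page_sizes;
    uint32_t system_page_size;
    uint32_t vf_bar[6];            /* base address registers for the VFs */
    uint32_t vf_migration_state_offset;
};
```

In this scheme, software enables virtual endpoints by programming num_vfs and setting the VF Enable bit in sriov_control; the nth virtual function then appears at the physical function's routing ID plus first_vf_offset + (n - 1) * vf_stride.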

In the second level, called multiroot IOV, the virtualisation capability is extended by the use of a multiroot switch and a multiroot endpoint. These switches and endpoints have mechanisms to let multiple root complexes and system images share common endpoints.
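
The multiroot draft describes this sharing in terms of virtual hierarchies: the switch presents each attached root complex with its own logical PCIe tree while the underlying endpoint hardware is shared. The toy model below sketches only that idea; every structure and field name here is mine, not the specification's.

```c
#include <stdio.h>

#define MAX_FN 8

/* Toy model: an MR-aware switch carries one virtual hierarchy (VH) per
 * attached root complex, and a multiroot endpoint exposes functions into
 * different VHs so several system images share one physical device. */
struct mr_endpoint {
    const char *name;
    int num_functions;
    int vh_of_function[MAX_FN]; /* which virtual hierarchy sees each function */
};

struct mr_switch {
    int num_vh;                 /* one virtual hierarchy per root complex */
    struct mr_endpoint *shared; /* endpoint reachable from every VH */
};

int main(void)
{
    struct mr_endpoint nic = {
        .name = "shared NIC",
        .num_functions = 4,
        .vh_of_function = { 0, 1, 2, 3 }, /* one function per root complex */
    };
    struct mr_switch sw = { .num_vh = 4, .shared = &nic };

    for (int f = 0; f < sw.shared->num_functions; f++)
        printf("%s function %d -> virtual hierarchy %d\n",
               sw.shared->name, f, sw.shared->vh_of_function[f]);
    return 0;
}
```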

I/O virtualisation has a number of benefits. First, it can improve system utilisation. Each virtual system requires its own dedicated I/O resources, yet in many physical configurations the number of I/O slots available on a client or server is insufficient to give every virtual system its own dedicated I/O endpoint.

Even when an adequate number of physical I/O endpoints is available, this topology lets virtual systems share underused endpoints.

Moreover, the use of a centrally managed I/O resource improves the scalability of I/O while simplifying the management of the network. Both blade and rack-mount servers can access the resources they need, when they need them. And, because I/O can be managed from a centralised switch, administrators can allocate resources more easily and efficiently.

The centralised approach to I/O virtualisation also offers network administrators a new opportunity to maximise network I/O load balancing and bandwidth management. If a virtual system needs additional bandwidth, for example, network managers can allocate more physical endpoint capacity. And if a virtual system consumes more I/O resources than necessary, its consumption can be reduced to a pre-set level.
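
Capping a virtual system at a pre-set level is, in essence, rate limiting. The following is a hypothetical illustration rather than anything defined by the PCIe specification: a management layer could meter each virtual system's traffic with a token bucket like this one, where all names and numbers are invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Token-bucket sketch of a per-virtual-system I/O cap. */
struct bw_cap {
    uint64_t tokens;     /* bytes currently available to spend */
    uint64_t max_tokens; /* burst allowance */
    uint64_t rate;       /* bytes replenished per tick: the pre-set level */
};

/* Refill the allowance once per management tick. */
static void bw_tick(struct bw_cap *c)
{
    c->tokens += c->rate;
    if (c->tokens > c->max_tokens)
        c->tokens = c->max_tokens;
}

/* Admit an I/O if it fits under the cap; otherwise the caller defers it. */
static bool bw_admit(struct bw_cap *c, uint64_t bytes)
{
    if (bytes > c->tokens)
        return false;
    c->tokens -= bytes;
    return true;
}

int main(void)
{
    struct bw_cap cap = { .tokens = 0, .max_tokens = 1 << 20, .rate = 128 << 10 };
    bw_tick(&cap); /* one tick grants 128 KB of allowance */
    printf("64 KB request admitted: %s\n", bw_admit(&cap, 64 << 10) ? "yes" : "no");
    printf("1 MB request admitted: %s\n", bw_admit(&cap, 1 << 20) ? "yes" : "no");
    return 0;
}
```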

Finally, I/O virtualisation promises to pay dividends in higher network reliability. Because it eliminates duplicated peripherals and ports across the network infrastructure and reduces the number of components in the network, failure rates should drop.

And because network administrators can better match I/O resources to performance needs and thereby use fewer cards, cables and ports, I/O virtualisation also promises to dramatically reduce network costs.

Many in the server and storage industry have viewed PCIe as a bridging or transitional technology. The addition of I/O virtualisation capabilities will alter those views. The new ability to share peripherals and endpoints across multi-CPU configurations, combined with the intrinsic cost benefits of the bus's extensive installed base and supporting ecosystem, makes PCIe an attractive option.

Zack Mihalis is a director at IDT (Integrated Device Technology).