For some time now, 10 Gigabit Ethernet has looked like a solution in search of a problem - and now it may just have found one: server virtualisation.

Yes, virtual servers could be the thing that finally pushes your organisation into buying 10G cards and networks, according to those involved on both ends of the cable. The reason, they say, is quite simply that current NICs cannot keep up with the demands of multiple virtual servers.

In the early days of virtual machines (VMs), people used them as a way of dual-booting - for example, to run two different operating systems on the same PC. The problem came as virtualisation went mainstream, and the number of VMs per system jumped up, says George Zimmerman, the CTO of 10G chip developer SolarFlare.

"You could put in multiple NICs and attach each VM to a particular NIC. That's keeps the resource separate but it's ugly - and you broke the VM model, as your VMs aren't hardware-independent now," he adds.

"Or you could use the guest operating system to cut through to the NIC, but then the guest OS has to be virtualisation-aware which means you have to modify your software, which you don't want to do."

He adds that the growing popularity of networked storage means that each VM needs yet more bandwidth. "With iSCSI and FCoE, you need more than one Gigabit," he notes.

The problem is getting worse with the eight and 16-way servers that are now reaching the mainstream, says Leonid Grossman, software engineering VP and co-founder at 10G NIC developer Neterion.

"10 Gigabit is not necessary for four-way systems, you can use a four-port Gigabit card for those," he says. "What's driving 10G is the arrival of four-core CPUs which enable 16-way systems. There could be eight VMs per core, so that's 128 servers on one machine."

Part of the attraction of 10G is simply consolidation - it means a single fibre or cable in place of several, says VMware product manager Jacob Jansen.

"The density of VMs is expected to rise not only because of core counts, but also because of the nature of the VM - for example, desktop virtualisation, or VDI," he says. "Desktops don't need many CPU cycles so you can get even more VMs per core.

"But then your boxes need a huge number of Gigabit network ports, so it becomes a cabling nightmare and eventually consolidation becomes worthwhile from that point of view."

As well as the need for bandwidth to support applications and storage, there are also virtualisation-specific needs, such as using VMotion to move an entire VM from one physical server to another.
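A crude estimate makes the VMotion point concrete: copying a VM's memory image across the wire takes over a minute at Gigabit speed but only seconds at 10G. The 8GB memory size below is an assumption, and the sum ignores pre-copy iterations, dirty-page retransmission and protocol overhead.

    # Illustrative transfer-time estimate for moving a VM's memory image (assumed size)
    vm_memory_gb = 8                                   # assumption - not a figure from the article

    for name, gbps in {"1 Gigabit": 1.0, "10 Gigabit": 10.0}.items():
        seconds = vm_memory_gb * 8 / gbps              # GB -> Gbit, divided by link speed
        print(f"{name}: roughly {seconds:.0f} seconds to copy {vm_memory_gb}GB of memory")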

Some server developers have responded to the rise of virtualisation and the availability of 10G Ethernet by designing servers with fewer PCI slots and gearing their machines towards memory and CPU, instead of I/O, says Richard Brunner, VMware's senior I/O architect.

The Ethernet overhead

However, simply dropping in a 10G NIC is not going to solve the bandwidth problem on its own, he adds, because a virtualised server still has to manage the I/O flow to the various VMs. That's not too bad with just a few VMs at Gigabit speeds, but gets painful with dozens or more running at 10G.

"The problem is the way we virtualise network devices," he admits. "It means the hardware has to play traffic cop, classifying packets for the various VMs." In effect, the hypervisor could find itself simulating a 128-port 10G Ethernet switch.

The solution, he says, is to add I/O virtualisation support to the NIC itself. That means the NIC takes care of work such as MAC and VLAN classification, and routing packets to separate queues.
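The sketch below illustrates the kind of classification step being offloaded: a lookup from destination MAC address and VLAN tag to a per-VM receive queue. The table entries, queue names and frame layout are simplified placeholders, not any real NIC's or hypervisor's interface.

    # Minimal sketch of the classification being offloaded to the NIC:
    # map (destination MAC, VLAN) to a per-VM receive queue.
    from collections import defaultdict

    # In a real system this table would be programmed by the hypervisor
    # when each VM's virtual NIC is created; these entries are made up.
    filter_table = {
        ("00:50:56:aa:00:01", 10): "vm1_rx_queue",
        ("00:50:56:aa:00:02", 10): "vm2_rx_queue",
        ("00:50:56:aa:00:03", 20): "vm3_rx_queue",
    }
    queues = defaultdict(list)

    def classify(frame):
        """Route an incoming frame to its VM's receive queue, or to a default queue."""
        key = (frame["dst_mac"], frame["vlan"])
        queues[filter_table.get(key, "default_queue")].append(frame)

    classify({"dst_mac": "00:50:56:aa:00:02", "vlan": 10, "payload": b"..."})
    print({name: len(frames) for name, frames in queues.items()})   # {'vm2_rx_queue': 1}

Done in hardware, per packet, this lookup is what spares the hypervisor from acting as the "traffic cop" Brunner describes.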

Standards for I/O virtualisation (IOV) are now emerging from the PCI SIG, the body responsible for defining the PCI-Express format that most NICs now follow. It has come up with specs both for stand-alone servers (single-root I/O) and for devices such as blade servers, where several servers share a single NIC (multi-root I/O).

NICs that support I/O virtualisation already exist - for example, Neterion and IBM demonstrated one working with VMware ESX on an IBM System x server at the recent VMworld exhibition in the US, while SolarFlare has tested a prototype vNIC with XenEnterprise on a Dell server.

Several other companies are also developing IOV chips and NICs, including Denali, Intel, NetXen and NextIO.

Eventually, the PCI IOV specs should mean that any compliant NIC will be able to accelerate any compliant virtualisation software. Support for Microsoft's Hyper-V (formerly Viridian), VMware and Xen should be expected as the bare minimum.

Zimmerman adds that the beauty of running a machine with virtualised I/O is that you can run your old OS and application on it, and the hypervisor goes back to being what it ought to be - "shimware", as he puts it.

He concludes: "By 2010, even mid and low-end servers will have 10G I/O with virtualisation and acceleration built-in - it will be mainstream."