If you're thinking about using 10Gig Ethernet for more than just inter-switch links, you need to be thinking about TCP offload. So says Vik Karvat, senior marketing director at chip developer NetXen, which recently introduced what it claims is the first programmable 10Gig network interface card (NIC).

But the boy has cried wolf before: vendors once claimed that iSCSI-based storage area networks would need intelligent network adapters called TCP offload engines, or TOEs. It wasn't true then, so why should it be true now?

To start with, it's a case of speeds and feeds, explains Karvat: "iSCSI is not a huge burden, it's the underlying TCP stack. Gigabit iSCSI and TCP is OK on the host, taking maybe 30 or 40 percent of a CPU, but as we move to 10Gig it's a different story. You could throw multicore processors at it, but your buses are getting saturated too.

"The processor vendors have acknowledged that the days of steady performance increases have ended, hence their move to multi-core chips. Also, we're talking about other protocols now, not just TCP, and those will easily saturate a host."

Power needs power
He adds that, as data centres grow ever more power-hungry, doing that protocol processing on the host CPU is also inefficient in power terms.

"10Gig processing will completely saturate a dual-Xeon," he says. "Plus to do it on the CPU costs you 250W or 300W of power, versus 10W for an I/O device. Power efficiency is the real value of the offload."

Make no mistake, 10Gig port shipments are ramping fast, and 10Gig will come to your servers eventually. "10Gig is happening because people are driving the price down very aggressively," says Karvat. He quotes analyst figures of 250,000 10Gig ports shipped last year, almost all of them for inter-switch links, rising to 850,000 this year and then 1.9 million in 2007.

He adds that the increasing take-up of blade servers is part of that. "The cost dynamics for 10Gig are much more attractive in blades, and also all will be dual-ported," he says. "It allows you to have an extra high-bandwidth connection within the chassis to support clustered apps and so on - Oracle says that its sweet spot is eight-node clusters."

For example, he says, an IBM blade chassis would hold 14 blades, each with two 10Gig ports; each port goes to a switch in the chassis, which in turn provides the outbound ports.
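Those figures add up quickly. The sketch below just multiplies them out to show how much traffic could, in principle, be offered to the chassis switches.

```python
# Aggregate bandwidth into the chassis switches in the IBM blade
# example, using the figures given in the article.
blades = 14
ports_per_blade = 2
port_rate_gbps = 10

aggregate = blades * ports_per_blade * port_rate_gbps
print(f"{blades} blades x {ports_per_blade} ports x {port_rate_gbps} Gbit/s "
      f"= {aggregate} Gbit/s offered to the chassis switches")
```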

Even then, the arithmetic is against us - the PCI-X internal buses used in today's servers simply aren't fast enough to handle two lots of line-rate 10Gig traffic.

That's not the point, though, argues Karvat: "PCI-Express won't give full throughput on both ports, but every server has two ports for failover. You need six to eight Gbit/s per pipe."
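For readers who want the bus arithmetic, the rough sketch below compares two line-rate ports with standard peak figures for a 64-bit/133MHz PCI-X bus and an eight-lane first-generation PCI-Express slot; those bus numbers are textbook maxima rather than anything NetXen quotes.

```python
# Rough bus arithmetic behind the two-port discussion. The bus
# figures are standard peak numbers, not vendor measurements.
pcix_gbps = 64 * 133e6 / 1e9      # 64-bit PCI-X at 133 MHz: ~8.5 Gbit/s peak
pcie_x8_gbps = 8 * 2.0            # PCIe 1.x x8: ~2 Gbit/s per lane after 8b/10b
two_ports_gbps = 2 * 10           # two line-rate 10Gig ports

print(f"PCI-X 133 bus:    {pcix_gbps:.1f} Gbit/s peak")
print(f"PCIe 1.x x8 slot: {pcie_x8_gbps:.1f} Gbit/s peak, each direction")
print(f"Two 10Gig ports:  {two_ports_gbps} Gbit/s at line rate")
# Neither bus reaches 20 Gbit/s, which is why six to eight Gbit/s
# per pipe is the practical target when the second port exists
# mainly for failover.
```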

10Gig as a blade bus
"One of the more costly aspects of 10Gig is that the physical layer is relatively expensive, but that is avoided in blades as there's no cables inside the chassis - our chip drives tracks on the backplane," he adds.

NetXen's answer to all of this is its Protocol Processing Engine, or PPE, chip. Karvat says it not only offloads the work of handling TCP, iSCSI and the various security protocols, but can also be programmed by the NIC manufacturer or OEM.

"It is the first programmable NIC," he continues. "It's all in firmware so they can layer additional functionality on the products, such as specific congestion management algorithms for TCP or RDMA optimisation. It can also be remotely upgraded.

"It's no longer about Layer 2 processing. As we go up the layers, the level of processing and interaction with the host increases significantly. We run the full TCP stack, RDMA and security protocols all on our chip. All applications run out of the box, we just look like a Sockets interface, and the chip handles everything."

The other important workload on the NIC is security, he adds: "A lot of security is handled in the DMZ today. As we talk about utility computing, the perimeter becomes fuzzier and the end points have to secure themselves to some extent.

"Security is all part of supporting the converged model. It's important in shared I/O where you have multiple virtual machines on single hardware, it appears as multiple NICs but the data is co-mingled on the wire, so it behoves us to layer transaction security on that. The virtual machine understands the context and security of the information - that entails a lot of processing power."