Gigabit Ethernet is a wonderful thing. It brings vast bandwidth at a low cost, and ensures that the network connections into the corporate servers are no longer the bottleneck. This raises a question, though: if you shovel data into a server at up to a billion bits per second and the network isn't the bottleneck, what is?
The answer is obvious, if you think about it. The vast majority of traffic on the network is IP these days. So, the fly in the ointment is the software handling the decoding of the IP packets as they come into the server. It's a fact of life that software is much slower than hardware, and even at 100Mbit/s it's a struggle to process IP packets at wire speed in software. So we're faced with a dilemma: what's the point of having Gigabit Ethernet throwing packets into the server at a billion bits per second if the server slows everything down?
What can we speed up?
The server, of course, has to process all the IP packets that come into it in software. It has to unwrap the contents and deal with them appropriately. There's nothing we can do about that, except perhaps to equip the server with an extra processor or two, to bump up its power and reduce the proportion of the CPU being used for IP driver processing. But what if there were some functions of IP packet processing that we could farm off to some other piece of kit?
TCP/IP has a bunch of peripheral functions that driver software has to deal with before the packets can be used by any higher-level software. The nature of TCP/IP is that packets can conceivably be delivered out of order, so they have to be re-ordered before the contents make any sense. Then there's error checking: the packets carry checksums that have to be verified before a packet can be accepted. Then, of course, there's all the other stuff related to sessions; because TCP is a guaranteed-delivery protocol, receipt of incoming data has to be acknowledged by the driver so the sender knows it arrived OK.
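The re-ordering and error-checking chores described above can be sketched in a few lines of Python. The `internet_checksum` function follows the standard ones'-complement algorithm TCP and IP both use (RFC 1071); the three-segment example and its sequence numbers are purely illustrative:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit ones'-complement Internet checksum (RFC 1071),
    the algorithm TCP and IP use to detect corrupted packets."""
    if len(data) % 2:
        data += b"\x00"              # pad odd-length data with a zero byte
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:               # fold any carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def reassemble(segments):
    """Re-order segments by sequence number and join the payloads,
    as a receiver has to do before the contents make any sense."""
    return b"".join(payload for _, payload in sorted(segments))

# Segments arriving out of order: (sequence number, payload)
segments = [(10, b"world"), (0, b"hello "), (6, b"TOE ")]
print(reassemble(segments))          # b'hello TOE world'
```

Trivial as each step looks, the server's CPU has to run them for every packet; the whole point of offload is doing this per-packet grind in silicon instead.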
The trend is, therefore, to plonk some hardware on the Ethernet card that handles all of this stuff before the packets even get near the computer's CPU and the operating system's IP driver software. The concept has been termed a "TCP/IP Offload Engine", or TOE. The idea is that some custom hardware on the network adaptor deals with these overhead functions as the data comes in and passes to the IP driver a set of packets which come with a number of guarantees (they'll be in the right order, the delivery acknowledgements will have been sent already, and so on). The card's driver software will, of course, have been written such that the computer's own IP driver no longer tries to do these now-unneeded functions. This means that the IP software can be more streamlined, since it can assume packets will be in the right order, that they will have valid checksums, and so on. So it will work faster, use less memory, and be easier for the driver writer to develop, because he or she won't have to muck about with checksum calculations or storing out-of-order packets.
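The split between the card and the streamlined IP driver can be caricatured like this. The flag names below are hypothetical (loosely modelled on the per-buffer checksum status a real driver records, not any actual driver API): when the offload engine vouches for a packet, the receive path skips the expensive software check entirely.

```python
from dataclasses import dataclass

# Hypothetical status values the card's driver might set on each
# received buffer; purely illustrative, not a real driver interface.
CHECKSUM_NONE = 0          # card did nothing; software must verify
CHECKSUM_UNNECESSARY = 1   # offload engine already verified the checksum

@dataclass
class Packet:
    payload: bytes
    checksum_ok: bool       # what a full software verification would find
    hw_status: int = CHECKSUM_NONE

def software_verify(pkt: Packet) -> bool:
    """Stand-in for the per-packet checksum pass the CPU normally runs."""
    return pkt.checksum_ok

def accept(pkt: Packet) -> bool:
    """Streamlined receive path: trust the TOE's verdict when it has one,
    and only fall back to the software check otherwise."""
    if pkt.hw_status == CHECKSUM_UNNECESSARY:
        return True                   # card has already vouched for it
    return software_verify(pkt)
```

The saving isn't in the `if` statement, of course; it's that `software_verify` (and its re-ordering and acknowledgement cousins) simply never runs for the vast bulk of the traffic.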
At present, TOE implementations are proprietary, as there's no standard yet established (though, as with traditional Ethernet cards, vendors such as Adaptec have produced TOE-capable Ethernet chipsets that other network adaptor and motherboard vendors can OEM, so we'll find the same vendors' chipsets cropping up again and again). The lack of standards isn't a problem in principle (after all, all Ethernet cards use proprietary hardware and firmware – the reason we have driver software is to interface the proprietary stuff to the operating system's standard function calls), though it does mean that the Open Source world will lag a little behind the Windows world until the driver writers have caught up.
In practice, though, using TOE is little different from using a standard network adaptor. Just as you would with your traditional old Ethernet card, you stuff in the card, install the driver, and it works.