Can the server take on the humble characteristics of a razor blade? Judging by the comments of proponents of ‘blade’ servers, the analogy is appropriate and persuasive. Why not reduce server technology to a motherboard that can be hot-swapped as and when needed, while introducing simpler management through shared cabling, power, cooling, storage and so on? Blades follow the general trend in server architecture of moving from large to small elements and from centralised to distributed resources. They give certain aspects of the server – memory, storage, networking, management and input/output (I/O) – their independence, while at the same time re-establishing the importance of the frame or chassis as distinct from the elements inside it.

Blades are typically Intel-based processor boards designed to be added incrementally to a rack-mounted chassis, taking up less space and consuming less power than conventional rack servers. Most support Linux, Windows and NetWare and, in the case of Sun Microsystems, Solaris running on SPARC and x86 processors.

Startups such as RLX Technologies and Egenera kick-started the market three years ago, and it was firmly established last year with the arrival of HP/Compaq, Dell, IBM and Sun. IBM, it is worth noting, is jointly developing ‘standard’ blades with Intel, using IBM chipsets. Intel, meanwhile, says it will deliver its own blades next year, to be rebranded by systems integrators.

Architecturally, blades benefit from a convergence of developments: clustering over high-speed links, interconnects, chips, board-level technology, and management and provisioning software.

The core of the blade is the processor, and Intel seems a clear winner at this stage – striking, given that the early buzz around RLX Technologies was built on its use of low-power mobile Transmeta (Crusoe) chips. In 2001 RLX unveiled a system that could fit more than 300 servers into a rack that would normally house 42 conventional rack servers. Transmeta forced Intel to react, and in November 2001 it delivered the ‘Tualatin’ Pentium III. Perhaps to keep Intel on its toes, leading vendors are taking varied approaches to future development: IBM promises a PowerPC-based blade, for example, and Sun plans to use AMD Athlon XP processors.
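The density arithmetic is straightforward (the figures here are the commonly reported ones for RLX’s early systems): at 24 blades per 3U chassis, a standard 42U rack takes fourteen chassis, or 336 blade slots, against the 42 1U servers the same rack would otherwise hold – roughly an eight-fold gain.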

No standard
But a standard blade architecture is some way off. HP laid down a gameplan when it adopted CompactPCI for its Powerbar blades, but critics counter that CompactPCI is old, expensive technology: its parallel bus structure is giving way to high-performance switched-fabric architectures, and cPCI was designed for telecoms and industrial applications rather than traditional data-centre computing. The upshot is that different vendors’ blades use different connectors and signal routing, so customers cannot mix blades from different vendors.

For many, InfiniBand is the technology designed to replace cPCI: it provides a universal interconnect fabric well suited to situations where space is limited and multiple fabrics are not possible. The InfiniBand Trade Association includes Compaq, Dell, HP, IBM, Intel, Microsoft and Sun.

There are no clear winners in blade architecture yet, but there is a clear trend towards sacrificing low power consumption and high density in favour of more powerful blades. This is most evident in the dual- and quad-processor blades, designed for full-scale applications, shipping or about to ship from HP, RLX, IBM and Egenera.

Vendors are keen not to miss the blade bandwagon but are also wary of cannibalising existing product lines, which is perhaps why only startups such as Egenera are pitching blades firmly against large Unix boxes.

Janet Day, IT director at law firm Berwin Leighton, says the firm has been using HP blade servers for two years: “As a cheap way of doing a lot of low-level server work they are pretty usable and worthwhile. Once you have worked out how to install them they are pretty easy to manage as well. The current downside is the processing power, but we are watching out for improvements on that front. My advice to others is that if you can see a suitable application, it is likely that you can get the space and management benefits they offer.”

Meanwhile, some users remain concerned about cooling requirements. Adam Newby, IT manager of NewsNow Publishing, says: “For applications which benefit from parallelisation (simulations, rendering, etc.), blades look like a great idea because of the high density and lower cost. But my main worry would be about cooling – in our experience, even today’s standard 1U Intel-based servers run pretty hot, so cramming a bunch of 3GHz P4 Xeons into a tight space will generate a lot of heat.”