"The blade will become the dominant server architecture in the next two to four years - it's easier to manage, takes less space and has better cooling," says Douglas J Erwin, chairman and CEO of pioneering blade developer RLX.

"It's like it was with rack servers versus towers a few years ago," he adds. "There's still people buying towers of course, but we don't have to evangelise blades today. Two years from now it'll be general purpose for all applications."

RLX first hit the headlines three years ago, when it took low-power processors developed by Transmeta for mobile computing and adapted them for use in high-density servers, where power consumption and the consequent heat generation are a major challenge.

Software is the key
Since then, a freshly-galvanised Intel has largely caught up on the power front and RLX has moved over to the mainstream. However, Erwin says that as blade computing has evolved, the software has become much more significant than the choice of processor.

"We're now on the full line of Intel processors and will have both 32 and 64-bit blades. We're on our fifth generation of blades now, so we've had a lot of practice, and we're working on our sixth generation for release in the first half of next year," he says.

"We were the first to announce Infiniband on a blade, and we have a dedicated network for management, otherwise you impact the workload. But we've realised though that we can never be more than six months ahead of IBM on hardware, so we differentiate on software.

"Our software has really put us on the map. We have evolved it to be modular - you can buy it cafeteria-style, depending on usage - and have won several bake-offs against IBM and HP because of the way our software works and what it can do."

RLX recently announced a new version of its blade management software, called Control Tower 6G, which enables users to remotely manage and provision blade servers as one pool of resources.

Erwin says the key is that the software can provision, install, image and, to a degree, even patch servers en masse. It can also make some changes automatically, notifying admin staff of a problem and then again once it has fixed it for them.
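A minimal sketch of that notify-fix-notify pattern is below. RLX has not published a Control Tower API, so the pool, the provisioning call and the health probe are all hypothetical stand-ins for the behaviour Erwin describes:

    # Illustrative only: every name here is a hypothetical stand-in, not
    # the Control Tower 6G API, which RLX has not published.
    POOL = ["blade-%03d" % n for n in range(1, 9)]   # one pool of resources
    GOLDEN_IMAGE = "ct6g-golden-v1"                  # image applied en masse

    def notify(msg):
        print("[admin alert]", msg)                  # stand-in for mail/SNMP trap

    def provision(blade, image):
        print("imaging %s with %s" % (blade, image)) # stand-in for a PXE install

    def check_health(blade):
        return blade != "blade-003"                  # stand-in for a real probe

    def manage_pool():
        # Image every blade in the pool from one master image, as one operation.
        for blade in POOL:
            provision(blade, GOLDEN_IMAGE)
        # Notify on a fault, fix it automatically, then notify again once fixed.
        for blade in POOL:
            if not check_health(blade):
                notify("%s failed its health check" % blade)
                provision(blade, GOLDEN_IMAGE)       # automatic remediation
                notify("%s reimaged and back in the pool" % blade)

    manage_pool()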

Capacity on demand
"Customers want capacity on demand, but blades today are really still servers, just smaller and better managed," he adds. "The next generation of blades will be more compute-centric, so customers will be able to dial up more compute power. It will have even smaller compute units to provision through a low-latency interconnect - the concept is similar to multitasking but the execution is very different."

Of course, RLX is not alone in recognising these needs, and other vendors are moving in a similar direction. For example, HP has announced plans to integrate VMware's virtual machine software into its BL20p blade servers to increase server utilisation and simplify deployment.

And while many of today's blades are non-standard, future models from the likes of Dell, HP and IBM should comply with a range of standards, just as Erwin says RLX blades do today. These include IPMI, PXE for remote booting, and SNMP.

"The next step is for us to walk into a heterogeneous environment and manage IBM and HP blades, because they use the same standards, as do some Dell servers," Erwin adds. "Going forward, they will all comply to standards, especially IPMI, and once you comply to standards I don't care what you look like."

The adoption of blade technology is also changing the way systems are built. At a physical level a blade is still a server, but as you remove the parts that a pizza-box server needs but a shared chassis makes redundant, issues such as cooling become even more pressing, and there is more scope for system components to be modular and swappable.

Erwin argues that the real changes in thinking and architecture are required at the management and application levels. For example, he says that a typical RLX environment has 800 servers with four or five spares to cover failures, all managed as a single pool regardless of physical location.

"You could even fail from Chicago to a spare in London, or run without spares and move blades from secondary applications to critical ones as needed," he explains.

Evolving a compute pool
And he says that as blades evolve, software must turn them into a pool of finely-ground compute resource, just as software can virtualise disks on a SAN into a pool of SCSI blocks.
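To make the analogy concrete, here is a small sketch of what that finely-ground allocation could mean: a caller asks the pool for capacity units and never names a blade, just as a SAN volume manager hands out blocks without naming a disk. No shipping blade software exposes exactly this interface; it is an assumption for illustration:

    # Hypothetical compute pool: capacity is handed out in units, not blades.
    capacity = {"blade-a": 4, "blade-b": 4, "blade-c": 4}  # free units per blade

    def allocate(units_needed):
        """Grant units from wherever they are free; the caller never picks a blade."""
        grant = []
        for blade in sorted(capacity, key=capacity.get, reverse=True):
            while capacity[blade] > 0 and len(grant) < units_needed:
                capacity[blade] -= 1
                grant.append(blade)
        if len(grant) < units_needed:
            raise RuntimeError("pool exhausted")
        return grant

    print(allocate(6))  # spans blades, e.g. four units on blade-a, two on blade-b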

"Customers want standard building blocks, to put them in and use software as the glue to build systems," he says. "The applications that needed the density and could put users through that were mostly edge applications, but now you're seeing it for mainstream applications too.

"Today you add instances of the same application to share the user load. Can we dynamically allocate processor power? No, because there's no applications or middleware yet written to do that, only specialist technical applications can do it, for example, or applications from the old fault-tolerant world."

This middleware is coming though, along with the necessary operating system support - and Erwin notes here that while it's still 70 percent Linux for RLX, the market for Windows on blades is growing.

"We haven't seen any changes in operating system architecture for 15 years because it was all scale-out," he says. "Now some Linux flavours have the opportunity to access discrete elements, and the operating system is then the job scheduler and fail-over manager.

"Notice the slight changes in how Windows 2003 works, too. Windows never really addressed scale-out or understood clustering, it was just fail-over or redundancy. Now that's changing."