While blade servers can offer tremendous benefits for the data centre, early adopters of the technology warn fellow IT implementers to plan very carefully.

"The impact on facilities wasn’t considered when blades first came out, so you have to do some serious capacity planning and architecture development before deploying them," says Brian Smith, data centre manager at The Cerner Corp.

Blades are self-contained servers that support high-density computing. Unlike their stand-alone predecessors, they share components, such as a monitor, with other blades, which eases management and allows for organised cabling and a smaller server footprint in the data centre.

Cerner, which hosts applications for hospitals, has been working with blade servers in its seven data centres for the past three years and has almost 1,200 in use today.

Smith says he has learned first-hand the promise and perils of the technology. On the upside, blade servers allow companies to consolidate their operations and employ advanced management tools such as virtualisation. On the other hand, blade servers are notorious energy drains that wreak havoc on data centres' power and cooling resources. "Data centres can cook if they aren’t prepared for the high density," Smith says.

Blades have bigger power needs

Jeff Stein, director of professional services at InteleNet Communications, agrees. "The typical power requirement for a standard server is 120V power. The typical requirement for a blade is 208V power. Some facilities just can’t offer that," he says.

InteleNet, a managed service provider, has 500 blade servers split between its main facility in Irvine, which it owns, and another facility in Phoenix. Stein just completed "a significant power expansion project" to support the blades. "In Irvine, the original construction and electrical designs for the facility were able to deliver a certain number of watts per square foot on average. Recent hardware developments, such as the blade servers, have forced us to enhance the infrastructure of this data centre to support the increasing electrical and cooling requirements," he says.

He admits the team ran into challenges when they first deployed the blades almost two years ago.

"We run a data centre, deal with lots of power requirements and we still made an error when we bought our first chassis," he says.

Stein says the team purchased power distribution units and cabling that were much larger than anticipated. "This limited what additional equipment could be installed effectively in the same cabinet with the blades. We made sure to take note so that we never make that mistake again," he says.

Watch this space

He says another common mistake data centre teams make with blade servers involves space allocation.

"You have this perception that because the blade servers are smaller and vertically mounted, you’ll be able to put more in a rack. That’s not always true," he says.

Stein says that traditional server chassis hold one horizontally mounted server per rack unit. With blades, the chassis tend to be seven or nine rack units and deliver 14 independent blades. However, this higher server density also brings related increases in power and cooling requirements.
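Stein's point can be made concrete with some back-of-the-envelope arithmetic. The wattage figures below are illustrative assumptions, not vendor specifications; only the 7U chassis holding 14 blades comes from his description:

```python
# Rough rack-density comparison: 1U servers vs. blade chassis.
# Wattage figures are illustrative assumptions, not vendor specs.

RACK_UNITS = 42                 # a common full-height rack

# Traditional 1U servers: one horizontally mounted server per rack unit
u1_servers = RACK_UNITS // 1                        # 42 servers
u1_watts_each = 350                                 # assumed draw per server
u1_total_watts = u1_servers * u1_watts_each

# Blade chassis: 7U per chassis holding 14 blades (per Stein's description)
chassis_units = 7
blades_per_chassis = 14
chassis_count = RACK_UNITS // chassis_units         # 6 chassis fit
blade_servers = chassis_count * blades_per_chassis  # 84 blades
chassis_watts = 4500                                # assumed draw per loaded chassis
blade_total_watts = chassis_count * chassis_watts

print(f"1U servers: {u1_servers} servers, ~{u1_total_watts} W per rack")
print(f"Blades:     {blade_servers} servers, ~{blade_total_watts} W per rack")
```

Under these assumptions the blade rack holds twice the servers but draws nearly twice the power, which is exactly the density trap Stein and Mann describe.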

Andi Mann, analyst at Enterprise Management Associates, agrees that blade servers can be deceiving. "You can’t rack up two or three next to each other; sometimes you can’t even fill up a whole rack," he says.

He encourages data centre teams to plot out their equipment needs. "You need tools to help you understand your hot spots and where you need to run power. Remember, you’re co-locating a lot more power drain into a single circuit, and you need to ensure you aren’t overloading the system."
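The circuit-level check Mann describes might be sketched as follows. The circuit rating, derating factor and per-chassis draws are assumptions for illustration:

```python
# Sketch of a circuit-loading check of the kind Mann describes.
# All ratings and loads below are assumed example figures.

def circuit_headroom(volts: float, amps: float, loads_watts: list,
                     derate: float = 0.8) -> float:
    """Return remaining watts on a circuit after continuous-load derating.

    A negative result means the circuit would be overloaded.
    The 0.8 derate reflects the common 80% rule for continuous loads.
    """
    usable = volts * amps * derate
    return usable - sum(loads_watts)

# Example: a 208 V / 30 A circuit feeding two blade chassis at ~2,200 W each
remaining = circuit_headroom(208, 30, [2200, 2200])
print(f"Headroom: {remaining:.0f} W")  # 208 * 30 * 0.8 = 4992 W usable
```

With those figures, two chassis leave under 600 W of headroom on the circuit, which is why Mann stresses plotting loads per circuit before racking a third.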

Planning, other tools for the job

He suggests pre-empting problems by implementing dedicated power and space planning programs such as Visual Network Design’s Rackwise and Aperture Technologies' Vista. He adds that applications such as HP's Insight Power Manager track ongoing consumption.

John Rowell, chief technology officer at OpSource, says not planning ahead of time leads to cost issues down the road. "For larger server deployments, you really have to become a power expert, otherwise you’ll get burned on costs," he says.

OpSource, a software-as-a-service provider, expanded its data centres during 2005 and 2006, increasing its pool of blade servers to more than 850 and sending power demand through the roof. Coupled with the rising price of power during that time, he says, costs rose to more than two and a half times their starting level.

Rowell says that because agreements were already in place before that time, OpSource was unable to pass the increases along to customers. "We’ve had to soak up a lot of those fees," he says.

Although OpSource is a service provider, in-house IT staffers should keep this situation in mind, especially if they engage in chargeback or other budgeting practices that require user departments to pay for the IT resources they consume.

Rowell says there are two primary drivers for a move to blades: the number of servers required to support today’s applications, and the increase in CPU and memory needed to support those applications. "Faster processors and larger memory chips that come in these servers need more power to run. This combination has created a multiplier effect on the power requirements of data centre deployments," he says.

To ensure they are on target when purchasing equipment, Rowell says his team uses software tools to perform a CPU/memory-to-watts analysis. "It typically requires three times the server CPU/memory capabilities to run an application today than was required in 2001," he explains.
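A toy version of such a CPU/memory-to-watts estimate might look like the sketch below. The coefficients are invented for illustration; a real analysis would use measured per-component figures for the specific hardware:

```python
# Toy CPU/memory-to-watts estimate. Coefficients are invented for
# illustration; a real tool would use measured hardware figures.

def estimated_watts(cpu_cores: int, memory_gb: int,
                    watts_per_core: float = 25.0,
                    watts_per_gb: float = 3.0,
                    base_watts: float = 60.0) -> float:
    """Estimate a server's power draw from its CPU and memory capacity."""
    return base_watts + cpu_cores * watts_per_core + memory_gb * watts_per_gb

# Rowell's rule of thumb: applications need roughly 3x the 2001-era
# CPU/memory capacity, which multiplies through to the power budget.
draw_2001 = estimated_watts(cpu_cores=2, memory_gb=4)
draw_now = estimated_watts(cpu_cores=6, memory_gb=12)
print(f"2001-class server: ~{draw_2001:.0f} W")
print(f"Current-class server: ~{draw_now:.0f} W")
```

Even in this crude model, tripling CPU and memory roughly doubles the per-server draw, showing the multiplier effect Rowell describes.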

Cerner's Smith says there are other considerations with blades, too, such as rack size. "Depending on how many chassis you put in a rack, they are getting taller. If you don’t plan for it, the doors into the rooms might not be tall enough. We’ve had to replace some doors," he says. The height also poses a problem for cabling. "We do our cable management overhead to make sure we have enough room," he says.

There are some Band-Aid measures that companies can put in place to ease blade servers' power and cooling burden on the data centre. "You can leave blank floor tiles around the racks to get cold air in; you can get a back door that sends heat out of the room; and you can bring water into the data centre to cool it. There are lots of work-arounds," Smith says.

But he warns, "All that can add up.

So the pluses of using blade servers can get outweighed by the cost of dealing with the high power and cooling needs."

Still, blades are worth the hassle

Although some ITers are quick to point out the costs and other issues inherent in blade servers, they are equally adamant about never going back to stand-alone servers.

Rowell says he wouldn’t give up the strong management tools for his Linux and Microsoft environment. "One of the primary reasons we went with blades is for the virtualisation tools," he says.

His team has striped multiple instances of software across a host of blades so that when a customer has an event, such as the launch of a new product, they can easily ramp up server capacity to support the traffic surge.

Consultant Mann says virtualisation is also beneficial for companies going through mergers and acquisitions. "If your company all of a sudden buys another company, it’s easy to rack up a whole lot of new blades and deploy a virtualised environment to your new employees," he says.

He adds that blades ease management within and between data centres. "Managing dozens of blades is simple. You no longer have to do swivel-chair administration because you can manage them all -- even remote sites -- from a single console," Mann says.

This ease of management allows IT groups to redeploy staff away from tedious server administration tasks, he adds.

Cerner's Smith says the key to balancing the pros and cons of the blade servers is to stay on top of your data centre needs and not be caught off guard. One way he does this: "Our IT team meets with the facilities team every week to make sure everything is running smoothly. We have a list we run through -- are we running out of power, space, cooling?" he says.

For InteleNet's Stein, blade servers have been a godsend. "It’s worth it for us to make any modifications for our blade servers, because we don’t have the headaches we used to have such as unracking servers and tearing them apart to reconfigure them. All we have to do is take out a blade, upgrade it and stick it back in," he says.

Gittlen is a freelance writer based in greater Boston and the author of Computerworld’s "Networking Know-How" column. She can be reached at [email protected]