This is part one of a two-part article. Part two can be found here.

Datacentre design is undergoing a significant transformation. The fundamentals of the datacentre -- servers, cooling systems, UPSs -- remain the same, but their implementations are rapidly changing, thanks in large part to the one variable cost in the server room: energy.

Still in its infancy, though growing up fast, server virtualisation is increasingly being relied on as a power-saving measure for enterprises rolling out cost-effective datacentres or retrofitting existing ones to cut power costs considerably. What may come as a surprise, however, is that hidden energy costs await those who do not plan the layout of their virtualised datacentre wisely. And the chief culprit is heat.

Consolidating the workload of a dozen 1kW servers onto one 2kW machine slashes total power draw, but it also means that most virtualisation hardware platforms produce more heat per rack unit than the individual servers did. Moreover, collecting several virtualised servers into a single, high-density rack can create a datacentre hot spot, causing the rack and adjacent ones to run at significantly higher temperatures than the rest of the room, even when the room is centrally cooled to 68°F (20°C).
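The arithmetic behind that warning is worth spelling out. A rough sketch, using the article's 1kW and 2kW figures but with hypothetical rack-unit footprints (real chassis sizes and power draws vary):

```python
# Illustrative heat-density arithmetic for server consolidation.
# Rack-unit footprints below are assumptions, not measured values.

servers_before = 12       # a dozen standalone servers
power_each_w = 1000       # 1 kW drawn by each
units_each = 2            # assume each standalone box occupies 2U

host_power_w = 2000       # one 2 kW virtualisation host
host_units = 2            # assume a dense 2U host chassis

density_before = (servers_before * power_each_w) / (servers_before * units_each)
density_after = host_power_w / host_units

print(f"Before: {density_before:.0f} W per rack unit")  # 500 W/U
print(f"After:  {density_after:.0f} W per rack unit")   # 1000 W/U
```

Total power falls from 12kW to 2kW, yet the heat that remains is concentrated into far fewer rack units, which is exactly how hot spots form.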

Blade servers are notorious for this because they pack high-wattage power supplies and move an enormous amount of air through the chassis. Virtualising them will indeed significantly reduce datacentre energy costs, but it won't provide a complete solution for reining in your datacentre's energy needs. For that, you have to retrofit your thinking about cooling.

Cooling on demand

For the most part, big, beefy air conditioning units that push air through dropped ceilings or raised floors remain regular fixtures in the datacentre.

But for enterprises looking for energy efficiency or seeking to retrofit for added energy relief, localised cooling -- mainly in the form of in-row cooling systems -- is making a splash.

"We originally designed our in-row cooling solutions to address hot spots in the datacentre, specifically for blade servers. But it's grown far beyond that," says Robert Bunger, director of business development for North America at American Power Conversion (APC). "They've turned out to be very efficient, due to their proximity to the heat loads."

Bucking the "big air conditioner" paradigm, in-row cooling systems such as APC's are finding their place between racks, pumping out cold air through the front and pulling in hot air from the back. Because cooling is performed by units just inches away from the source rather than indiscriminately through the floor or ceiling, datacentre hot spots run cooler.

What's more, rather than relying on a central thermostat, these units function autonomously, tapping temperature-monitoring leads placed directly in front of a heat source to ensure that the air remains within a specified temperature range. If a blade chassis starts running hot because of an increased load, the in-row unit ramps up its airflow, dropping the air temperature to compensate.
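That behaviour is essentially a closed feedback loop: sample the inlet temperature at the heat source, then ramp airflow up or down to hold a target band. A minimal sketch follows; the thresholds, step sizes, and function names are all hypothetical, not APC's actual control logic:

```python
# Toy control loop for an in-row cooling unit: read the temperature
# lead at the heat source, nudge fan output to keep supply air within
# a target band. All values are invented for illustration.

TARGET_C = 22.0      # desired supply-air temperature
DEADBAND_C = 1.0     # no adjustment while within this band
STEP_PCT = 10        # fan-speed change per control cycle

def adjust_fan(current_temp_c: float, fan_pct: int) -> int:
    """Return the new fan speed (0-100%) after one control cycle."""
    if current_temp_c > TARGET_C + DEADBAND_C:
        fan_pct = min(100, fan_pct + STEP_PCT)   # rack running hot: ramp up
    elif current_temp_c < TARGET_C - DEADBAND_C:
        fan_pct = max(0, fan_pct - STEP_PCT)     # idle period: throttle back
    return fan_pct

# A blade chassis under rising load drives the fan up cycle by cycle:
fan = 50
for temp in (23.5, 24.2, 25.0):
    fan = adjust_fan(temp, fan)
print(fan)  # 80
```

The same deadband logic is what lets the unit throttle back during idle periods, which is where the energy savings described next come from.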

Moreover, the unit ratchets down its cooling activities during idle times, saving even more money. All told, the cost-cutting benefits of localised cooling are quickly proving convincing, so much so that Gartner predicts in-rack and in-row cooling will become the predominant cooling method for the datacentre by 2011.
