Datacentre design is undergoing a significant transformation. The fundamentals of the datacentre -- servers, cooling systems, UPSes -- remain the same, but their implementations are rapidly changing, thanks in large part to the one variable cost in the server room: energy.

Still in its infancy, though growing up fast, server virtualisation is increasingly being relied on as a power-saving measure for enterprises rolling out cost-effective datacentres or retrofitting existing ones to cut power costs considerably. What may come as a surprise, however, is that hidden energy costs await those who do not plan the layout of their virtualised datacentre wisely. And the chief culprit is heat.

Consolidating the workload of a dozen 1kW servers onto one 2kW machine slashes total power draw, but most virtualisation hardware platforms produce more heat per rack unit than the individual servers they replace. Moreover, collecting several virtualised servers into a single, high-density rack can create a datacentre hotspot, causing that rack and adjacent racks to run at significantly higher temperatures than the rest of the room, even when the room is centrally cooled to 68 degrees Fahrenheit.
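The density shift is easy to see with a quick back-of-the-envelope calculation. The rack-unit allocations below are assumptions chosen for illustration, not measurements of any particular hardware:

```python
# Back-of-the-envelope heat density in watts per rack unit (W/U).
# Assumed: twelve 1kW legacy servers at 2U each, versus one 2kW
# virtualisation host in a 2U chassis.
legacy_total_w = 12 * 1000        # total draw of the legacy fleet
legacy_units = 12 * 2             # rack units the fleet occupies
host_w, host_units = 2000, 2      # the consolidated host

legacy_density = legacy_total_w / legacy_units   # 500 W/U
host_density = host_w / host_units               # 1000 W/U

print(f"total draw: {legacy_total_w} W -> {host_w} W")
print(f"heat density: {legacy_density:.0f} -> {host_density:.0f} W/U")
```

Under these assumptions, total draw falls by more than 80 percent, yet every rack unit the host occupies now sheds twice the heat, which is exactly how hotspots form.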

Blade servers are notorious for this because they run extremely high-wattage power supplies and move an enormous amount of air through the chassis. Virtualising them will indeed significantly reduce datacentre energy costs, but it won't provide a complete solution for reining in your datacentre's energy needs. For that, you have to retrofit your thinking about cooling.

Cooling on demand

For the most part, big beefy air conditioning units that push air through drop ceilings or raised floors remain regular fixtures in the datacentre, but for enterprises building out for energy efficiency or seeking to retrofit for added energy relief, localised cooling -- mainly in the form of in-row cooling systems -- is making a splash.

"We originally designed our in-row cooling solutions to address hotspots in the datacentre, specifically for blade servers, but it's grown far beyond that," says Robert Bunger, director of business development for North America at American Power Conversion (APC). "They've turned out to be very efficient, due to their proximity to the heat loads."

Bucking the "big air conditioner" paradigm, in-row cooling systems such as APC's are finding their place between racks, pumping out cold air through the front and pulling in hot air from the back. Because cooling is performed by units just inches away from the source rather than indiscriminately through the floor or ceiling, datacentre hotspots run less hot. What's more, rather than relying on a central thermostat, these units function autonomously, tapping temperature-monitoring leads placed directly in front of a heat source to ensure that the air remains within a specified temperature range. If a blade chassis starts running hot due to increased load, the in-row unit ramps up its airflow, dropping the air temperature to compensate.
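That autonomous behaviour amounts to a simple feedback loop. The sketch below is a hypothetical illustration of the idea; the setpoint, gain, and airflow limits are invented figures, not APC's actual control logic:

```python
def inrow_airflow(inlet_temp_c: float,
                  setpoint_c: float = 20.0,
                  min_cfm: float = 300.0,
                  max_cfm: float = 1500.0,
                  gain_cfm_per_deg: float = 200.0) -> float:
    """Proportional airflow response to a temperature-lead reading.

    The unit idles at min_cfm while the monitored intake air sits at
    or below the setpoint, then ramps airflow in proportion to the
    temperature excess, capped at max_cfm. All constants are assumed.
    """
    excess = inlet_tem_c if False else inlet_temp_c - setpoint_c
    if excess <= 0:
        return min_cfm                    # idle: save energy
    return min(max_cfm, min_cfm + gain_cfm_per_deg * excess)

# A blade chassis under load pushes its intake air to 26C,
# so the unit ramps from its 300 CFM idle toward full output.
airflow = inrow_airflow(26.0)
```

A real unit would add hysteresis and ramp limits so the fans don't hunt, but the proportional response captures why cooling effort tracks load rather than a room-wide thermostat.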

Moreover, the unit ratchets down its cooling activities during idle times, saving even more money. All told, the cost-cutting benefits of localised cooling are quickly proving convincing, so much so that Gartner predicts in-rack and in-row cooling will become the predominant cooling method for the datacentre by 2011.

Modular air conditioning

For enterprises considering localised cooling, APC's in-row units are available in both air- and water-cooled models that provide from 8kW to 80kW of cooling output. The smaller APC units -- the ACRC100 and the ACSC100 -- are the same height and depth as a standard 42U rack, but half the width. The company's larger ACRP series retains the full 42U-rack form factor but pushes out far more air than the smaller units do.

Power and cooling giant Liebert is another vendor offering localised cooling solutions. Its XD series in-row and spot-cooling systems are similar in form and function to their APC counterparts. Liebert also offers units that mount on top of server racks, drawing hot air up and out. Both APC and Liebert have rear-mounted rack ventilation and cooling units that exhaust hot air into the plenum or cool the air before passing it back into the room.

The modularity of these systems translates to significant startup savings. Whereas whole-room solutions must be sized for anticipated growth, localised cooling units can be deployed as needed. A large room that starts out only 30 percent utilised will require only 30 percent of projected full-room cooling hardware upon initial deployment.
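The maths behind the staged buildout is simple. The design load and per-unit capacity below are hypothetical figures chosen only to illustrate the savings:

```python
import math

# Staged deployment of in-row cooling (all figures assumed).
design_load_kw = 200      # projected full-room heat load
utilisation = 0.30        # share of the room populated on day one
unit_capacity_kw = 20     # cooling output of one in-row unit

needed_now_kw = design_load_kw * utilisation
units_now = math.ceil(needed_now_kw / unit_capacity_kw)     # 3 units
units_full = math.ceil(design_load_kw / unit_capacity_kw)   # 10 units

print(f"day one: buy {units_now} units instead of {units_full}")
```

The remaining units are purchased only as the room fills, deferring both capital outlay and the running cost of idle cooling capacity.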

There are downsides to these units, to be sure. The water-cooled systems require much more piping than centralised units do, and the water pipes must run within the ceiling or floor of the room. The air-cooled units can dump large heat loads into the plenum above the datacentre, resulting in airflow and heat-exhaust problems. Moreover, because these solutions are built to provide just enough just-in-time cooling, the failure of a single unit can be taxing. In any case, whether you're rolling out a new energy-efficient datacentre or retrofitting one already in place, a comprehensive understanding of your building's environmental systems and the expected heat load of the datacentre itself is required before implementing any localised cooling solution.

Cool to the core

For some enterprises, individual high-load servers bring the kind of heat worthy of a more granular approach to cooling. For such instances, several vendors are making waves with solutions that bring a chill even closer than nearby racks: in-chassis cooling.

SprayCool's M-Cool is a water-cooling solution that captures heat directly from the CPUs and channels it through a cooling system built into the rack, where a water loop carries the heat out of both rack and room. Cooligy is another vendor offering a similar in-chassis water-cooling solution. SprayCool's G Series takes direct cooling a step further, functioning like a car wash for blade chassis, spraying nonconductive cooling liquid through the server to reduce heat load.

Enterprises intrigued by in-chassis cooling should keep in mind that these solutions are necessarily more involved than whole-room or in-row cooling units and have very specific server compatibility guidelines.

The high-voltage switch

Virtualisation and improved cooling efficiency are not the only ways to bring down the energy bill. One of the latest trends in datacentre power reduction -- at least here in the States -- is to use 208-volt power rather than the traditional 120-volt supply.

When the United States rolled out the first electrical grid, light bulb filaments were quite fragile and burned out fast on 220-volt lines. Dropping the voltage to 110/120 volts increased filament life -- thus, the U.S. standard of 120 volts. By the time Europe and the rest of the world built out their power grids, advances in filament design had largely eliminated the high-voltage problem, hence the 208/220-volt power systems across most of the rest of the globe.

What's important to note is that each time voltage is stepped down, a transformer is used, and power is lost. The loss may be as little as 1 percent or 2 percent per transformer, but over time and across a large datacentre, the penalty for transformer use adds up. By switching to a 208-volt system, you need one fewer transformer in the chain, thereby reducing wasted energy. Moreover, 208/220-volt systems are safer and more efficient: more current is required to push the same wattage through 120 volts than through 208/220, which increases both the risk of injury and the power lost in transit.
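The arithmetic behind both claims is easy to check. The 2 percent per-stage loss, the transformer counts, and the 10kW load below are assumed figures for illustration:

```python
# Compounded transformer losses and current draw (assumed figures).
def delivered_power(load_w: float, stages: int,
                    loss_per_stage: float = 0.02) -> float:
    """Power surviving a chain of step-down transformers, assuming a
    fixed fractional loss at each stage."""
    return load_w * (1 - loss_per_stage) ** stages

load_w = 10_000                            # 10kW of IT load
via_120v = delivered_power(load_w, 3)      # three step-downs
via_208v = delivered_power(load_w, 2)      # one fewer transformer

print(f"saved per 10kW of load: {via_208v - via_120v:.0f} W")

# Lower voltage also means more current for the same wattage,
# and resistive loss in the wiring grows with the square of current.
amps_120 = load_w / 120                    # roughly 83 A
amps_208 = load_w / 208                    # roughly 48 A
```

Under these assumptions, dropping one transformer saves nearly 200W per 10kW of load, and the 208-volt circuit carries a bit more than half the current of its 120-volt equivalent.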

For those considering capitalising on the switch, rest assured that nearly all server, router, and switch power supplies can handle 120- or 208-volt power, and most are auto-switching, meaning no modifications are necessary to move that gear to 208 volts. Of course, the benefits of 208-volt power in the datacentre are not the kind to cause a sea change. But as energy costs continue to rise, the switch to 208 volts will become increasingly attractive.

Retrofit for advantage

When it comes to budgeting for the datacentre, most line items can be forecasted. Determining the cost of hardware to build and maintain the room is relatively easy. The cost of providing power to all those systems tends to sway in the breeze, however, and even a small jump in the unit price of power can leave a big mark on an otherwise pristine balance sheet.

And where there is variable cost, there is the potential for competitive advantage.

Virtualised servers, localised cooling solutions, and cost-conscious means of delivering power to the server room are changing the underlying principle of datacentre design to a search for greater energy efficiency. The killing-flies-with-a-shotgun approach to cooling and powering the datacentre has been banished to the history books along with the 85-cent gallon of gas. Retrofitting existing datacentres is never easy or inexpensive, but in this case, the benefits are immediate.