This is part two of a two-part article. Part one can be found here.

Modular air conditioning

For enterprises considering localised cooling, APC's in-row units are available in both air- and water-cooled models that provide from 8kW to 80kW of cooling output. The smaller APC units -- the ACRC100 and the ACSC100 -- are the same height and depth as a standard 42U rack, but half the width. The company's larger ACRP series retains the full 42U-rack form factor but pushes out far more air than the smaller units do.

Liebert is another vendor offering localised cooling solutions. Its XD series in-row and spot-cooling systems are similar in form and function to their APC counterparts. Liebert also offers units that mount on top of server racks, drawing hot air up and out. Both APC and Liebert have rear-mounted rack ventilation and cooling units that exhaust hot air into the plenum or cool the air before passing it back into the room.

The modularity of these systems translates to significant start-up savings. Whereas whole-room solutions must be sized for anticipated growth, localised cooling units can be deployed as needed. A large room that starts out only 30 percent full will require only 30 percent of projected full-room cooling hardware upon initial deployment.
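To put that in concrete terms, here is a rough back-of-the-envelope sketch of the day-one hardware difference; the load, fill and unit-capacity figures are illustrative assumptions rather than vendor specifications.

    # Back-of-the-envelope comparison of day-one cooling hardware:
    # whole-room sizing versus modular in-row deployment.
    # All figures are illustrative assumptions, not vendor data.
    import math

    projected_full_load_kw = 400   # anticipated heat load at full build-out
    current_fill_fraction = 0.30   # the room starts out 30 percent full
    unit_capacity_kw = 30          # nominal capacity of one in-row unit

    # Whole-room approach: size for the projected load on day one.
    whole_room_units = math.ceil(projected_full_load_kw / unit_capacity_kw)

    # Modular approach: deploy only what the current load requires.
    current_load_kw = projected_full_load_kw * current_fill_fraction
    modular_units = math.ceil(current_load_kw / unit_capacity_kw)

    print(f"Whole-room sizing: {whole_room_units} in-row units on day one")
    print(f"Modular in-row:    {modular_units} in-row units on day one")

With these assumed figures, the modular approach starts with four units rather than fourteen, and further units are added only as racks are populated.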

There are downsides to these units, to be sure. The water-cooled systems require much more piping than centralised units do, and the water pipes must be run within the ceiling or floor of the room. The air-cooled units can dump large heat loads into the plenum above the datacentre, creating airflow and heat-exhaust problems. Moreover, because these solutions are built to provide just enough just-in-time cooling, the failure of a single unit can quickly leave nearby racks without adequate cooling.

Whether you're rolling out a new energy-efficient datacentre or retrofitting one already in place, a comprehensive understanding of your building's environmental systems and the expected heat load of the datacentre itself is required before implementing any localised cooling solution.

Cool to the core

For some enterprises, individual high-load servers bring the kind of heat worthy of a more granular approach to cooling. For such instances, several vendors are making waves with offerings that bring a chill even closer than nearby racks: in-chassis cooling.

SprayCool's M-Series is a water-cooling solution that captures heat directly from the CPUs and transfers it to a cooling system built into the rack; a water loop then carries that heat out of both rack and room entirely. Cooligy is another vendor offering a similar in-chassis water-cooling solution.

SprayCool's G-Series takes the direct approach to cooling a step further: It functions like a car wash for blade chassis, spraying nonconductive cooling liquid through the server to reduce heat load.

Enterprises intrigued by in-chassis cooling should keep in mind that these solutions are necessarily more involved than whole-room or in-row cooling units and have very specific server compatibility guidelines.

The high-voltage switch

Virtualisation and improved cooling efficiency are not the only ways to bring down the energy bill. One of the latest trends in datacentre power reduction -- at least in the US -- is to use 208-volt power rather than the traditional 120-volt power source.

When the US constructed the first electrical grid, light bulb filaments were quite fragile and burned out fast on 220-volt lines. Dropping the voltage to 110/120 volts increased filament life -- hence the US standard of 120 volts. By the time Europe and the rest of the world built their power grids, advances in filament design had largely eliminated the high-voltage problem, which is why 208/220-volt power systems prevail across most of the rest of the globe.

What's important to note is that each time voltage is stepped down, a transformer is used, and power is lost. The loss may be as little as 1 or 2 percent per transformer, but over time, and across a large datacentre, those small penalties add up. By switching to a 208-volt system, you need one fewer transformer in the chain, thereby reducing wasted energy.
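To see how those small percentages compound, the sketch below chains transformers at an assumed 2 percent loss each; the load figure and chain lengths are hypothetical, not measurements from any particular facility.

    # Rough illustration of cumulative transformer losses in a power chain.
    # The per-transformer loss, chain lengths and load are assumptions
    # chosen for illustration only.
    def delivered_fraction(num_transformers, loss_per_transformer=0.02):
        """Fraction of input power that survives the transformer chain."""
        return (1 - loss_per_transformer) ** num_transformers

    facility_load_kw = 500  # hypothetical IT load

    # Compare, say, a three-stage chain against one with a stage removed.
    for stages in (3, 2):
        wasted_kw = facility_load_kw * (1 - delivered_fraction(stages))
        print(f"{stages} transformers: ~{wasted_kw:.1f} kW wasted at a "
              f"{facility_load_kw} kW load")

Under these assumptions, removing one transformer from the chain recovers roughly 10kW of continuous waste at a 500kW load.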

Moreover, 208/220-volt systems are safer and more efficient: delivering the same wattage at 120 volts requires more current than at 208/220 volts, which increases both the risk of injury and the power lost in transit.
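The arithmetic behind that claim is straightforward: for a fixed wattage, current falls as voltage rises, and resistive losses in the wiring scale with the square of the current. The sketch below runs the numbers for a hypothetical load and wiring resistance.

    # Current drawn, and resistive line loss, for the same load at 120 V
    # and at 208 V. The load and wiring resistance are illustrative
    # assumptions, not measured values.
    load_watts = 2400           # hypothetical per-rack load
    line_resistance_ohms = 0.1  # hypothetical round-trip wiring resistance

    for volts in (120, 208):
        current_amps = load_watts / volts                            # I = P / V
        line_loss_watts = current_amps ** 2 * line_resistance_ohms   # I^2 * R
        print(f"{volts} V: {current_amps:.1f} A drawn, "
              f"~{line_loss_watts:.1f} W lost in the wiring")

With these figures, the 120-volt circuit draws 20 amps and dissipates roughly three times as much power in the wiring as the 208-volt circuit.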

For those considering capitalising on the switch, rest assured that nearly all server, router and switch power supplies can handle 120- or 208-volt power, and most are auto-switching, meaning no modifications are necessary to move that gear to 208 volts. Of course, the benefits of 208-volt power in the datacentre are not the kind to cause a sea change. But as energy costs continue to rise, the switch to 208 volts will become increasingly attractive.