Many data centres are up against the maximum electric power available to them from their utility. Others are facing management challenges: the time it takes to deploy new capacity and to manage existing capacity and systems. And gains made by virtualising and consolidating servers are often lost again as more gear is added.

The demand for more CPU cycles and petabytes of storage won't go away. Nor will budget concerns, or the cost of power, cooling and space. Here's a look at how vendors, industry groups and savvy IT and facilities planners are meeting those challenges, plus a few ideas that may still be a little blue sky.

Location, location, location

Data centres need power. Lots of it, and at a favourable price.

Facilities also need cooling, since all that electricity going to and through IT gear eventually turns into heat. Typically, this cooling requires yet more electrical power. One measure of a data centre's power efficiency is its PUE (Power Usage Effectiveness): the total power consumed by the facility for IT, cooling, lighting and so on, divided by the power consumed by the IT gear alone. The best PUE is as close as possible to 1.0; PUE ratings of 2.0 are sadly all too typical.
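The arithmetic is simple enough to sketch in a few lines of Python (the wattages below are made-up, illustrative figures, not measurements from any particular facility):

```python
def pue(total_facility_kw: float, it_gear_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_gear_kw

# Hypothetical facility: IT gear draws 1,000 kW, while cooling, lighting,
# power distribution losses and the rest add another 800 kW.
print(pue(total_facility_kw=1800, it_gear_kw=1000))  # 1.8 -- all too typical
print(pue(total_facility_kw=1150, it_gear_kw=1000))  # 1.15 -- very efficient
```

A facility running at a PUE of 2.0 is burning a full watt of overhead for every watt that actually reaches a server.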

"You want to be in a cool dry geography with cheap power, like parts of the Pacific NorthWest, for example, Facebook's data centre in Oregon. Or in a very dry place, where you can get very efficient evaporative cooling," says Rich Fichera, VP and Principal Analyst at Forrester Research.

Companies like Apple, Google and Microsoft, along with hosting companies, have been sussing out sites that offer affordable power and cooling (along with low exposure to earthquakes and dangerous weather extremes, available and affordable real estate, good network connectivity and good places to eat lunch).

Google, with an estimated 900,000 servers, dedicates considerable attention to data centre efficiency and other best practices, such as using evaporative cooling, where and when possible, to minimise how often energy-hogging "chillers" run. When in use, chillers "can consume many times more power than the rest of the cooling system combined".

Evaporative cooling still requires power, but much less. Google's new facility in Finland "utilises sea water to provide chiller-less cooling". According to the company, "Google-designed data centres use about half the energy of a typical data centre."

Renewable, carbon-neutral power

In addition to affordability, many planners are looking for power sources that don't consume fuel or that otherwise have a low carbon footprint.

For example, Verne Global is cranking up a "carbon-neutral data centre" in Iceland, currently scheduled to go live in November 2011, powered entirely by a combination of hydroelectric and geothermal sources, according to Lisa Rhodes, VP Marketing and Sales at Verne Global. About 80% of the power will come from hydroelectric.

Power in Iceland is also abundant, Rhodes points out: "The current power grid in Iceland offers approximately 2900 Megawatts of power capacity and the population of Iceland is roughly 320,000 people. Their utilisation of the total available power is thought to be in the range of 300MW. Aluminum smelters are currently the most power-intensive industry in Iceland, leaving more than sufficient capacity for the data centre industry."
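Taken at face value, Rhodes' figures leave a lot of headroom; a back-of-the-envelope check (her numbers, with the subtraction made explicit):

```python
# Back-of-the-envelope headroom check using the figures quoted by Rhodes.
grid_capacity_mw = 2900       # approximate capacity of Iceland's power grid
population_draw_mw = 300      # rough estimate of the population's utilisation

# Capacity left over for industry -- smelters, data centres and the rest.
print(grid_capacity_mw - population_draw_mw)  # 2600 MW
```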

Iceland's year-round low ambient temperatures permit free cooling, says Rhodes. "Chiller plants are not required, resulting in a significant reduction in power cost. If a wholesale client should decide they want cooling at the server, there is a natural cold water aquifer on the campus that can be used to accommodate their needs."

Depending on where the customer is, the trade-off for locating data centres based on power, cooling or other factors can of course be incrementally more network latency: the delay caused by signals travelling through hundreds or thousands of miles of fibre, plus possibly another network device or two. For example, one-way transit from the data centre to London or Europe adds 18 milliseconds, and to the United States about 40 milliseconds.
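Those figures are in line with what propagation delay alone would predict: light in optical fibre travels at roughly two-thirds the speed of light in a vacuum, or about 5 microseconds per kilometre one way. A minimal sketch (the route lengths below are illustrative assumptions, not Verne Global's actual cable routes):

```python
# One-way propagation delay over optical fibre.
# Light in fibre covers roughly 200,000 km/s (about two-thirds of c),
# i.e. about 5 microseconds per kilometre, one way.
MICROSECONDS_PER_KM = 5.0

def one_way_delay_ms(fibre_route_km: float) -> float:
    """Approximate one-way propagation delay in milliseconds."""
    return fibre_route_km * MICROSECONDS_PER_KM / 1000.0

# Illustrative (assumed) fibre-route lengths:
print(one_way_delay_ms(2500))  # ~12.5 ms for a ~2,500 km route to London
print(one_way_delay_ms(7000))  # ~35 ms for a ~7,000 km route to the US east coast
```

Switching, routing and less-than-direct cable paths account for the rest of the quoted 18 and 40 milliseconds.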


It's not just the heat, it's the humidity

"Dry places" aren't necessarily in cool locations. i/o Data Centers' Phoenix facility, which according to the company is one of the world's largest data centres, is located in Arizona.

"One of the benefits of the desert is it's very dry," says Anthony Wanger, i/o President. "It's easier to remove heat in a dry environment, which makes Arizona an ideal location."

According to the company, the Phoenix facility employs a number of techniques and technologies to reduce energy consumption and improve energy efficiency.

"We are doing everything possible to be energy efficient at all of our data centres, says Wanger. "We separate cold air supply and warm air return." To get the heat away, says Wanger, "There is still no more efficient means of moving energy than through water. Air as a liquid is much less dense and less efficient. Once we get that hot air, we dump it into a closed loop system and exchange it into an open loop system, where we can remove the heat. We also use thermal storage. We can consume energy at night when it's sitting in the utility's inventory."

Also, says Wanger, "We have to put humidity into the air. The old way was to have a big metal halide lamp baking water. The humidification solution was to fire up a heat lamp and phase transition it to humidity. Now we use a process called ultrasonic humidification, which uses high-frequency electronic signals to break water surface tension and dissipate cool water vapour into the air; this takes about 1/50th the amount of energy."
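To put that 1/50th figure in perspective: a heat-based humidifier has to supply roughly the latent heat of vaporisation, about 2.26 MJ for each kilogram of water it turns into vapour, while an ultrasonic unit only has to atomise the water into a fine mist. A quick sketch using Wanger's ratio (the per-kilogram numbers are illustrative, not i/o's measurements):

```python
# Energy needed per kilogram of water added to the air.
LATENT_HEAT_MJ_PER_KG = 2.26          # heat-based humidification must supply roughly this

# Per the article, ultrasonic humidification uses about 1/50th as much energy,
# since it only breaks the water's surface tension rather than boiling it.
ultrasonic_mj_per_kg = LATENT_HEAT_MJ_PER_KG / 50

print(LATENT_HEAT_MJ_PER_KG)           # 2.26 MJ/kg (heat-based)
print(round(ultrasonic_mj_per_kg, 3))  # ~0.045 MJ/kg (ultrasonic, per the 1/50 figure)
```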