Many data centres are up against the maximum electric power available to them from their utility. Others are facing management challenges: the time it takes to deploy new capacity and to manage existing capacity and systems. And gains made by virtualising and consolidating servers are often lost again as more gear is added.
The demand for more CPU cycles and petabytes of storage won't go away. Nor will budget concerns, or the cost of power, cooling and space. Here's a look at how vendors, industry groups and savvy IT and facilities planners are meeting those challenges, plus a few ideas that may still be a little blue sky.
Location, location, location
Data centres need power. Lots of it, and at a favourable price.
Facilities also need cooling, since all that electricity going to and through IT gear eventually turns into heat. Typically, this cooling requires yet more electrical power. One measure of a data centre's power efficiency is its PUE (Power Usage Effectiveness): the ratio of the facility's total power consumption (IT, cooling, lighting and so on) to the power consumed by IT gear alone. The best PUE is as close as possible to 1.0; PUE ratings of 2.0 are sadly all too typical.
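As a back-of-the-envelope sketch, the PUE arithmetic looks like this (the kilowatt figures are made-up illustrative numbers, not measurements from any real facility):

```python
def pue(total_facility_kw: float, it_gear_kw: float) -> float:
    """PUE = total facility power / IT gear power (the ideal is 1.0)."""
    return total_facility_kw / it_gear_kw

# A facility drawing 2,000 kW overall to run 1,000 kW of IT gear --
# the "sadly all too typical" rating:
print(pue(2000, 1000))   # 2.0

# A highly efficient facility: 1,100 kW overall for the same IT load:
print(pue(1100, 1000))   # 1.1
```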
"You want to be in a cool dry geography with cheap power, like parts of the Pacific NorthWest, for example, Facebook's data centre in Oregon. Or in a very dry place, where you can get very efficient evaporative cooling," says Rich Fichera, VP and Principal Analyst at Forrester Research.
Companies like Apple, Google and Microsoft, along with hosting companies, have been sussing out sites that meet affordable power and cooling criteria (along with other considerations: not being prone to earthquakes or dangerous weather extremes, available and affordable real estate, good network connectivity and good places to eat lunch).
Google, with an estimated 900,000 servers, dedicates considerable attention to data centre efficiency and other best practices, like using evaporative cooling where and when possible to minimise how often energy-hogging "chillers" run. When in use, chillers "can consume many times more power than the rest of the cooling system combined".
Evaporative cooling still requires power, but much less. Google's new facility in Finland "utilises sea water to provide chiller-less cooling". According to the company, "Google-designed data centres use about half the energy of a typical data centre."
Renewable, carbon-neutral power
In addition to looking for affordability, many planners are looking at power sources that don't consume fuel, or otherwise have a low carbon footprint.
For example, Verne Global is cranking up a "carbon-neutral data centre" in Iceland, currently scheduled to go live in November 2011, powered entirely by a combination of hydroelectric and geothermal sources, according to Lisa Rhodes, VP Marketing and Sales at Verne Global. About 80% of the power will come from hydroelectric.
Power in Iceland is also abundant, Rhodes points out: "The current power grid in Iceland offers approximately 2900 Megawatts of power capacity and the population of Iceland is roughly 320,000 people. Their utilisation of the total available power is thought to be in the range of 300MW. Aluminum smelters are currently the most power-intensive industry in Iceland, leaving more than sufficient capacity for the data centre industry."
Iceland's year-round low ambient temperatures permit free cooling, says Rhodes. "Chiller plants are not required, resulting in a significant reduction in power cost. If a wholesale client should decide they want cooling at the server, there is a natural cold water aquifer on the campus that can be used to accommodate their needs."
Depending on where the customer is, the trade-off for siting a data centre based on power, cooling or other factors can of course be incrementally more network latency: the delay caused by signals travelling through one or several thousands of miles of fibre, plus possibly another network device or two. For example, one-way transit from the data centre to London or Europe adds 18 milliseconds, and to the United States about 40 milliseconds.
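Those latency figures follow roughly from physics: light in optical fibre travels at about two-thirds of its vacuum speed, roughly 200 km per millisecond one way. A quick sketch (the route length and per-hop delay here are assumptions for illustration, not measured cable paths):

```python
# Light in glass covers roughly 200 km per millisecond.
SPEED_IN_FIBRE_KM_PER_MS = 200.0

def one_way_latency_ms(route_km: float, hops: int = 0,
                       per_hop_ms: float = 0.1) -> float:
    """Propagation delay plus a small allowance per network device."""
    return route_km / SPEED_IN_FIBRE_KM_PER_MS + per_hop_ms * hops

# A ~2,000 km fibre route (Iceland to London is on this order) adds
# about 10 ms of pure propagation delay each way, before routing and
# equipment delays lengthen it further:
print(round(one_way_latency_ms(2000), 1))   # 10.0
```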
It's not just the heat, it's the humidity
"Dry places" aren't necessarily in cool locations. i/o Data Centers' Phoenix facility, which according to the company is one of the world's largest data centres, is located in Arizona.
"One of the benefits of the desert is it's very dry," says Anthony Wanger, i/o President. "It's easier to remove heat in a dry environment, which makes Arizona an ideal location."
According to the company, the Phoenix facility employs a number of techniques and technologies to reduce energy consumption and improve energy efficiency.
"We are doing everything possible to be energy efficient at all of our data centres, says Wanger. "We separate cold air supply and warm air return." To get the heat away, says Wanger, "There is still no more efficient means of moving energy than through water. Air as a liquid is much less dense and less efficient. Once we get that hot air, we dump it into a closed loop system and exchange it into an open loop system, where we can remove the heat. We also use thermal storage. We can consume energy at night when it's sitting in the utility's inventory."
Also, says Wanger, "We have to put humidity into the air. The old way was to have a big metal halide lamp baking water. The humidification solution was to fire up a heat lamp and phase transition it to humidity. Now we use a process called ultrasonic humidification, which uses high frequency electronic signals to break water surface tension and dissipate cool water vapour into the air. This takes about 1/50th the amount of energy."
The mod pod
For several years now, a growing number of vendors, like HP and Microsoft, have been offering ready-to-compute modules that include not only compute and storage but also cooling gear. Just plop (well, gently put) one into place and connect up power, connectivity and whatever cooling is needed.
Some don't even need a proper data centre to house them in.
And it's not just vendors, either. Hosting providers like i/o Data Centers not only use their own modules, but also offer them directly to customers who might not be availing themselves of i/o's facilities.
For example, HP offers its Performance Optimised Datacenter 240a, a.k.a. "the HP EcoPOD." Amazon has its own Perdix container, and Microsoft offers its Data Center ITPAC (IT Pre-Assembled Components).
HP's EcoPOD uses free air and DX (direct expansion) cooling, without needing any chilled water. "Just add power and networking, in any environment," says John Gromala, director of product marketing, Modular Systems, Industry Standard Servers and Software, HP.
According to Gromala, "the EcoPOD optimises efficiency, achieving near-perfect PUE between 1.05 and 1.30 (depending on ambient conditions)." And, says Gromala, "because EcoPODs are freestanding, they can be deployed in as little as three months. Customers are putting EcoPODs behind their existing facilities, inside warehouses or on roofs."
Switching from AC to DC
IT gear runs on DC (direct current), but utilities provide electricity as AC (alternating current).
Normally, "A UPS converts the three phase 480vAC coming from the power utility to DC, to charge its batteries, and then reconverts back to three phase 480vAC to send it through the data centre. The PDU (Power Distribution Unit) for each rack or row of racks converts the 480vAC to 208vAC, which is what normally goes into IT gear like servers and storage arrays. And the power supplies in the IT gear converts that 208vAC into 380vDC," says Dennis Symanski, Senior Project Manager, Electric Power Research Institute and chairman of the EMerge Alliance's committee writing the 380vDC standard.
Various initiatives are underway exploring going, ahem, directly from utility power. "We've done a lot of demos worldwide about running data centres at 380vDC (volts of Direct Current) instead of 208vAC," says Symanski.
Moving to a direct current infrastructure, says Symanski, "gets rid of three conversion steps in the electrical system, and also reduces the load on the air conditioning by the reduced amount of heat being created."
What's that mean in terms of dollar savings? "We've found in most of our demonstrations that we get about a 15% reduction in the power used to run IT equipment. Plus the savings from needing less air conditioning, which are probably comparable, but harder to measure."
Since a DC infrastructure means DC UPSs, DC circuit breakers, DC interconnect cables and so on, data centres are unlikely to convert existing AC setups, other than as testbeds, says Symanski. "This is for when you are expanding in your data centre, like adding a new row of racks or building a new data centre."
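The arithmetic behind those savings can be sketched as a chain of conversion efficiencies: each conversion step loses a few percent, and losses multiply. The per-stage figures below are illustrative assumptions, not EPRI's measured numbers:

```python
from functools import reduce

def chain_efficiency(stages):
    """Overall efficiency of a series of conversion stages."""
    return reduce(lambda a, b: a * b, stages, 1.0)

# Conventional AC path: UPS rectify -> UPS invert -> PDU step-down ->
# server power supply. Each stage's efficiency is an assumed figure.
ac_path = [0.96, 0.96, 0.98, 0.92]

# 380vDC path: one facility-level rectification, then a server power
# supply running straight off 380vDC -- three fewer conversion steps.
dc_path = [0.96, 0.96]

ac_eff = chain_efficiency(ac_path)
dc_eff = chain_efficiency(dc_path)
print(f"AC path efficiency: {ac_eff:.1%}")
print(f"DC path efficiency: {dc_eff:.1%}")
# Wall power saved for the same IT load:
print(f"Saved: {1 - ac_eff / dc_eff:.1%}")
```

With these assumed stage efficiencies the savings come out just under 10%; Symanski's 15% figure implies somewhat lossier conversion stages in the installations EPRI measured.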
Switch to power-saving components
There are many opportunities to reduce power consumption simply by replacing some of the components in existing power and cooling systems.
i/O Data Centers, for example, "uses variable frequency chillers, pumps, cooling towers and air handlers to reduce energy consumption. By using only the power necessary to keep equipment running at optimal levels, i/o is able to operate energy efficient data centres."
"You don't change the fan or the motor, you put a VSD on the motor. What used to be a single speed fan you can now slow down," notes EPRI's Symanski. "And by reducing the speed of a fan by 50%, with a variable speed drive (VSD), you use only one eighth of the power," However, Symanski cautions, "You have to make sure you don't get condensation and that the refrigerant doesn't freeze by slowing down too much."
There's even one easy component upgrade that can be done with some existing IT gear, Symanski points out: Replacing older power supplies with one of the new energy efficient ones with certifications like 80PLUS and Energy Star.
"New power supplies may come in different versions, Bronze, Silver, Gold and Platinum, with correspondingly better efficiencies," Symanski notes. "Replacing an older power supply with a Platinum-level one can yield ten to fifteen percent energy savings, and the power supply is an inexpensive part."
Crazy like a fox, or just crazy?
So far, everything you've read is available and being done, or at least being explored in test conditions. But why stop there when there's still room for further improvement? Here are a few blue-sky ideas...
I take full credit and/or blame for this idea. Why not put servers inside turbine wheels, and drop them, tethered by coax and fibre cables, into the water? The water motion on the turbine supplies power, the water movement keeps the server cool. For maximum heat exchange (and to avoid buoyancy problems), use liquid immersion cooling on the servers, like from Hardcore Computer. For extra credit, being careful to put wire mesh screens around the servers, farm salmon, clams and/or tilapia, since the water may be warmer than otherwise.
Speaking of location, with air-based power generation being developed, how about airborne data centre modules, generating power and getting air-cooled without consuming ground footprint? Granted, an easy target for air pirates armed with six foot bolt cutters. Or even larger ones in lighter-than-air dirigible housings?
My favourite suggestion comes from Perry Szarka, solution consultant at system integrator MCPc: "How about a combination micro-brewery and data centre? The idea would be for the beer to participate in the data centre cooling process somehow as it goes through the distillation/fermentation process."
"The microbrewery side could perhaps feature a bar where patrons could view the beer through large windows and clear glass tubing as it moves along through the system. I guess this could be the ultimate data centre liquid cooling concept, at least for the administrators who I have met!"
What innovative practices are you seeing? Or do you have some blue sky ideas of your own?