Increasing power densities of networked storage, communications equipment and servers all contribute to power and cooling problems in the data centre. However, servers have become the biggest issue because server farms have grown so large relative to everything else.

Within the server, the processor's growing power budget is one of the biggest problems, says Nathan Brookwood, principal at Insight64. "When the first Pentium chip came out, everyone was horrified because it used 16 watts," substantially more than the 486, he says. Today, the worst offenders draw 150 watts, and 70 to 90 watts is common. Processor inefficiencies reached critical mass when chips moved to 90nm technology. "The leakage current for transistors became a meaningful part of overall power consumption," he says. With heat loads climbing, simply raising voltage and frequency was no longer an option.

"Power was going up at an alarming rate," says Randy Allen, corporate vice president at chip maker Advanced Micro Devices Inc.. New dual-core chips from Intel and AMD may give IT a bit of breathing room. Intel says its Woodcrest generation of processors, due later this year, will cut thermal design power by 40 per cent over current single-core designs, while AMD claims its dual-core chips offer 60 per cent faster performance within the same "power envelope" as its single-core chips. Both also offer capabilities to step down power levels when utilisation levels drop. For example, AMD claims that a 20 per cent drop in performance level will cut power levels on its newest chips by 50 per cent.

Dual-core and future multi-core designs may buy data centre managers some time, but they won't stop the power-density crisis. Although performance per watt is improving, total wattage per chip is holding steady or rising slightly, and it will continue to climb. Allen likens the battle to trying to climb a falling ladder. "The fundamental issue here is that computational requirements are increasing at a faster rate" than efficiency is improving, he says.

Beyond the processor, there are other areas of inefficiency within the server itself. For example, vendors commonly put inefficient power supplies in high-volume x86-class servers because they see no competitive advantage in more-efficient components that add a few dollars to a server's cost. Jon Koomey, a consulting professor at Stanford University who has studied data centre power issues for the industry, calls this a "perverse incentive that pervades the design and operation of data centres." Commonly used power supplies have a typical efficiency of 65 per cent and are a major source of waste heat; units with efficiencies of 90 per cent or better pay for themselves in reduced operating costs over the life of the equipment. There is a second benefit as well, says Koomey: data centre managers buying thousands of servers can design new data centres with a lower capital investment in cooling capacity by specifying more efficient machines. "If you are able to reduce power use in the server, that allows you to reduce the capital cost of the equipment and you can save costs upfront," he says. Power supplies are a simple place to gain efficiency, he adds, because more efficient units can be dropped into existing system designs without modification.
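As a rough sketch of why supply efficiency matters (the 300W load figure below is assumed for illustration, not taken from Koomey's work), the wall-socket draw and the waste heat for a given load fall sharply as efficiency rises:

```python
# Rough sketch: wall draw and waste heat for a fixed DC load at the
# 65% and 90% supply efficiencies quoted above. The 300W load is assumed.
dc_load_w = 300.0
for efficiency in (0.65, 0.90):
    wall_draw_w = dc_load_w / efficiency
    waste_heat_w = wall_draw_w - dc_load_w
    print(f"{efficiency:.0%} efficient: {wall_draw_w:.0f}W from the wall, "
          f"{waste_heat_w:.0f}W of waste heat")
# 65%: ~462W drawn, ~162W wasted; 90%: ~333W drawn, ~33W wasted.
```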

"For IT managers building data centres with thousands of servers, the performance per watt will be an absolutely critical buying criteria," Allen says. Unfortunately, there are no benchmarks to help data centre managers make such determinations, and power ratings on servers do not reflect real-world power-consumption levels. "If you want to buy [more-efficient] servers, there is no objective power measurement," says Koomey. He is working with Sun Microsystems, which hopes to eventually bring together a consortium of vendors to develop performance-per-watt benchmarks.

Large improvements are also possible in room air-conditioning efficiency, says HP's Brian Donabedian. One of the biggest costs is the power required to run all of the fans that keep the room temperature balanced. Variable-speed blower motors are a relatively easy retrofit that can cut fan power consumption -- and the heat it generates -- by 70 per cent, he says. Poorly placed air conditioners can end up fighting one another, creating a low-pressure dead-air space in the middle of the room where little or no cold air comes up through the perforated tiles in the cold aisles. "It's like a tornado," he says. Conversely, putting high-density racks too close to the air-conditioning units starves them of cool air, because the pressure below the vent tiles is lower there. Placing racks farther away may sound counter-intuitive but actually improves efficiency, he says. And even when air does reach the rack, failing to install blanking plates and plug cable-entry points drops the air pressure within the rack, reducing circulation and cooling efficiency.
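The 70 per cent reduction quoted above is consistent with the fan affinity laws, under which fan power falls roughly with the cube of fan speed; the sketch below illustrates that relationship and is not HP's own calculation.

```python
# Illustrative: per the fan affinity laws, fan power scales roughly with
# the cube of fan speed, so modest speed reductions cut power sharply.
def relative_fan_power(speed_fraction: float) -> float:
    return speed_fraction ** 3

print(relative_fan_power(0.67))  # ~0.30, i.e. roughly a 70% reduction
```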

Air-conditioning units also need to cooperate better, says Steve Madara, vice president at Emerson Network Power. Most units in data centres operate as "isolated islands" today, but they are increasingly being designed to work together as a single system, he says. That avoids common balance problems, such as one unit humidifying one side of the room while another dehumidifies the other side.

Where does the power go?

In a typical data centre, every watt of power consumed by IT equipment requires roughly another watt of overhead, covering losses in power distribution, cooling and lighting. Depending on efficiency, this "burden factor" means total power demand typically runs from 1.8 to 2.5 times the IT load.

Assuming a 1:1 ratio, a data centre with 3MW of IT equipment needs 6MW of power to operate. At 6 pence per kilowatt-hour, that adds up to about £3.15 million a year, and some companies pay as much as 12 pence per kilowatt-hour, which doubles the bill. With numbers like those, even a modest 10 per cent improvement in efficiency yields big savings.
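The arithmetic behind those figures, assuming round-the-clock operation, is straightforward:

```python
# Annual electricity cost for a 3MW IT load with a 1:1 overhead burden,
# using the prices quoted above.
it_load_kw = 3_000               # 3MW of IT equipment
total_load_kw = it_load_kw * 2   # plus 3MW for cooling, distribution, lighting
hours_per_year = 24 * 365

for gbp_per_kwh in (0.06, 0.12):  # 6p and 12p per kilowatt-hour
    annual_cost = total_load_kw * hours_per_year * gbp_per_kwh
    print(f"At {gbp_per_kwh * 100:.0f}p/kWh: £{annual_cost:,.0f} a year")
# ~£3.15m at 6p, ~£6.31m at 12p; a 10% efficiency gain at 6p saves ~£315,000.
```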

With average per-rack power consumption tripling over the past three years, skyrocketing power bills are turning the heads of chief financial officers, particularly in companies with large data centres. Such scrutiny is less prevalent at financial institutions, where reliability is still the most important factor. But other industries, such as e-commerce, are much more sensitive to the cost of electricity, says Peter Gross, CEO of EYP Mission Critical Facilities.

How many servers does it take to hit 3MW? Assuming today's average of 5kW per rack, you would need 600 cabinets with 15 servers per enclosure, or 9,000 servers total. A new data centre designed for 100 watts per square foot would require 30,000 square feet of raised-floor space to accommodate the load.
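Those rack and floor-space figures follow directly from the stated averages:

```python
# How a 3MW IT load translates into racks, servers and floor space,
# using the averages quoted above.
it_load_w = 3_000_000
watts_per_rack = 5_000
servers_per_rack = 15
watts_per_sq_ft = 100

racks = it_load_w // watts_per_rack           # 600 cabinets
servers = racks * servers_per_rack            # 9,000 servers
floor_sq_ft = it_load_w // watts_per_sq_ft    # 30,000 sq ft of raised floor
print(racks, servers, floor_sq_ft)
```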

Adding it up

Power required by data centre equipment: 3MW
Power-distribution losses, cooling, lighting: 3MW
Total power requirement: 6MW
Cost per kilowatt-hour: £0.06
Annual electricity cost for 24/7 operation: £3.15 million
Annual savings from a 10% increase in efficiency: £315,000