How green is your data centre? If you don't care now, you will soon. Most data centre managers haven't noticed the steady increase in electricity costs, since in most cases they don't see those bills. But they do see the symptoms of surging power demands.

High-density servers are creating hot spots in data centres, with power densities surpassing 30 kilowatts per rack for some high-end systems. As a result, some data centre managers are finding that they can't distribute enough power to those racks on the floor. Others are finding that they've maxed out the power utility's ability to deliver additional capacity to their location.

Ken Brill, founder and executive director of The Uptime Institute Inc., sees the beginnings of a potential crisis. "The benefits of [Moore's Law] are eroding as the costs of data centres rise dramatically," he says. Increasing demand for power is the culprit, driven by both higher power densities and strong growth in the number of servers in use. Server electricity consumption in data centres has quietly doubled in the past five years, according to a study sponsored by Advanced Micro Devices Inc. that was conducted by John Koomey, a consulting professor at Stanford University and a staff scientist at Lawrence Berkeley National Laboratory.

Server performance is improving faster than energy efficiency is advancing. "If we're going to get energy efficiency rising faster than the rate of performance increase, we're going to have to do something radically different than what we're doing today," Brill says.

Fortunately, there are many steps that data centre managers can take to start reducing power consumption in existing data centres without making a huge investment -- or sacrificing performance or availability.

1. Consolidate, consolidate, consolidate

Consolidating servers is a good place to start. In many data centres, "between 10 percent and 30 percent of servers are dead and could be turned off," Brill says.

Removing one physical server from service saves $560 annually in electricity costs, assuming a cost of 8 cents per kilowatt-hour, says Bogomil Balkansky, director of product marketing for Virtual Infrastructure 3 at VMware Inc. in Palo Alto, Calif.
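
The arithmetic behind that figure is straightforward. Here is a rough sketch, assuming a mid-range server drawing about 400 watts around the clock plus roughly one watt of cooling for every watt of IT load; the 400-watt draw and the one-to-one cooling overhead are illustrative assumptions, not VMware's published breakdown.

    # Rough check of the $560-a-year figure. The 400 W average draw and the
    # one-to-one cooling overhead are assumptions for illustration only.
    RATE_PER_KWH = 0.08               # dollars, as cited above
    SERVER_WATTS = 400                # assumed average draw of the retired server
    COOLING_WATTS_PER_IT_WATT = 1.0   # assumed cooling burden per watt of IT load
    HOURS_PER_YEAR = 24 * 365

    total_kw = SERVER_WATTS * (1 + COOLING_WATTS_PER_IT_WATT) / 1000.0
    annual_cost = total_kw * HOURS_PER_YEAR * RATE_PER_KWH
    print(f"One always-on server costs about ${annual_cost:.0f} a year")  # ~$560

At 8 cents per kilowatt-hour, roughly 800 watts of continuous load works out to just over $560 a year, in line with the savings Balkansky cites.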

Once idle servers have been removed, data centre managers should consider moving as many server-based applications as feasible into virtual machines. That allows IT to substantially reduce the number of physical servers required while increasing the utilisation levels of remaining servers.

Most physical servers today run at about 10 percent to 15 percent utilisation. Since an idle server can consume as much as 30 percent of the energy it uses at peak utilisation, you get more bang for your energy buck by increasing utilisation levels, says Balkansky.

To that end, VMware is working on a new feature associated with its Distributed Resource Scheduler that will dynamically allocate workloads among physical servers in a resource pool to maximise energy efficiency. Distributed Power Management will "squeeze virtual machines on as few physical machines as possible," Balkansky says, and then power down servers that aren't in use. It will make adjustments dynamically as workloads change. Workloads might be consolidated in the evening during off hours, for example, then reallocated across more physical machines in the morning, as activity increases.
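
Conceptually, that kind of consolidation is a bin-packing problem: fit the virtual machines' demands onto the fewest hosts that can hold them, then power down whatever is left empty. The sketch below is a toy first-fit illustration of the idea, not VMware's actual placement algorithm; the VM demands and the 80 percent host-capacity ceiling are made-up numbers.

    # Toy sketch of the consolidation idea behind off-hours workload packing.
    # Not VMware's algorithm; demands and capacity are hypothetical fractions
    # of one physical server's CPU.
    def consolidate(vm_demands, host_capacity=0.8):
        """Pack VM loads onto as few hosts as possible (first-fit decreasing)."""
        hosts = []  # each entry is the summed load on one powered-on host
        for demand in sorted(vm_demands, reverse=True):
            for i, load in enumerate(hosts):
                if load + demand <= host_capacity:
                    hosts[i] += demand
                    break
            else:
                hosts.append(demand)  # no room anywhere, power on another host
        return hosts

    # Ten lightly loaded VMs (10 to 15 percent each) fit on two hosts instead
    # of ten, so eight physical servers could be powered down until morning.
    print(consolidate([0.10, 0.15, 0.12, 0.10, 0.14, 0.11, 0.13, 0.10, 0.12, 0.15]))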

2. Turn on power management

Although power management tools are available, administrators don't always use them. "In a typical data centre, the electricity usage hardly varies at all, but the IT load varies by a factor of three or more. That tells you that we're not properly implementing power management," says Amory Lovins, chairman and chief scientist at Rocky Mountain Institute in Snowmass, Colo. Just taking full advantage of power management features and turning off unused servers can cut data centre energy requirements by about 20 percent, he adds.

That's not happening in many data centres today because administrators focus almost exclusively on uptime and performance and aren't comfortable with available power management tools, says Christian Belady, distinguished technologist at Hewlett-Packard Co. But turning on power management can actually increase reliability and uptime by reducing stresses on data centre power and cooling systems, he says.

Vendors could also do more to facilitate the use of power management capabilities, says Brent Kerby, Opteron product manager on AMD's server team. "Power management technology is not leveraged as much as it should be," Kerby says. "In Microsoft Windows, support is inherent, but you have to adjust the power scheme to take advantage of it." Instead, he says, that should be turned on by default.

You can realise significant savings by leveraging power management in the latest processors. With AMD's newest designs, "at 50 percent CPU utilisation, you'll see a 65 percent savings in power. Even at 80 percent utilisation, you'll see a 25 percent savings in power," just by turning on power management, says Kerby. Other chip makers are working on similar technologies.
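
In practice, turning on power management means changing an operating-system setting: on Windows, picking a power scheme that lets the processor throttle down, as Kerby notes; on Linux, selecting a CPU frequency-scaling governor that clocks idle cores down. The sketch below is a minimal illustration assuming a Linux host that exposes the standard cpufreq sysfs interface; governor names (ondemand on older kernels, schedutil on newer ones) and paths vary by kernel and driver, and changing the setting requires root.

    # Minimal sketch: checking and switching the CPU frequency-scaling governor
    # on a Linux host with the cpufreq sysfs interface. Governor names and paths
    # vary by kernel and driver; writing the setting requires root privileges.
    import glob

    GOVERNOR_PATHS = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"

    def current_governors():
        """Report which scaling governor each core is currently using."""
        governors = {}
        for path in glob.glob(GOVERNOR_PATHS):
            with open(path) as f:
                governors[path] = f.read().strip()
        return governors

    def set_governor(governor="ondemand"):
        """Switch every core to a demand-driven governor so idle cores clock down."""
        for path in glob.glob(GOVERNOR_PATHS):
            with open(path, "w") as f:
                f.write(governor)

    if __name__ == "__main__":
        print(current_governors())   # many servers ship set to 'performance'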

But power management can cause more problems than it cures, says Jason Williams, chief technology officer at DigiTar, a messaging logistics service provider in Boise. He runs Linux on Sun T2000 servers with UltraSPARC multicore processors. "We use a lot of Linux, and [power management] can cause some very screwy behaviours in the operating system," he says.

3. Upgrade to energy-efficient servers

The first generation of multicore chip designs resulted in a marked decrease in overall power consumption. "Intel's Xeon 5100 delivered twice the performance with 40 percent less power," says Lori Wigle, director of server technology and initiatives marketing at Intel Corp. Moving to servers based on these designs should increase energy efficiency. (Future gains, however, are likely to be more limited. Sun Microsystems Inc., Intel and AMD all say they expect power consumption to remain flat in the near term.)

4. Use high-efficiency power supplies

Power supplies are a prime example of the lack of focus on total cost of ownership in the server market. Inefficient units that ship with many servers today waste more energy than any other component in the data centre, says Koomey, who led an industry effort to develop a server energy management protocol.

Inefficient power supplies can waste nearly half of the power before it gets to the IT equipment. Moreover, every watt of energy wasted by the power supply requires another watt of cooling system power just to remove the resulting waste heat from the data centre.

To make matters worse, server manufacturers have traditionally overspecified power needs, opting for a 600-watt power supply for a server that really needs only 300 watts, says Rich Hetherington, chief architect and distinguished engineer at Sun. "At that level, [the power supply is] at its most inefficient operating point. The loss of conversion is huge. That's one of the biggest sinners in terms of energy waste," he says.

Power supplies are available today that attain 80 percent or higher efficiency even at 20 percent load, but they cost more. Moving to these more energy-efficient power supplies reduces both operating costs and capital costs, however. "If they spent $20 on [an energy-efficient] power supply, you would save $100 on the capital cost of cooling and infrastructure equipment," Lovins says. Any power supply that doesn't deliver 80 percent efficiency across a range of low load levels should be considered unacceptable, he says.
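
A back-of-the-envelope comparison shows why the premium is worth paying. The sketch below applies the one-watt-of-cooling-per-wasted-watt rule of thumb cited above; the 300-watt load and the efficiency figures are illustrative assumptions rather than measurements of any particular product.

    # Back-of-the-envelope comparison of power-supply efficiency, using the rule
    # of thumb above that each watt wasted in conversion costs roughly one more
    # watt of cooling. The 300 W load and the efficiencies are assumptions.
    def wall_power(it_load_watts, psu_efficiency):
        supply_input = it_load_watts / psu_efficiency       # power drawn at the plug
        conversion_waste = supply_input - it_load_watts     # lost as heat in the supply
        cooling = conversion_waste                          # ~1 W of cooling per wasted watt
        return supply_input + cooling

    for efficiency in (0.55, 0.70, 0.85):
        total = wall_power(300, efficiency)
        print(f"{efficiency:.0%} efficient supply: {total:.0f} W total for a 300 W load")

For a 300-watt load, the difference between a 55 percent efficient supply and an 85 percent efficient one comes to nearly 400 watts at the wall, before the capital cost of the extra cooling capacity is even counted.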

5. Break down internal barriers

Although IT has carefully tracked performance and uptime, most IT organisations aren't held accountable for energy efficiency because the IT function is "stovepiped" from the facilities group. IT generates the load, but facilities gets the power bill, says Brill. Breaking down those barriers is critical to understanding the challenge and providing a financial incentive for change.

The stovepiping problem has also afflicted IT equipment vendors, says Lovins. Engineers are now specialised, often designing components in a vacuum without looking at the overall system -- or data centre -- in which their components will play a role.

"The design process that used to optimise a whole system for multiple benefits got sliced into pieces, each with one specialist designing one component or optimising a component for single benefits," Lovins says. "When the integration was lost, we were less able to see how an integrated design could eliminate noticeable losses."

6. Follow the standards

Several initiatives are under way that may help users identify and buy the most energy-efficient IT equipment. A certification program called 80 Plus, which was initiated by electric utilities, lists power supplies that consistently attain an 80 percent efficiency rating at load levels of 20 percent, 50 percent and 100 percent.

Under a congressional mandate, the Environmental Protection Agency is working with Lawrence Berkeley National Laboratory to study ways to promote the use of energy-efficient servers. An Energy Star specification could be in place later this year.

The nonprofit Standard Performance Evaluation Corp. is also working on a performance-per-watt benchmark for servers that should help provide a baseline for energy-efficiency comparisons. The specification is slated for release this year.
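
The metric itself is simple enough to apply informally today: divide useful work done by the power drawn while doing it. The comparison below uses invented throughput and power numbers and is not the SPEC methodology; it only illustrates the kind of baseline such a benchmark would formalise.

    # Crude performance-per-watt comparison with invented numbers, not the SPEC
    # benchmark itself, just the kind of baseline it is meant to standardise.
    servers = {
        "existing 1U server": {"ops_per_sec": 40_000, "avg_watts": 450},
        "candidate upgrade":  {"ops_per_sec": 70_000, "avg_watts": 400},
    }
    for name, spec in servers.items():
        print(f"{name}: {spec['ops_per_sec'] / spec['avg_watts']:.0f} operations per watt")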

7. Advocate for change

IT equipment manufacturers won't design for energy efficiency unless users demand it. Robert Yale, principal of technical operations at The Vanguard Group Inc. in Valley Forge, Pa., says his company is involved with The Green Grid and other industry organisations to push for greater energy efficiency.

Joseph Hedgecock, senior vice president and head of platform and data centres at Lehman Brothers Inc., says his company has been lobbying vendors for more efficient server designs. "We're trying to push for more efficient power supplies and ultimately systems themselves," he says.