Sal Azzaro, director of facilities for Time Warner Cable, is trying to cram additional power into prime real estate at the company's 22 facilities in New York.

"Its gone wild," says Azzaro. "Where we had 20-amp circuits before, we now have 60-amp circuits." And, he says, "there is a much greater need now for a higher level of redundancy and a higher level of fail-safe than ever before."

If Time Warner Cable's network loses power, not only do televisions go black, but businesses can't operate and customers can't communicate over the company's voice-over-IP and broadband connections.

When it comes to the power crunch, Time Warner Cable is in good company. In February, Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University, published a study showing that in 2005, organisations worldwide spent $7.2 billion to provide their servers and associated cooling and auxiliary equipment with 120 billion kilowatt-hours of electricity. This was double the power used in 2001.
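A quick back-of-the-envelope check of those figures: $7.2 billion for 120 billion kilowatt-hours works out to an average of about six US cents per kilowatt-hour. A minimal sketch of that arithmetic (the figures are Koomey's; the calculation itself is purely illustrative):

    # Implied average electricity rate from Koomey's 2005 figures
    spend_usd = 7.2e9      # total spend on server power, cooling and auxiliaries
    energy_kwh = 120e9     # total electricity consumed
    rate = spend_usd / energy_kwh
    print(f"Implied average rate: {rate * 100:.0f} US cents per kWh")  # ~6 cents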

According to Koomey, the growth is occurring among volume servers (those that cost less than $25,000 per unit), with the aggregate power consumption of midrange ($25,000 to $500,000 per unit) and high-end (over $500,000) servers remaining relatively constant.

One way Time Warner Cable is working on this problem is by installing more modular power gear that scales as its needs grow. Oversized power supplies, power distribution units (PDUs) and uninterruptible power supplies (UPSes) tie up capital funds, are inefficient and generate excess heat. Time Warner Cable has started using Liebert's new NX modular UPS system, which scales in 20-kilowatt increments, to replace some of its older units.

"The question was how to go forward and rebuild your infrastructures when you have a limited amount of space," Azzaro says.

With the NX units, instead of setting up two large UPSes, he set up five modules - three live and the other two on hot standby. That way, any two of the five modules could fail or be shut down for service and the system would still operate at 100 percent load.
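The arithmetic behind that layout: with 20-kilowatt modules, three live units can carry up to 60 kilowatts, and losing any two of the five still leaves that capacity intact. A minimal sketch of the N+2 sizing check, assuming an illustrative 55-kilowatt critical load (the load figure is not from the article):

    # N+2 modular UPS sizing check; the 20 kW module size comes from the
    # article, the 55 kW critical load is an assumed example.
    module_kw = 20
    modules_total = 5
    modules_standby = 2              # hot-standby modules
    critical_load_kw = 55            # assumed load

    worst_case_kw = (modules_total - modules_standby) * module_kw
    assert worst_case_kw >= critical_load_kw, "load exceeds N+2 capacity"
    print(f"With any {modules_standby} modules out, {worst_case_kw} kW "
          f"remains for a {critical_load_kw} kW load")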

Other approaches

Some users are trying innovative approaches. One is a technique called combined heat and power (CHP), or co-generation, which combines a generator with a specialised chiller that turns the exhausted waste heat into a source of chilled water. (See related story.)

Another new approach is to build datacentres that run on DC rather than AC power. In a typical datacentre, the UPSes convert the incoming AC power from the utility mains to DC, then back to AC again. The server power supplies then convert that power back to DC for use inside the server.

Each time the electricity is switched between AC and DC, some of that power is lost as heat. Converting the AC power to DC just once, as it comes into the datacentre, eliminates the waste from those repeated conversions. Rackable Systems has a rack-mounted power supply that converts 220-volt AC to -48-volt DC in the cabinet, then distributes the power via a bus bar to the servers.

On a larger scale, last summer the Lawrence Berkeley lab set up an experimental datacentre, hosted by Sun Microsystems, that converted incoming 480 VAC power to 380 VDC power for distribution to the racks, eliminating the use of PDUs altogether. Overall, the test system used 10 percent to 20 percent less power than a comparable AC datacentre.
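To see why skipping conversion stages matters, consider a rough sketch of the two paths. Every per-stage efficiency below is an assumption chosen for illustration, not a measurement from Rackable or the Berkeley test, but the result lands in the same ballpark as the 10 to 20 percent savings reported above:

    # Hedged comparison of a conventional AC path with a single-conversion
    # DC path; all efficiency figures here are illustrative assumptions.
    def delivered(watts, stage_efficiencies):
        """Power remaining after a chain of conversion stages."""
        for eff in stage_efficiencies:
            watts *= eff
        return watts

    feed_w = 1000.0

    # AC path: UPS rectifier (AC->DC), UPS inverter (DC->AC), server PSU (AC->DC)
    ac_w = delivered(feed_w, [0.94, 0.94, 0.90])

    # DC path: one facility-level rectifier (AC->DC), then a DC->DC stage in the server
    dc_w = delivered(feed_w, [0.96, 0.95])

    print(f"AC path delivers {ac_w:.0f} W, DC path {dc_w:.0f} W; "
          f"the rest is lost as heat")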

For Rick Simpson, president of Belize Communication and Security, power management means using wind and solar energy.

Simpson's company supports wireless data and communications relays in the Central American wilderness for customers including the UK Ministry of Defence and the US embassy in Belize. He builds in enough battery power - 10,000 amp-hours - to run for two weeks before even firing up the generators at the admittedly small facility.
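A rough check of what that battery bank can carry, assuming a 48-volt DC plant and an 80 percent usable depth of discharge (neither figure is given in the article):

    # Hedged battery runtime check; the 10,000 amp-hours and two-week
    # target are from the article, the voltage and usable fraction are assumed.
    capacity_ah = 10_000
    plant_voltage = 48           # assumed DC plant voltage
    usable_fraction = 0.8        # assumed usable depth of discharge
    runtime_h = 14 * 24          # two weeks

    stored_kwh = capacity_ah * plant_voltage * usable_fraction / 1000
    load_kw = stored_kwh / runtime_h
    print(f"{stored_kwh:.0f} kWh stored supports roughly {load_kw:.1f} kW "
          f"continuously for two weeks")

Under those assumptions the bank holds roughly 384 kilowatt-hours, enough to carry a load on the order of one kilowatt for the full two weeks - plausible for a small relay facility.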

"We have enough power redundancy at hand to make sure that nothing goes down, ever," Simpson says. So even though the country was hit by category-4 hurricanes in 2000 and 2001, "we haven't been down in 15 years," he says.

Belize Communication and Security's equipment all runs directly off the batteries and off UPSes from Falcon Electric; the electric utility's power is used only to charge the batteries.

Scaling up

While there is a lot of talk lately about building green datacentres, and many hardware vendors are touting the efficiency of their products, the primary concern is still just ensuring you have a reliable source of adequate power.

Even though each core on a multi-core processor uses less power than it would if it were on its own motherboard, a rack filled with quad-core blades consumes more power than a rack of single-core blades, according to Intel.

"It used to be - you would have one power cord coming into the cabinet, then there were dual power cords," says Bob Sullivan, senior consultant at The Uptime Institute. "Now with over 10 kilowatts being dissipated in a cabinet, it is not unusual to have four power cords, two A's and two B's."

With electricity consumption rising, datacentres are running out of power before they run out of raised floor space. A Gartner survey last year showed that half of datacentres will not have sufficient power for expansion by 2008.

"Power is becoming more of a concern," says Dan Agronow, chief technology officer at The Weather Channel Interactive. "We could put way more servers physically in a cabinet than we have power for those servers."

The real cost, however, is not just in the power being used but in the infrastructure equipment - generators, UPSes, PDUs, cabling and cooling systems. For the highest level of redundancy and reliability - a Tier 4 datacentre - the Uptime Institute says that some $22,000 is spent on power and cooling infrastructure for every kilowatt used for processing.
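To put that rule of thumb in concrete terms, here is a minimal sketch using an assumed 500-kilowatt IT load (the per-kilowatt figure is the Uptime Institute's; the load is illustrative):

    # Tier 4 infrastructure cost, per the Uptime Institute's rule of thumb.
    # The 500 kW IT load is an assumed example, not from the article.
    infra_cost_per_kw = 22_000       # USD per kW of processing load
    it_load_kw = 500                 # assumed IT load

    infra_cost = infra_cost_per_kw * it_load_kw
    print(f"A {it_load_kw} kW Tier 4 datacentre implies about "
          f"${infra_cost:,} in power and cooling infrastructure")

That works out to roughly $11 million of infrastructure before a single server is bought.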

To cut costs and ensure there is enough power, managers need to take a close look at each individual component, then figure out how each one affects the datacentre as a whole.

Steve Yellen, vice president of product marketing strategies at Aperture Technologies, a datacentre management software firm, says that managers need to consider four separate elements that contribute to overall datacentre efficiency - the chip, the server, the rack and the datacentre as a whole. Savings at any one of these levels yield savings at each level above it.

"The big message is that people have to get away from thinking about pieces of the system," Stanford University's Koomey says. "When you start thinking about the whole system, then spending that $20 extra on a more-efficient power supply will save you money in the aggregate."

Going modular

There are strategies for cutting power in each area Yellen outlined above. For example, multi-core processors with lower clock speeds reduce power at the processor level. And server virtualisation, better fans and high-efficiency power supplies - such as those certified by the 80 Plus programme - cut power utilisation at the server level.

Five years ago, the average power supply was operating at 60 percent to 70 percent efficiency, says Kent Dunn, partnerships director at PC power-management firm Verdiem and programme manager for 80 Plus. He says that each 80 Plus power supply will save datacentre operators about 130 to 140 kilowatt-hours of power per year.
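As a rough illustration of where a figure in that range could come from, assume a server whose internal load averages 100 watts and a supply that improves from 70 percent to 80 percent efficiency (the load figure is an assumption; the efficiencies follow the numbers above):

    # Hedged check of the per-supply savings; the 100 W average load is
    # assumed, the efficiencies follow the figures quoted in the article.
    avg_load_w = 100
    old_eff = 0.70
    new_eff = 0.80
    hours_per_year = 24 * 365

    saved_w = avg_load_w * (1 / old_eff - 1 / new_eff)
    saved_kwh = saved_w * hours_per_year / 1000
    print(f"Roughly {saved_kwh:.0f} kWh saved per supply per year")

That comes out to roughly 150 kilowatt-hours a year, in the same ballpark as Dunn's 130-to-140 figure.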

Rack-mounted cooling and power supplies such as Liebert's XD CoolFrame and American Power Conversion's InfraStruXure cut waste at the rack level. And at the datacentre level, there are more efficient ways of distributing air flow, using outside air or liquid cooling, and doing computational fluid dynamics modelling of the datacentre for optimum placement of servers and air ducts.

"We've deployed a strategy within our facility that has hot and cold aisles, so the cold air is where it needs to be and we are not wasting it," says Fred Duball, director of the service management organisation for the Virginia state government's IT agency, which just opened a 192,000-square-foot datacentre in July and will be ramping up the facility over the next year or so. "We are also using automation to control components and keep lights off in areas that don't need lights on."

Finding a fit

There is no single answer that meets the needs of every datacentre operator.

When Elbert Shaw, a project manager at Science Applications International, consolidated US Army IT operations at several dozen locations in Europe into four datacentres, he had to come up with a different solution for each location. At a new facility, he was able to put in 48-inch raised floors and run the power and cooling underneath. But one datacentre being renovated had room for only a 12-inch floor and two feet of space above the ceiling. So instead of bundling the cables, which could have eaten up eight of those 12 inches and blocked most of the airflow, he got permission to unbundle and flatten them out. In other instances he used 2-inch underfloor channels rather than the typical 4-inch variety, and at one location he turned to overhead cabling.

"Little tricks that are OK in the 48-inch floor cause problems with the 12-inch floor when you renovate a site," says Shaw.

"These facilities are unique, and each has its own little quirks," Koomey says.

Power management tips

Various experts suggest the following ways of getting all you can from your existing power setup:

  • Don't oversize. Adopt a modular strategy for power and cooling that grows with your needs instead of buying a monolithic system that will meet your needs years down the road.

  • Plan for expansion. Although you don't want to buy the extra equipment yet, install conduits that are large enough to accommodate additional cables to meet future power needs.

  • Look at each component. Power-efficient CPUs, power supplies and fans reduce the amount of electricity used by a server. But be sure to look at their impact on other components. For example, quad-core chips use less power than four single-core chips but may require additional memory.

  • Widen racks. Use wider racks and run the cables to the side, rather than down the back where they block the airflow. Air flows in at the front of a server, through the box and out the back; there are no inlet or outlet vents on the sides. As with a PC, you could put a piece of plywood along the side of a server without affecting airflow through the machine, but put the wood along the back and it will overheat.

  • Install a UPS bypass. This is a power cable that routes around the UPS rather than through it. That way, when a UPS device is taken offline for maintenance, electricity still has a path to the equipment and power stays on while the unit is down.