"Escalating power and cooling demands in the data centre are the biggest problems that I have consistently seen at client sites in the last year, with network routers and switches as the second largest contributors," says Jerry Murphy, senior VP and director of research operations with Robert Frances Group and former analyst with Meta Group.

"I call it the dark underbelly of Moore's Law. Some clients are to the point that the power companies cannot supply enough electricity to them, and they have to have their own generating plants."

Dean Davison, vice president of strategic sourcing at Nautilus Advisors and a former Meta Group outsourcing guru, says that power and cooling density problems have become a common reason for corporations to outsource their data centres in recent years: their present sites are simply outmoded by the exploding power and cooling demands of the latest processors.

The basic problem, says Murphy, is that as chips double in processing power every 12 to 18 months, they also double in power demand. And the more power they consume, the more heat they generate.

"This wasn't a problem when a processing chip drew two watts of power and a year later the new one drew four watts," Murphy says. "But when you go from 128 watts to 256, it becomes more of a problem."

And the problem is not just the processors themselves. The transformers that convert the AC power from the wall into the DC that computers need also generate a lot of heat, particularly when computer makers, trying to cut costs, use cheap, inefficient transformers.

Often, these are mounted directly on the system boards, concentrating the heat generation further. This also increases operating costs for the data centre. If the data centre spends $1 million a year on power but the transformers are only 70 percent efficient on average, then the company is wasting $300,000 a year.
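
That waste figure is simple arithmetic, shown here as a minimal sketch. It assumes the whole power bill flows through transformers of the stated average efficiency, which is a simplification of a real data centre:

```python
# Rough illustration of the waste arithmetic in the example above.
# Assumes the entire power bill passes through transformers of the
# stated average efficiency; real facilities are more complicated.

def wasted_power_cost(annual_power_bill, transformer_efficiency):
    """Return the share of the power bill lost to conversion inefficiency."""
    return annual_power_bill * (1.0 - transformer_efficiency)

waste = wasted_power_cost(annual_power_bill=1_000_000, transformer_efficiency=0.70)
print(f"${waste:,.0f} wasted per year")  # $300,000, matching the figure above
```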

New designs compound the problem by packing multiple processors and transformers into a much smaller space. Blade servers in particular can increase the concentration of processing power, and therefore power demand, as much as eightfold per square foot.

And more power means a matching increase in cooling demand. Data centre executives report cooling costs reaching 50 to 100 percent of the cost of the power consumed by the systems being cooled.
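
To see what that ratio does to the total bill, here is a rough, hypothetical calculation. The 0.5-to-1.0 cooling factor is the range cited above; the dollar figure is only an example:

```python
# Hypothetical sketch: total energy cost when cooling runs at
# 50-100 percent of the cost of the power the systems themselves consume.

def total_energy_cost(it_power_cost, cooling_factor):
    """cooling_factor of 0.5-1.0 reflects the range cited by data centre executives."""
    return it_power_cost * (1.0 + cooling_factor)

it_cost = 1_000_000  # example annual spend on powering the systems themselves
for factor in (0.5, 1.0):
    print(f"cooling factor {factor}: total ${total_energy_cost(it_cost, factor):,.0f}")
# $1,500,000 at the low end of the range, $2,000,000 at the high end
```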

The manufacturers are now beginning to respond to the problem.

Processor manufacturers are developing lower-power chips: AMD and Intel have announced that their next-generation processors will draw less power, Sun has announced its CoolThreads technology, and IBM is working with PA Semiconductor on a new generation of PowerPC chips intended to cut power consumption by 75 percent compared with the present generation.

System builders are moving to more efficient transformers, and the latest blade designs take the individual transformers off the boards and instead use a single, more efficient transformer for the entire rack.

While these developments will provide some relief in the future, Murphy suggests several things that can be done today to minimise the problem, including the concentration of network switches and routers, which have become the second largest source of heat and power consumption in the data centre after blades:

1. Portfolio management: Conduct a thorough examination of the application portfolio to identify unnecessary software that can be eliminated, allowing the data centre to decrease the number of servers.

2. Consolidation: As part of the portfolio management effort, identify opportunities to combine applications on a single server, further cutting down on heat generation. One partitioned server running at 80 percent generates roughly half the heat of two servers running at 40 percent each.

That is because a server consumes about the same energy regardless of CPU utilisation, so consolidating servers in this way can significantly reduce power consumption (a sketch of this arithmetic follows the list below).

3. Decentralise switches: Consider moving network switches out of the data centre.

4. Redesign network switch and router layout: Data centre managers often leave space between switch racks in the belief that this promotes air circulation. But eddies can form in those gaps, recirculating the hot air from the back of one rack into the front of the next.

Stacking switches closer together can improve cooling; where that is not possible, baffle panels can break up the eddies and allow more efficient airflow.

5. Dedicated cooling appliances: Cool air is often pumped up through the floor to stacks of network switches in the expectation that it will rise through the racks. In practice, cool air stays low and hot air rises, so the warm exhaust from the bottom switch is drawn into the intake of the next one up, and so on, with roughly a 20-degree rise from intake to exhaust in each switch.

To fix this, companies are using air exchangers that attach to each switch and duct the hot exhaust back to the air conditioner rather than leaving it to rise up through the rack. This can significantly increase the mean time between failures for the switches at the top of the rack (a simple model of the effect follows the list below).

6. Push your vendors: They need to hear from customers that the problem is becoming urgent so that they will focus more attention on power consumption.
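
For step 2, here is a minimal sketch of the consolidation arithmetic. It takes as a given Murphy's observation that a server draws roughly the same power regardless of utilisation; the per-server wattage is an arbitrary example:

```python
# Sketch of the consolidation arithmetic from step 2. Assumes each server
# draws roughly constant power regardless of utilisation, per Murphy.

SERVER_POWER_WATTS = 400  # arbitrary example figure

def heat_output(server_count, power_watts=SERVER_POWER_WATTS):
    """Total heat (watts) is driven by the number of boxes, not their utilisation."""
    return server_count * power_watts

two_lightly_loaded = heat_output(2)  # two servers at 40 percent each
one_consolidated = heat_output(1)    # one partitioned server at 80 percent
print(two_lightly_loaded, one_consolidated)  # 800 vs 400: half the heat
```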
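
For step 5, a simple model suggests why switches at the top of a rack suffer most. The 20-degree rise per switch comes from the description above; the fraction of exhaust assumed to recirculate into the switch above is an illustrative guess:

```python
# Simple model for step 5: cumulative heating up a rack of switches when
# part of each switch's exhaust is recirculated into the one above it.
# The 20-degree rise is from the article; the recirculation fraction is assumed.

def inlet_temperatures(supply_temp, switches, rise_per_switch=20.0, recirculation=0.5):
    """Return the estimated intake temperature for each switch, bottom to top."""
    temps = []
    inlet = supply_temp
    for _ in range(switches):
        temps.append(inlet)
        exhaust = inlet + rise_per_switch
        # The next switch up breathes a mix of floor supply air and this exhaust.
        inlet = (1 - recirculation) * supply_temp + recirculation * exhaust
    return temps

print(inlet_temperatures(supply_temp=65.0, switches=5))
# [65.0, 75.0, 80.0, 82.5, 83.75]: top-of-rack intakes run noticeably hotter,
# which is why ducting exhaust back to the air conditioner helps the upper
# switches most.
```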

These steps won't completely fix the problem, and organisations that are growing rapidly can still outgrow the capacity of their data centres to provide power and cooling.

For those companies, the choice may be to rebuild the existing data centre, build a second centre to take part of the load, or outsource all or part of the computing load and let a vendor worry about the problem.

But for many shops, these measures can help keep things under control until new, lower-power and cooler generations of equipment come to the market.