People in the IT industry are highly attracted to elegant solutions. That usually means finding a methodology that's as simple as it can get while still delivering the right result. Simple is good; complex is bad.

That's precisely the argument put forward by those who propose that data centres should switch to DC power. It'll save energy, create less heat, and it means fewer power conversion processes. So what's not to like?

My Ton, senior project manager at Ecos Consulting, argues that companies can cut power usage by more than 20 per cent by retooling their data centres to run on DC. That's because in a typical data centre, power arrives at the server room as AC, where it's converted to DC so it can be stored in uninterruptible power supplies. It's then converted back to AC so it can be transported to other parts of the facility and then back again to DC by the power supply inside each server, router, load balancer, or other piece of networking equipment.
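The arithmetic behind that argument is simple compounding: each conversion stage loses a slice of the power, and the slices multiply. The per-stage efficiencies in the sketch below are illustrative assumptions, not figures from Ecos, chosen only to show how a chain of conversions erodes overall efficiency.

```python
# Illustrative sketch of compounding conversion losses.
# Stage efficiencies here are assumptions, not measured vendor figures.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of power passed through a series of conversions."""
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return overall

# Typical AC data centre: AC->DC (into the UPS), DC->AC (out of the UPS),
# then AC->DC again inside each server's power supply.
ac_chain = chain_efficiency([0.95, 0.95, 0.90])

# A DC distribution scheme converts once, at a facility rectifier.
dc_chain = chain_efficiency([0.95])

print(f"Three-conversion AC chain: {ac_chain:.1%}")  # ~81.2%
print(f"Single-conversion DC chain: {dc_chain:.1%}")  # 95.0%
```

Even with generous assumed efficiencies at every stage, the three-conversion chain throws away roughly one watt in five before the electronics see any of it.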

Each time you convert power, there's a cost, both in terms of efficiency, and in terms of heat generated. And as we've all come to appreciate over recent years, heat is bad. The less of it you make in a data centre, the better.

Ecos reckons that, if telcos can run their switching centres on DC, so can enterprises. It's an argument that has -- not quite raged so much as simmered -- for some time. All that stands in the way of widespread adoption, says Ton, is the lack of vendor standards for plugs, cords, rectifier units, and other DC gear.

But, as ever, it's not quite as simple as that.

The first and perhaps most entrenched problem is that energy savings rarely show up in the IT manager's budget. The power bill is paid by facilities, so the IT department's incentive to improve efficiency has more to do with finding an air-con unit big enough to do the job than with cutting costs.

And once you have, according to APC's chief technical officer Neil Rasmussen, it's crucial to buy equipment -- air-con and other plant -- that's big enough but not too big. "The biggest problem is over-sizing -- it all uses power. Even under light loads, if you've over-specified you'll use more power than right-sized equipment. And the average load of IT hardware in a data centre is only about 30 per cent."
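Rasmussen's point can be sketched with a simple loss model: power plant has a fixed loss it pays regardless of load, plus a loss that scales with load. The coefficients below are illustrative assumptions, not APC's figures, but they show why efficiency sags at the roughly 30 per cent average load he cites.

```python
# Sketch of why over-sized plant wastes power at light load.
# The loss coefficients are illustrative assumptions, not APC data.

def efficiency(load_fraction, fixed_loss=0.04, proportional_loss=0.06):
    """Efficiency of plant whose losses have a fixed component (paid
    whether or not the capacity is used) plus a component proportional
    to load. load_fraction is the share of rated capacity in use."""
    if load_fraction <= 0:
        return 0.0
    losses = fixed_loss + proportional_loss * load_fraction
    return load_fraction / (load_fraction + losses)

# At full load the fixed loss is amortised over a lot of useful power;
# at light load it dominates, so over-specified kit runs less efficiently.
print(f"100% load: {efficiency(1.0):.1%}")
print(f" 30% load: {efficiency(0.3):.1%}")
```

The right-sizing argument follows directly: the closer rated capacity sits to actual demand, the higher the load fraction and the smaller the share of power eaten by fixed losses.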

APC, which specialises in UPSes and rack-mounted cooling systems, has conducted an analysis of DC power systems and, using its power modelling algorithms, compared them for efficiency with traditional AC power. Published in its white paper, AC vs DC Power Distribution for Data Centers, the raw numbers suggest that the highest efficiency a pure AC power distribution system will attain is 77.3 per cent -- that's using a high voltage AC system running at full load -- while the corresponding number for a 48V DC system is 73.6 per cent. Both figures fall as the loads drop.
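Those full-load percentages translate into concrete grid draw. The sketch below applies APC's quoted figures to a hypothetical 1 MW of IT load (the load figure is an assumption for illustration) to show what the efficiency gap means in kilowatts.

```python
# The APC white paper's full-load figures, as quoted in the article.
ac_efficiency = 0.773   # high-voltage AC distribution at full load
dc_efficiency = 0.736   # 48V DC distribution at full load

# A hypothetical 1 MW of IT load (assumption for illustration) lets us
# compare the power each scheme must draw from the grid.
it_load_kw = 1000.0
ac_input = it_load_kw / ac_efficiency
dc_input = it_load_kw / dc_efficiency

print(f"AC distribution input:     {ac_input:.0f} kW")
print(f"48V DC distribution input: {dc_input:.0f} kW")
print(f"Extra draw with 48V DC:    {dc_input - ac_input:.0f} kW")
```

On APC's numbers, the 48V DC scheme actually draws more power for the same IT load, which is why the vendor's case against low-voltage DC rests on the efficiency figures rather than on cabling cost alone.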

High-voltage DC is, however, slightly more efficient than AC. But running DC throughout the data centre would mean converting, in some way, every piece of equipment in it -- a hefty upfront hardware cost. Swapping hardware in and out would become a far more complex task. And DC cables, which at high voltages are heavy, thick and inflexible, are not only awkward to handle and route but also more dangerous. Wiring costs would rise as a result.

APC does however agree that: "There are clear trends such as blade servers that suggest that AC/DC supplies powering multiple CPUs within a rack is an approach that will exist in future data centres." However, the white paper concludes that: "The flexibility and compatibility of AC power suggests that it will be the standard for power distribution for network rooms and data centres."

Meanwhile, argues Rasmussen, what's urgently required is more communication between facilities managers and data centre managers to improve the understanding of each other's requirements. Otherwise, power consumption and all the issues around it will continue to fall between the cracks. With long-term energy prices set to rise, that's not a situation any company can afford to ignore.