Anyone who has walked around a server room will know how the temperature can vary - in some places it can be very warm, while over by the aircon it is cold enough to raise goosebumps. Those hot spots are not good for the equipment, while the cold spots show up the inefficiency of the aircon setup.

These differences are symptoms of how we deal with the heat thrown out by our technology, and there are both expensive and inexpensive ways to fix them. The usual response is to simply cool the room down until every individual box within it reaches an acceptably low temperature, but this is a tactic that cannot last much longer, says Tony Day, APC's chief engineer for rack cooling solutions.

He says that as long as the power consumed by - and therefore thrown back out as heat by - each rack does not exceed 2kW or 3kW, the scheme of feeding chilled air into the underfloor void, to be drawn up through perforated floor tiles, works fine. The problem is that ever higher-density equipment, especially blade servers and 1U pizza-box types, can push the heat output to 10kW per rack or even more.
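To see why the underfloor scheme runs out of headroom, it helps to put rough numbers on the airflow involved. The sketch below is a back-of-the-envelope calculation, not a design tool: it uses standard air properties and assumes an 11C temperature rise across the servers, and it shows that the volume of chilled air a rack needs grows linearly with its power draw.

# Back-of-the-envelope: the airflow needed to carry away a rack's heat load.
# Heat balance: Q = rho * V * c_p * dT, so V = Q / (rho * c_p * dT).
RHO_AIR = 1.2         # kg/m^3, density of air at around 20 C
CP_AIR = 1005.0       # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # cubic metres per second -> cubic feet per minute

def airflow_for_load(heat_kw: float, delta_t_c: float = 11.0) -> float:
    """Chilled air (m^3/s) needed to absorb heat_kw with a delta_t_c rise."""
    return (heat_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)

for kw in (2, 3, 10):
    flow = airflow_for_load(kw)
    print(f"{kw:>2}kW rack -> {flow:.2f} m^3/s (~{flow * M3S_TO_CFM:.0f} CFM)")

At 10kW the rack needs roughly five times the airflow of a 2kW rack - typically more than a single perforated floor tile can pass.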

"Tons of R&D money goes into understanding how to dissipate the heat from a chip," he adds. "More goes at the server level, into cooling the box, but once you've put the server in a rack nobody cares because it's not their problem any more."

Shocking thermals
Higher density can mean that the servers at the bottom of the rack suck up all the cold air from the underfloor aircon, leaving those at the top to take in warm air instead - especially if the racks are lined up front to back, so that the front of one row ingests the exhaust of the row behind. "Failures occur mostly at the top of the rack, because they don't get cool air," Day says.
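A toy model makes the failure mode concrete. All the numbers below are illustrative assumptions, not measurements: a tile delivering a fixed volume of chilled air, ten 1U servers each drawing the same airflow, and any shortfall made up by recirculated exhaust.

# Toy model of air stratification in a rack fed from one floor tile.
# All numbers are illustrative assumptions, not measurements.
TILE_SUPPLY = 0.5  # m^3/s of chilled air delivered by the tile (hypothetical)
PER_SERVER = 0.08  # m^3/s drawn in by each 1U server (hypothetical)
SUPPLY_T = 15.0    # C, chilled supply temperature
EXHAUST_T = 35.0   # C, recirculated exhaust temperature

remaining = TILE_SUPPLY
for u in range(1, 11):  # ten servers, numbered bottom (1) to top (10)
    chilled = min(PER_SERVER, max(remaining, 0.0))  # chilled air still available
    recirc = PER_SERVER - chilled                   # shortfall met by exhaust
    inlet = (chilled * SUPPLY_T + recirc * EXHAUST_T) / PER_SERVER
    remaining -= PER_SERVER
    print(f"server {u:>2}: inlet air {inlet:.1f} C")

With these figures the bottom six servers breathe 15C air, the seventh gets a lukewarm mix, and the top three get nothing but 35C exhaust - exactly the top-of-rack failure pattern Day describes.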

The thorough, but expensive, way to fix the problem is to build the cooling airflow into the rack itself. This is the route taken by APC with its InfraStruXure range, whose racks include fans and heat-exchange coils fed with chilled water. The airflow inside the rack is horizontal, rather than vertical as with the underfloor scheme, so the equipment is cooled evenly from top to bottom.

In-rack cooling from APC and its competitors can handle 15kW or 20kW per rack today, with 40kW versions on the horizon. And because the rack can be sealed and given its own fire detection and power distribution systems, it is room-neutral - it does not even have to be in a datacentre.

Cheaper solutions
However, Tony Day says that it is often possible to solve the problem more cheaply, as long as you can stand some physical disruption. "Some datacentres don't need new equipment - they need rearrangement," he explains. This could mean realigning the racks so you have cold aisles and warm aisles, for example, or moving the aircon boxes so the room is cooled more evenly and without hotspots emerging.

In some cases there may even be too much cooling airflow: faster-moving air is at lower pressure, so less of it is pushed up through the perforated tiles and into the racks. "Air is a peculiar medium," he says. "Obstructions can cause it to go off in odd directions. Sometimes taking aircon units out can solve the problem - less is more!"
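The "less is more" remark is Bernoulli's principle at work: the static pressure available to push air up through a tile is the plenum's total pressure minus a dynamic component that grows with the square of the air velocity. The sketch below uses standard air density and a hypothetical 25 Pa of total plenum pressure; the exact figures will vary from room to room, but the shape of the effect will not.

# Bernoulli's principle in the underfloor plenum: the static pressure that
# pushes air up through a tile is the total pressure minus 0.5 * rho * v^2.
RHO_AIR = 1.2  # kg/m^3, density of air

def static_pressure(p_total_pa: float, velocity_ms: float) -> float:
    """Static pressure (Pa) left once the dynamic component is deducted."""
    return p_total_pa - 0.5 * RHO_AIR * velocity_ms ** 2

# Hypothetical plenum with 25 Pa of total pressure at the aircon outlet:
for v in (2.0, 5.0, 8.0):
    print(f"plenum air at {v:.0f} m/s -> {static_pressure(25.0, v):+.1f} Pa at the tile")

At 8 m/s the static pressure goes negative: a tile sitting in that airstream will pull room air down into the floor instead of delivering chilled air, which is why removing an aircon unit can genuinely improve the cooling.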

Looking forward, he predicts that despite lower voltages and power-saving technologies, processors such as Intel's next-generation Itanium will not throw out much less heat. And he says datacentre users are going to have to focus much more on the cost of power and cooling in future, and less on the cost of floorspace.

"Higher density is going to cost you more - you can't run 15kW per rack for the same price as 7kW. People have mislead themselves by apportioning cost per square metre."