Power and cooling are now key issues facing not just data centres but computing as a whole: just how do you extract more compute power from silicon technology that is approaching its thermal limits? With no white-knight technology galloping round the corner to replace today's CMOS, the answer may lie in more efficient cooling techniques, using the same micro-technology found inside the chips themselves.

IBM's Zurich lab, the place where the scanning tunnelling microscope was born, contains a team of scientists working on that problem, so we visited to find out what solutions they're working on and when we can expect to see the results of their labours.

For them, chips based on nano-technology are old hat: today's chips are effectively nano-technology already, with features measured in nanometres. But this doesn't help the cooling problem much: pushing transistor sizes down means each transistor uses less power, but it also increases the power density. And it's this that's causing the problem faced by all chip designers.

Today's data centre-scale cooling systems revolve around massive air-conditioning units that consume as much energy as the computers they're designed to cool. With power bills heading skywards, that's no longer a viable long-term strategy. IBM's Zurich team of thermal packaging experts, headed by Dr Bruno Michel, is working on a fix for the problem - and they're using micro-technology to do it.

Designers surprised

For Michel, the problem that took chip designers by surprise was power density. It's made worse by passive power: as transistor sizes diminish, leakage current through transistors that are supposed to be switched off increases. As a result, even an idling chip can use almost as much current as one running at full tilt.

The last time the industry hit this problem, according to Michel, it switched from bipolar to CMOS technology at the cost of a 10x performance hit. Now CMOS has reached the point bipolar was at then, but there's no alternative technology ready to replace it. Today's approach is to cut clock speeds and add cores, which means innovation is now about packaging rather than chip technology, and progress is measured in performance per watt -- although there isn't currently a universal measurement or benchmark for that metric.
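To make that metric concrete, here's a minimal sketch in Python of the comparison it implies; all the throughput and power figures are invented for the example, not IBM's measurements.

```python
# Hypothetical illustration: scoring two chip configurations by
# performance per watt rather than raw clock speed. All figures here
# are invented for the example; they are not IBM's measurements.

def perf_per_watt(ops_per_second: float, watts: float) -> float:
    """Throughput delivered per watt of power drawn."""
    return ops_per_second / watts

# One fast, power-hungry core versus four slower, lower-voltage cores.
single_core = perf_per_watt(ops_per_second=4.0e9, watts=130.0)
quad_core = perf_per_watt(ops_per_second=4 * 2.2e9, watts=95.0)

print(f"single core: {single_core / 1e6:.1f} Mops/W")  # ~30.8 Mops/W
print(f"quad core:   {quad_core / 1e6:.1f} Mops/W")    # ~92.6 Mops/W
```

On these made-up numbers the slower, multi-core configuration wins by a factor of three - which is exactly the trade today's chip makers are taking.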

A big chunk of the problem lies in the fact that not all elements of a chip run equally hot, which makes chips harder to cool efficiently: the logic and I/O areas run hottest, while cache memory stays coolest. Chip designers also prefer to pack elements tightly together to shorten signal paths, which makes the cooling job harder still. Michel reckons designers need to think about hotspots and accept the performance hit that spacing components further apart entails, in return for improved cooling. But with today's designs, the best he can do is work with what's out there.
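A toy calculation makes the layout trade-off visible: power density is simply power over footprint, so spreading the same components across a larger area lowers the figure the cooling system has to cope with. The numbers below are my own, for illustration only.

```python
# Toy illustration (my own construction, not IBM's data): packing the
# same components into a smaller footprint raises the power density the
# cooling system must handle.

def power_density(total_watts: float, area_cm2: float) -> float:
    """Average power density in W/cm^2 over a given footprint."""
    return total_watts / area_cm2

packed = power_density(100.0, 1.0)  # hot blocks crammed together
spread = power_density(100.0, 4.0)  # same blocks, spaced out

print(f"packed: {packed:.0f} W/cm^2, spread out: {spread:.0f} W/cm^2")
```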

The trouble is that the aircon systems in today's data centres apply cooling far too late, reckons Michel. Air is used because it's three to five times cheaper than liquid cooling, but it's approaching its limits, both in its ability to cool and in the noise it generates. Remove heat much closer to the source, says Michel, and the energy requirement drops; the improved efficiency means chips generating up to 300W/cm² could be cooled, where today's hottest chips run at around 100W/cm².
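A back-of-envelope calculation shows why water can cope with such fluxes, using the standard relation Q = ṁ·c_p·ΔT. The chip area and coolant temperature rise below are my assumptions for illustration, not figures from IBM.

```python
# Back-of-envelope sketch: water flow needed to carry away a given heat
# flux, from Q = m_dot * c_p * delta_T. The chip area and temperature
# rise are my assumptions for illustration, not IBM's figures.

WATER_CP = 4186.0       # specific heat of water, J/(kg*K)
WATER_DENSITY = 1000.0  # kg/m^3

def water_flow_ml_per_min(heat_flux_w_cm2: float, area_cm2: float,
                          delta_t_kelvin: float) -> float:
    """Flow rate needed to absorb the heat at a given coolant temperature rise."""
    heat_watts = heat_flux_w_cm2 * area_cm2
    mass_flow = heat_watts / (WATER_CP * delta_t_kelvin)  # kg/s
    volume_flow = mass_flow / WATER_DENSITY               # m^3/s
    return volume_flow * 1e6 * 60                         # ml/min

# Cooling 1 cm^2 at 300 W/cm^2 with the water warming by 50 K:
print(f"{water_flow_ml_per_min(300.0, 1.0, 50.0):.0f} ml/min")  # ~86 ml/min
```

Less than a tenth of a litre a minute to soak up 300W: that's the heat capacity advantage Michel is counting on.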

So instead of air, liquid cooling should be applied at the chip level. And the best way to do this, he reckons, is to use micro-technology to direct jets of water that flush away waste heat.

The system working in the labs uses micro-channels 30-50 microns wide, with around 50,000 nozzles per chip spaced 100 microns apart and fed by parallel manifolds: one carries cold water in, the other carries the heated water away. The system is applied directly to the back of the chip using what's called jet impingement cooling -- the plan is to do away with the packaging between the cooling system and the silicon. "We want to get as close to the source of heat as possible", says Michel.
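Those figures are roughly self-consistent: at a 100-micron pitch, a square die a little over 2cm on a side carries about 50,000 nozzles. A quick sanity check (my arithmetic, not IBM's):

```python
# Rough geometry check (my arithmetic, not IBM's): how many nozzles fit
# on a square die at the quoted 100-micron pitch.

NOZZLE_PITCH_UM = 100.0  # spacing between adjacent nozzles, microns

def nozzle_count(die_side_mm: float) -> int:
    """Nozzles in a square grid covering a square die of the given side."""
    per_side = int(die_side_mm * 1000.0 / NOZZLE_PITCH_UM)
    return per_side * per_side

# A die roughly 22.5 mm on a side gives about the 50,000 nozzles quoted:
print(nozzle_count(22.5))  # 50625
```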

The team doesn't seem to have any problem ensuring that water doesn't go where it's not supposed to -- always a concern, of course, when liquid cooling meets electronics -- since Michel reckons the benefits of using water, with its high heat capacity, rather than Freon or another coolant outweigh any disadvantages. I'd guess the fact that it's cheap helps too.
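For reference, water's heat capacity really does stand well clear of the alternatives. The figures below are approximate textbook values, and the comparison coolants are my own choice, not ones Michel named.

```python
# Approximate textbook specific heat capacities, J/(kg*K), to show why
# water stands out as a coolant. The comparison coolants are my own
# choice, not ones Michel named.

SPECIFIC_HEAT = {
    "water": 4186.0,
    "ethylene glycol": 2400.0,
    "R-134a (liquid)": 1420.0,
    "air": 1005.0,
}

for coolant, cp in sorted(SPECIFIC_HEAT.items(), key=lambda kv: -kv[1]):
    print(f"{coolant:16s} {cp:6.0f} J/(kg*K)")
```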

So when are we likely to see the fruits of Michel and his team's labours? He says liquid micro-cooling is still at the research stage: "We're preparing the ground, but want to get to product quickly. We're in contact with product units within IBM and we do have direct feedback, but we're only working on demonstrators right now. It's not a short-term problem."

The economics of such a system could prove irresistible. It potentially uses one-twentieth of the energy of today's cooling systems - a 95 per cent cut in the data centre's cooling costs. It would seem, although he didn't say so, that the main obstacle standing in Michel's way is making the system manufacturable on a mass scale.

When that happens, expect a deluge. And further into the future, Michel reckons that "3D packaging or stacked chips will arrive at some point, combining the processor with memory. While RAM makes less heat, how will we cool the CPU then? We've also started to introduce nano-particles into fluids to improve thermal conductivity."

But until then, Michel has a piece of advice for chip designers: "Thermal design has to be moved higher up the design hierarchy; we need to improve modelling software to predict heat dissipation so that you know chip designs are coolable. We had to learn the hard way."