Air-conditioning manufacturers will be numbered among the few who won't have welcomed this week's announcement by AMD that it will be building power-reduction technology into its Opteron processors. It's the same technology that AMD and Intel use in their processors designed for mobile applications, and it's a welcome development.

The system, dubbed PowerNow! - where do they get these names from? - is a combination of micro-instructions and OS software that allows the chip to draw only as much power as its current workload requires. As the instructions have been physically present in AMD's Opteron since May, those with recent processors may need only a BIOS update to switch the facility on.
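The saving comes largely from dynamic voltage and frequency scaling: run the chip slower when it's lightly loaded, and you can usually drop the voltage too. As a back-of-the-envelope sketch - the operating points below are illustrative figures, not Opteron specifications - CMOS dynamic power scales roughly with capacitance times voltage squared times frequency, which is why a modest drop in clock speed and voltage cuts power disproportionately:

```python
# Rough sketch of why voltage/frequency scaling saves power.
# CMOS dynamic power is approximately P = C * V^2 * f.
# All figures here are illustrative, not real Opteron numbers.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate CMOS dynamic power draw in watts."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical full-speed operating point: 1 nF switched, 1.4 V, 2 GHz
full = dynamic_power(1e-9, 1.4, 2.0e9)

# Halving the frequency typically permits a lower voltage as well,
# so power falls much faster than linearly with clock speed.
throttled = dynamic_power(1e-9, 1.1, 1.0e9)

print(f"full speed: {full:.2f} W, throttled: {throttled:.2f} W")
print(f"saving: {100 * (1 - throttled / full):.0f}%")
```

With these made-up figures, halving the clock and trimming the voltage from 1.4 V to 1.1 V cuts dynamic power by roughly two-thirds - the voltage-squared term does most of the work, which is the whole point of letting the OS manage it.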

Intel is, in announcement terms at least, already there, having said that it plans to add power-management technology to its Itanium and Xeon processors within the next year.

The alternative - what happens in today's chips - is that the chip draws the same amount of power whether it's idling, merely refreshing memory, or doing real work.

If verification were needed: while I was working on a labs-based computer magazine, the labs tested how much power a chip used under varying workloads, using thermal output to measure the power draw. As I recall the results, every chip used the same amount, no matter how much or how little work it was performing.

Clearly, power-saving technology really comes into its own when building dense server farms, especially those incorporating 1U and 2U rack servers and blades, since they tend to be mounted in close physical proximity to each other.

It makes even more sense when you consider the other big trend in servers right now: virtualisation. As we've seen, there's little or no evidence that a high-throughput server runs hotter than one that's idling, but the hotter any electronic component runs, the shorter its life is likely to be.

If the predictions of growth for virtualisation become reality, then server processor time will become increasingly utilised, with little slack. Instead of one overheated machine giving up the ghost and bringing down one application, as is often the case today, it will bring down half a dozen, maybe more. This is not a desirable state of affairs, no matter how many fail-safes, in the form of mirrored components, you deploy.

Better by far to avoid the problem in the first place, as today's servers are drawing more power than ever before, while being more densely packed than ever too. Cooling a dense server farm is possible, but it's complex to manage and expensive to run.

So AMD's initiative is to be welcomed: not just because it saves you money on cooling systems and energy, not just because it means AMD and Intel are competing on a level playing field - which could well have longer-term advantages for customers in the form of lower prices - but also because it might just go a small way towards reducing the effects of global warming.

That's got to be a good thing: it's the ultimate fail-safe.