Are data centres reaching capacity limits? According to some vendors -- notably infrastructure supplier American Power Conversion (APC) -- most data centres are approaching the limit of their expansion.

APC's CTO Neil Rasmussen posits that data centres are very inefficient. This, he argues, arises from miscommunication between those who provide the facilities, such as buildings or facilities managers, and the IT manager. IT departments know how much hardware is or will be installed and have a fair idea of how much power it will draw. However, they cannot communicate that directly to the facilities department because the two sides do not share a lexicon for describing data centres -- one that articulates such values as the volume and location of cooling systems, for example.

He argues that existing measurements have failed the test of time, and that the result is inefficiency in the data centre. "To solve that problem we need to make an effective data centre description language that's concise, easy to understand, and is unambiguous and actionable. Anyone should be able to build a solution given the description," says Rasmussen.
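What such a shared description might contain is easier to see with a sketch. The structure below is purely illustrative -- it is not APC's language, and every field name in it is hypothetical -- but it shows how a concise, machine-checkable specification could capture the values both sides need, such as rack loads and the capacity and placement of cooling.

```python
# Purely illustrative: a guess at what a shared, machine-readable data centre
# description might contain. This is not APC's actual description language;
# every field name below is hypothetical.
from dataclasses import dataclass

@dataclass
class Rack:
    location: str        # grid reference on the floor plan, e.g. "B3"
    it_load_kw: float    # power the installed IT equipment will draw

@dataclass
class CoolingUnit:
    location: str        # where the unit sits relative to the racks
    capacity_kw: float   # heat it can remove

@dataclass
class DataCentreSpec:
    racks: list[Rack]
    cooling: list[CoolingUnit]

    def is_actionable(self) -> bool:
        """A facilities team can build this only if cooling covers the IT load."""
        it_load = sum(r.it_load_kw for r in self.racks)
        cooling = sum(c.capacity_kw for c in self.cooling)
        return cooling >= it_load

spec = DataCentreSpec(
    racks=[Rack("B3", 12.0), Rack("B4", 8.0)],
    cooling=[CoolingUnit("row B, end of aisle", 25.0)],
)
print(spec.is_actionable())  # True: 25 kW of cooling for 20 kW of IT load
```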

For him, efficiency is the key metric: power input measured against the IT load. That means comparing the amount of power a data centre draws in toto -- not just the IT systems, of course, but also cooling and other ancillary systems -- with the amount required by the computers themselves. The result is a ratio.
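As a rough illustration of that arithmetic (the figures and the helper function below are hypothetical, not measurements from APC or any particular site), the ratio can be computed directly from metered power draws:

```python
# Illustrative only: hypothetical meter readings, not figures from APC or any real site.

def efficiency_ratio(it_load_kw: float, cooling_kw: float, ancillary_kw: float) -> float:
    """Return the share of total facility power that actually reaches the IT load."""
    total_kw = it_load_kw + cooling_kw + ancillary_kw
    return it_load_kw / total_kw

# Example: 400 kW of servers, 250 kW of cooling, 50 kW of lighting, UPS losses and so on.
ratio = efficiency_ratio(it_load_kw=400, cooling_kw=250, ancillary_kw=50)
print(f"{ratio:.0%} of the power drawn does useful computing work")  # ~57%
```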

Rasmussen argues that the closer to 100 per cent utilisation that both the cooling and computing systems get, the more efficient they are in terms of computing output per watt consumed. At the moment, says Rasmussen, the tendency is to over-specify cooling systems while the computers run at low utilisation levels.

He argues that specifying for future expansion -- by installing, say, twice the amount of cooling equipment currently required -- is very inefficient: the surplus plant consumes more power over the long term than it would cost to re-specify when the load doubles.
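A back-of-the-envelope comparison makes the point, with the caveat that every number below is invented for illustration; real plant behaviour, tariffs and upgrade costs vary widely.

```python
# Rough, entirely hypothetical comparison of two provisioning strategies.
# Assumes an oversized cooling plant wastes a fixed fraction of its rated power
# as overhead even when lightly loaded; real plant behaviour varies widely.

HOURS_PER_YEAR = 8760
TARIFF_PER_KWH = 0.10          # assumed electricity price, currency units per kWh

def idle_overhead_cost(rated_kw: float, overhead_fraction: float, years: float) -> float:
    """Cost of the overhead a cooling plant draws regardless of useful load."""
    return rated_kw * overhead_fraction * HOURS_PER_YEAR * years * TARIFF_PER_KWH

# Strategy A: install twice the cooling needed today (500 kW rated vs 250 kW needed)
oversized = idle_overhead_cost(rated_kw=500, overhead_fraction=0.10, years=4)

# Strategy B: install 250 kW now, then pay to re-specify and extend when the load doubles
right_sized = idle_overhead_cost(rated_kw=250, overhead_fraction=0.10, years=4)
upgrade_cost = 50_000          # assumed one-off cost of the later extension

print(f"Oversized plant overhead over 4 years: {oversized:,.0f}")
print(f"Right-sized overhead plus later upgrade: {right_sized + upgrade_cost:,.0f}")
```

On those assumptions the oversized plant's idle overhead alone outstrips the cost of right-sizing now and extending later; with different assumptions the sums could, of course, come out the other way.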

One of APC's customers, Dr Tim Bardsley, IT manager at the Wellcome Trust Centre for Human Genetics (WTCHG), has populated his data centre with the company's rack systems, with the aim of providing cooling that can be extended as the load grows. The WTCHG, part of the University of Oxford's Division of Medical Sciences, undertakes research into the genetics of disease and makes considerable use of high-performance computing systems.

Bardsley agrees that it is easy for IT departments' demands to get lost between original specification and final outcome when a new build-out is planned: such a project passes through so many processes and departments that an understanding of the reasons behind the specification becomes muddied.

He argues, however, that you cannot generalise when it comes to data centres: "I've worked in lots of them and every data centre needs to be addressed on its own merits. Some are old and ill-designed to be high-performance computing infrastructures." He goes on to suggest that the main power and cooling problems will be those faced by older centres, especially those where blades are installed but the power infrastructure needed to feed such a high density of servers is inadequate. He adds, though: "Ours is six years old and there are no glaring oversights."

So there may be some force in Rasmussen's argument that a new lexicon is needed. However, the theoretical argument about efficiency -- one likely to be prompted by the rapidly rising cost of energy -- may well be lost in the messy real world, where it makes sense to build twice the capacity you need now in order to cover all eventualities, not least because you might not get the money the second time of asking.

It's pretty tough, too, for IT managers to predict with any certainty when a data centre's load will double -- only that it probably will. And if you have the capacity in hand now, life gets a tad simpler than having to undergo another round of budget submissions, which might also provoke questions about why the system wasn't designed to cope with expansion in the first place.

While APC has an axe to grind as a vendor of data centre infrastructure supplying power and cooling, its point could well be valid in many circumstances. Why not do the maths before the next submission?