IBM is spending $1 billion a year for five years on its Big Green Project. Half will go on new products and technologies, with the other half on data centre efficiency improvements.

The energy trap

There has been huge growth in data centre energy use over recent years, and IBMers say "Data centres are energy hogs." Energy costs are rising - from the third quarter of 2005 to the third quarter of 2006 UK commercial electricity prices rose by more than 30 percent. IDC reckons that in 2010, 70 cents will be spent on power and cooling for every dollar spent on IT hardware. By 2012 the power and cooling cost will be $1 for every hardware dollar.

IBM estimates that 30-40 percent of the overall operating budget for a commercial IT operation is allocated to energy costs. Some data centres have reached their power supply limit and simply cannot install any more IT kit. Others are approaching that state.

Many of IBM's customers are building more and more data centres or expanding existing ones because their need for IT power is rising faster than IT components are shrinking.

Blade server technology is not helping from an energy use viewpoint. Five years ago a rack of servers might draw 1-5 kilowatts; now a fully-loaded blade server rack can draw 20-30 kilowatts. This problem is not going away and IBM asserts that it needs fixing now. Furthermore, the company reckons it can take out 40-50 percent of the power needed in an average data centre.

Of course, to measure any such saving we first have to know a data centre's current power draw. We have to put metering in place so that, once we start thinking green, we can see the results of the actions we take.

Data centre power use pattern

The power actually used by IT kit is only about 30 percent of the total data centre power draw. The chiller and cooling tower may use 33 percent, the UPS kit 18-20 percent and the computer room air-conditioning 5 percent or so. It is far from being just the IT equipment that consumes the data centre's power.

IBM can run an energy efficiency assessment for customers. This is not a simple task and needs six to eight weeks to complete. It gives users a benchmark for comparison and identifies scope for improvement, with around fifteen recommendations and their payback in terms of estimated annual electricity savings.

The Green Grid is working on data centre efficiency metrics. The basic measurement looks like an index of data centre efficiency (DCE) in which the total data centre power draw is divided by the power actually used by the IT equipment itself.
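As a rough feel for what such an index means, here is a minimal sketch in Python using the illustrative percentages quoted above; the 1,000 kW facility size and the "other" remainder line are assumptions for the example, not figures from IBM or the Green Grid.

```python
# Sketch of the index described above: total data centre power divided by
# the power drawn by the IT equipment. The percentages are the rough figures
# quoted in this article; the 1,000 kW facility size is assumed for illustration.

def dce_index(total_kw: float, it_kw: float) -> float:
    """Total facility power divided by IT equipment power."""
    return total_kw / it_kw

total_kw = 1000.0                                      # assumed facility draw
breakdown = {
    "IT equipment": 0.30 * total_kw,                   # servers, storage, network
    "Chiller and cooling tower": 0.33 * total_kw,
    "UPS": 0.19 * total_kw,
    "Computer room air-conditioning": 0.05 * total_kw,
    "Other (remainder of the quoted figures)": 0.13 * total_kw,
}

for item, kw in breakdown.items():
    print(f"{item}: {kw:.0f} kW")

index = dce_index(total_kw, breakdown["IT equipment"])
print(f"Index: {index:.1f} facility watts per IT watt")  # roughly 3.3
```

On these figures every watt delivered to a server drags more than two further watts of cooling and power-distribution overhead along with it, which is why the non-IT side of the room is such an obvious target.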

Three-step approach to data centre efficiency

Data centre energy use can be roughly split between the cooling and power-supply kit on one side and the IT equipment on the other. Up until now IT equipment has been switched on all the time, and data centre cooling has been just that: cooling the data centre as a whole. The whole room has been treated as a single warm box which needs cooling.

In fact, it is not a single warm box but a collection of hot spots at varying temperatures. We can do three kinds of things to raise our data centre's power efficiency.

Firstly, instead of cooling the whole room we reduce the incidence of hot spots by switching unneeded equipment off or replacing it with fewer, more efficient boxes - the server and storage virtualisation approach.
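To give a feel for the arithmetic behind that consolidation argument, here is a small sketch; the server counts and per-box wattages are assumptions chosen for illustration, not IBM figures.

```python
# Hypothetical consolidation estimate: the server counts and wattages below
# are assumptions for illustration, not figures from IBM.

old_servers = 20          # lightly loaded standalone boxes, assumed
old_watts_each = 300      # typical draw per box, assumed

new_hosts = 4             # virtualisation hosts carrying the same workloads, assumed
new_watts_each = 450      # bigger boxes, but far fewer of them, assumed

before = old_servers * old_watts_each      # 6,000 W
after = new_hosts * new_watts_each         # 1,800 W
saving = before - after

print(f"Power before: {before} W, after: {after} W, "
      f"saved: {saving} W ({100 * saving / before:.0f}%)")
```

The saved watts count twice, of course: every watt no longer drawn by a server is also a watt of heat the cooling plant no longer has to remove.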

Secondly, we cool the hot spots. John Kirby, an IBM IRS specialist in the UK, said: "The front of the server rack is the most critical part of the data centre." That's where the cool air needs to be.

There is no point in blowing cold air into already cool places. So the cooling has to be directed at specific hot spots, with the airflow routed to them without obstruction; under-floor cable clutter that gets in the way has to be sorted out. Then the air warmed by the equipment must be vented from the data centre and cooled without heating up other parts of the room.

A rack layout with alternating cold-air and hot-air aisles is one way of achieving this.

Thirdly, we increase the efficiency of devices using power so that less is wasted, primarily as heat. Transformers, UPSes, and server and desktop power supplies are all generally capable of operating at a higher efficiency. Such components cost more, but the lower electricity bills they bring pay for them.
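As an illustration of that payback argument, the following sketch works out how quickly a higher-efficiency server power supply might pay for itself; the efficiencies, load, price premium and electricity tariff are assumptions for the example, not figures from IBM.

```python
# Hypothetical payback calculation for a higher-efficiency power supply.
# Efficiencies, load, price premium and tariff are assumptions for illustration.

load_w = 400                 # power actually delivered to the server, assumed
old_efficiency = 0.75        # typical older PSU, assumed
new_efficiency = 0.90        # higher-efficiency replacement, assumed
price_premium = 30.0         # extra cost of the better PSU, in pounds, assumed
tariff_per_kwh = 0.10        # electricity price, pounds per kWh, assumed
hours_per_year = 24 * 365    # kit that is switched on all the time

# Wall-socket draw for each supply at the same delivered load.
old_draw_kw = load_w / old_efficiency / 1000
new_draw_kw = load_w / new_efficiency / 1000

annual_saving = (old_draw_kw - new_draw_kw) * hours_per_year * tariff_per_kwh
payback_years = price_premium / annual_saving

print(f"Annual saving: £{annual_saving:.2f}, payback in {payback_years:.2f} years")
```

On these assumed numbers the premium is recovered in well under a year, before counting the reduced cooling load that comes from wasting less power as heat.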

So we waste less power in both non-IT and IT equipment, we switch off permanently or temporarily unneeded kit, and we cool the hot spots in a way that optimises airflow.

Part 2 continues here.