Making data centres more power-efficient is becoming a mantra of modern IT practice. Whether the aim is to reduce electricity costs, to get around supply limitations, or to be friendlier to the environment, a lot of IT directors, equipment suppliers, research and analyst houses, and green lobby and pressure groups are all singing from the same hymn sheet.

Let's assume you have a datacentre that's more than five years old, one with lots of servers, storage arrays, internal and external network boxes, a raised floor with power and data cabling underneath, air-conditioning with a chiller outside, fire suppression equipment and so forth.

Generally speaking it works, but you find you can't get any more power delivered to it, and/or it's now full of rack cabinets and stand-alone equipment, and/or its power supply is costing too much: there is too much equipment to power and keep cool. What can you do?

We can break the problem down into six different areas:

- The room structure and layout
- The power supply
- The cooling infrastructure
- Servers
- Storage
- Networking

Where can you direct your attention to get the most cost-effective return on any power reduction activities, bearing in mind that the data centre has to keep operating while you do this?

Datacentre infrastructure

Suppliers such as IBM have terrific datacentre renovation technology. IBM has great experience and can build new datacentres or renovate existing ones to embrace the latest ideas for power, cooling and layout so as to reduce power needs. But this is big-bucks territory and means wholesale datacentre remodelling: the datacentre equipment is out of action while the work is done, and a major project is involved.

We can't really classify this as low-hanging fruit on the green tree though.

Hot aisle/cold aisle design improves cooling effectiveness by separating cold and warm air flows. It means resiting equipment in racks so that fans all blow the same way, and fitting blanking plates to cover empty rack spaces; the equipment is out of action while you do this. On its own this takes you only so far; combining it with clearing under-floor obstructions and running cabling along the tops of racks is better. At that point, though, the exercise starts to look like a datacentre reconstruction, with consultancy fees, engineering expense and a big project forming up. This is not a simple thing.

Power supply and cooling

Power supply companies are getting on board the green train and providing services and products to raise cooling and power-efficiency awareness. Emerson Network Power in the USA offers to survey your energy use. Its Liebert Data Center Assessment service analyses heat removal from sensitive equipment and evaluates electrical capacity and quality.

The assessment is done on-site and, using computational fluid dynamics, it creates a visual image showing datacentre air-flow, temperature, hot spots and zones that can negatively impact equipment performance.

There are two components to the assessment: thermal and electrical. The thermal assessment takes air and temperature readings at specific points of a datacentre to identify the hot spots. Air-flow is also measured to pinpoint and map raised-floor air patterns, sub-floor obstructions and air flow through computer racks.
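The hot-spot part of such a thermal assessment boils down to comparing point temperature readings against an inlet-temperature ceiling. Here is a minimal sketch of that idea; the sensor names and readings are invented for illustration, and the 27°C ceiling is the commonly cited upper end of the recommended rack-inlet range, not a figure from the Liebert service.

```python
# Hypothetical sketch: flag rack-inlet hot spots from point temperature
# readings. Sensor names and values are invented for illustration.

INLET_CEILING_C = 27.0  # commonly cited recommended inlet ceiling

readings = {
    "rack-a1-inlet": 24.5,
    "rack-a2-inlet": 29.1,
    "rack-b1-inlet": 26.8,
    "rack-b2-inlet": 31.4,
}

# Any reading above the ceiling is flagged as a hot spot.
hot_spots = {loc: t for loc, t in readings.items() if t > INLET_CEILING_C}

for loc, t in sorted(hot_spots.items()):
    print(f"{loc}: {t:.1f} C exceeds {INLET_CEILING_C:.0f} C ceiling")
```

A real assessment maps these points in space (hence the computational fluid dynamics), but the pass/fail logic per sensor is this simple.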

An electrical assessment performs a single-point-of-failure analysis to document weak spots. It calculates the capacity of all switchgear from the main feed to the mission-critical power distribution units, and measures the current being drawn through each UPS against its rated capacity. A harmonic analysis of the main breaker switchgear and the load side of each UPS is also performed. The assessment also determines the kW and kVA loading on each UPS and compares it to the equipment's rating.
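The kW/kVA comparison at the end of that assessment is simple arithmetic: kW divided by kVA gives the load's power factor, and kVA divided by the UPS's rated kVA gives how heavily loaded the unit is. A sketch, with invented figures (the function name and numbers are illustrative, not part of any vendor's tooling):

```python
# Hypothetical sketch of the kW/kVA loading check described above.
# The UPS figures are invented for illustration.

def ups_loading(kw_load: float, kva_load: float, rated_kva: float):
    """Return (power factor, percent of rated kVA in use)."""
    power_factor = kw_load / kva_load          # real vs apparent power
    percent_loaded = 100.0 * kva_load / rated_kva
    return power_factor, percent_loaded

pf, pct = ups_loading(kw_load=72.0, kva_load=80.0, rated_kva=100.0)
print(f"power factor {pf:.2f}, {pct:.0f}% of rated kVA")
# power factor 0.90, 80% of rated kVA
```

A unit running near 100 per cent of rated kVA has no headroom for growth or for picking up load when a paired UPS fails, which is why the assessment compares loading to rating rather than just recording the load.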

The company sells power supply equipment, not cooling equipment; Liebert, a company related to Emerson, does.