Making datacentres more power-efficient is becoming a mantra of modern IT practice. Whether it is to reduce electricity costs, to get around supply limitations, or to become friendlier to the environment, plenty of IT directors, equipment suppliers, research and analyst houses, and green lobby and pressure groups are all singing from the same hymn sheet.

Let's assume you have a datacentre that's more than five years old, one with lots of servers, storage arrays, internal and external network boxes, a raised floor with power and data cabling underneath, air-conditioning with a chiller outside, fire suppression equipment and so forth.

Generally speaking it works, but you are finding you can't get any more power delivered to it, and/or it's now full of rack cabinets and stand-alone equipment, and/or its power supply is costing too much; there is too much equipment to power and keep cool. What can you do?

We can break the problem down into six different areas:

- The room structure and layout
- The power supply
- The cooling infrastructure
- Servers
- Storage
- Networking

Where can you direct your attention to get the most cost-effective return on any power-reduction activities, bearing in mind that the datacentre has to keep operating while you do this?

Datacentre infrastructure

Suppliers such as IBM have terrific datacentre renovation technology and plenty of experience; they can build new datacentres or renovate existing ones to embrace the latest ideas for power, cooling and layout so as to reduce power needs. But this is big-bucks territory and means wholesale datacentre remodelling. Obviously the datacentre equipment is out of action while this is being done, and a major project is involved.

We can't really classify this as low-hanging fruit on the green tree though.

Hot aisle/cold aisle design is a way of improving cooling effectiveness by separating cold and warm air flows. It means resiting equipment in racks so that fans all blow the same way, and fitting blanking plates to cover empty rack spaces, so the equipment is out of action while you do this. Doing it on its own only takes you so far; combining it with clearing under-floor obstructions and running cabling along the tops of racks is better. The process then starts to look like a datacentre reconstruction, though, with consultancy fees, engineering expense and a big project forming up. This is not a simple thing.

Power supply and cooling

Power supply companies are getting on board the green train and providing services and products to raise cooling and power-efficiency awareness. Emerson Network Power in the USA offers to survey your energy use. Its Liebert Data Center Assessment service analyses heat removal from sensitive equipment and evaluates electrical capacity and quality.

The assessment is done on-site and, using computational fluid dynamics, produces a visual image showing datacentre airflow, temperature, hot spots and zones that can negatively affect computer performance.

There are two components to the assessment: thermal and electrical. The thermal assessment takes air and temperature readings at specific points in the datacentre to identify hot spots. Airflow is also measured to map raised-floor air patterns, sub-floor obstructions and air flow through computer racks.

An electrical assessment performs a single-point-of-failure analysis to document weak spots. It calculates the capacity of all switchgear from the main down to mission-critical power distribution units, and measures the current being drawn through each UPS against its rated capacity. A harmonic analysis of the main breaker switchgear and the load side of each UPS is also performed. The assessment also determines the kW and kVA loading on each UPS and compares it to the equipment's rating.
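To make that last piece of arithmetic concrete, here is a minimal sketch of the kind of kW/kVA headroom check such an assessment performs. This is not Emerson's tool; the unit names, readings and ratings are invented for illustration.

```python
# Illustrative only: a simple UPS loading check of the kind an electrical
# assessment performs. The readings and ratings below are made-up examples.

def ups_loading(measured_kw, measured_kva, rated_kva):
    """Return load percentage against rating and the power factor."""
    load_pct = 100.0 * measured_kva / rated_kva   # how much of the UPS capacity is in use
    power_factor = measured_kw / measured_kva     # real power versus apparent power
    return load_pct, power_factor

# Hypothetical readings for three UPS units
units = {
    "UPS-A": {"kw": 72.0, "kva": 80.0, "rated_kva": 100.0},
    "UPS-B": {"kw": 45.0, "kva": 60.0, "rated_kva": 80.0},
    "UPS-C": {"kw": 95.0, "kva": 98.0, "rated_kva": 100.0},
}

for name, r in units.items():
    load, pf = ups_loading(r["kw"], r["kva"], r["rated_kva"])
    flag = "  <-- near capacity" if load > 90 else ""
    print(f"{name}: {load:.0f}% of rated kVA, power factor {pf:.2f}{flag}")
```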

The company sells power supply equipment, not cooling equipment, but Liebert, a company related to Emerson, does.

HP has a Dynamic Smart Cooling (DSC) initiative, which has a Liebert relationship component.

Its Thermal Zone Mapping enables customers to see a three-dimensional model of exactly how much cooling the datacentre air conditioners deliver and where. As a result, they can arrange and manage the air conditioning for optimal cooling, increased energy efficiency and lower costs.

John McCain, HP Services SVP and GM, says: “HP’s Thermal Assessment Services for the datacentre help customers deal with the major challenge of driving energy efficiency to lower operational costs.” He reckons customers can reduce datacentre cooling energy costs by up to 45 percent by using Thermal Zone Mapping and subsequently DSC, which continuously adjusts datacentre air conditioning settings to direct where and when cooling is required. It does this using real-time air temperature measurements from sensors deployed on IT racks.
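As a rough illustration of that closed-loop idea, a cooling controller driven by rack temperature sensors might look something like the sketch below. This is not HP's DSC code; the thresholds, step sizes and sensor/air-conditioner interfaces are all assumptions made for the example.

```python
# Illustrative sketch of sensor-driven cooling control, loosely in the spirit
# of what the article describes. Thresholds, step sizes and the sensor/CRAC
# interfaces are assumptions, not HP's or Liebert's actual API.

TARGET_C = 24.0      # desired rack inlet temperature
DEADBAND_C = 1.0     # tolerance before we adjust anything
STEP_C = 0.5         # how far to move the air-conditioner setpoint per pass

def adjust_setpoint(current_setpoint_c, rack_inlet_temps_c):
    """Nudge the air-conditioner setpoint based on the hottest rack inlet reading."""
    hottest = max(rack_inlet_temps_c)
    if hottest > TARGET_C + DEADBAND_C:
        return current_setpoint_c - STEP_C   # racks too warm: cool harder
    if hottest < TARGET_C - DEADBAND_C:
        return current_setpoint_c + STEP_C   # overcooling: ease off, save energy
    return current_setpoint_c                # within tolerance: leave alone

# Example pass with made-up sensor readings from three racks
setpoint = 22.0
readings = [23.8, 24.2, 25.6]
setpoint = adjust_setpoint(setpoint, readings)
print(f"New air-conditioner setpoint: {setpoint:.1f} C")
```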

The two HP partners for DSC, Liebert Corporation and STULZ, are integrating their cooling control components with it, allowing customers to use DSC with their existing air conditioning products.

This is an attractive idea in principle: you carry on using your existing kit in your existing datacentre but save energy costs by cooling the IT kit more efficiently.

Whether it is a low-hanging fruit depends upon the energy cost-saving payback period for the thermal mapping service and the fitting of the rack sensors and DSC software.

What can you do without wholesale replacement of, and hence interruption to, your datacentre equipment?

Direct current

It's become a truism that converting input alternating current (AC) to direct current (DC) in a datacentre is inefficient. Electricity you have paid for is lost in the process and converted to heat, which adds to the datacentre cooling problem and so needs yet more electricity that you also have to pay for.

The problem is that the conversion happens several times. Incoming AC is converted to DC inside the UPS (and converted back to AC on its output), then converted to DC again by the power supplies in individual servers, storage and networking hardware.

It's said to be better, ie less power is lost, if you convert once at the input point to the datacentre and then distribute DC to all the power-consuming hardware. However, this costs money and requires the datacentre to be retrofitted with DC power distribution, ripping out the existing AC supply.
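A back-of-the-envelope calculation shows why chained conversions matter. The per-stage efficiency figures below are assumed round numbers for illustration, not measurements of any particular kit.

```python
# Illustrative arithmetic: chained AC/DC conversion stages multiply their
# losses. The per-stage efficiencies are assumptions for the example.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of conversions performed one after another."""
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return overall

# Conventional path: AC->DC in the UPS, DC->AC out of it, AC->DC in each server PSU
conventional = chain_efficiency([0.94, 0.94, 0.90])
# DC distribution: a single AC->DC conversion at the datacentre input
dc_distribution = chain_efficiency([0.96])

print(f"Conventional AC path: {conventional:.1%} of input power reaches the IT load")
print(f"Single DC conversion: {dc_distribution:.1%} of input power reaches the IT load")
```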

Once more, this is not a piece of low-hanging fruit to be easily picked. It's easier if you use hosted datacentre services and the hoster pays for the conversion to DC. That's what UK hoster Ultraspeed has done. But as a customer you don't gain from this in terms of lower hosting costs, so why should you bother?

Look inside the datacentre

It seems from this overview of datacentre infrastructure, power and cooling alterations to lower power consumption that there is little low-hanging fruit. Only smarter cooling can save an existing datacentre money on its power supply, and even that may not be a swift payback; it depends on the upfront cost, ongoing licence/service costs and how long it takes to recoup that money.

But at least it can probably be done without renting a fork-lift truck or interrupting significant amounts of datacentre service while you implement it.

It looks as if the low-hanging fruit that does exist resides in the individual server, storage and network boxes, or collections of them. With servers and other IT equipment the general idea is to use fewer boxes to do what you are currently doing, by using them much more effectively. We'll look at what you might be able to do there to save significant amounts of energy cost relatively easily in a subsequent feature.