Grabbing the opportunity to talk to Kwasi Asare, IBM's CoolBlue product marketing manager, at the Power and Cooling Data Centre Summit in London, we asked him for an update on IBM's thinking on data centre energy usage.

Q: How would you describe the data centre energy problem in its broadest perspective? A: The problem is about energy management, and when it comes to x86 platforms it's to do with system placement - because there just isn't the same legacy of systems management as there is with mainframes and the big Unix boxes.

In spite of chipsets that help manage energy, x86 systems consume a lot. Essentially, you have systems with high performance in a small form factor. So chips get hot.

To give you the perspective of one bank I speak to, its data centre manager says: 'I have three buckets I can put my money in -- power, servers and cooling -- and I'd rather spend it with you.' So for us, inefficiency represents an opportunity.

Q: What is CoolBlue? A: It's an assembly of cooling building blocks designed to help customers maximise their ROI and lower TCO.

In the x86 world, the aim is to bring that to everyone as soon as possible. It includes hardware, services and solutions. For example, PowerConfigurator allows IT managers to set up and design their own systems; by taking advantage of our R&D they can get a better handle on power consumption and manage it. Meanwhile, PowerExecutive collects, controls and caps energy use, helping you understand where the power is being used.

Q: What are the biggest problems data centre managers face? A: The biggest problems are in budgeting and planning.

Part of that problem is that the last person to find out about a server acquisition is the facilities manager. Yet planning and budgeting for energy, and the design of cooling and other infrastructure, need to be done in advance in preparation for a long lifecycle. Facilities needs a roadmap showing bottlenecks and power consumption. The services arm of IBM's CoolBlue can help them with that.

Q: Where do the biggest problems lie? A: The main culprit is the CPU, because its footprint is so small. It means that systems built on new chips from AMD and Intel are running at 400W, even at idle. The variable costs of running x86 systems are outstripping the cost of acquiring them. Other culprits include the multiple conversions from AC to DC, much of the loss coming from energy-inefficient PSUs. Other problem areas include network switches and storage.
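To see how running costs can catch up with purchase price, here is a minimal back-of-the-envelope sketch. The 400W figure is from the interview; the electricity tariff, PUE overhead and server price are illustrative assumptions, not figures Asare quoted.

```python
# Rough illustration: annual energy cost of a 400W server versus its purchase price.
# Tariff, PUE and server price are assumed figures for the sake of the arithmetic.

server_draw_kw = 0.4        # 400W draw, as quoted for new x86 systems
hours_per_year = 24 * 365
pue = 2.0                   # assumed: each IT watt needs roughly another watt of cooling/power overhead
price_per_kwh = 0.12        # assumed electricity tariff in $/kWh
server_price = 3000         # assumed acquisition cost in $

annual_energy_cost = server_draw_kw * hours_per_year * pue * price_per_kwh
print(f"Annual energy cost: ${annual_energy_cost:,.0f}")            # ~$840
print(f"Energy cost over 4 years: ${annual_energy_cost * 4:,.0f}")  # ~$3,360 -- comparable to the server itself
```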

Q: Naturally, you have a solution? A: We try to manage everything that creates heat inside the rack - our cooling systems can manage 55 per cent of the heat coming out of the rack, at power levels up to 30kW.

Ideally, water cooling is the answer - it's 3,500 times more efficient than air - and it lets us take advantage of our mainframe heritage. The System z guys use it a lot.
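The "3,500 times" figure is consistent with the difference in volumetric heat capacity between water and air, i.e. how much heat each can carry away per unit volume per degree of temperature rise. The property values below are standard textbook approximations, not numbers from the interview.

```python
# Rough check of the "3,500 times" claim via volumetric heat capacity.

water_density = 1000.0   # kg/m^3
water_cp = 4186.0        # J/(kg*K)
air_density = 1.2        # kg/m^3 at roughly room temperature
air_cp = 1005.0          # J/(kg*K)

ratio = (water_density * water_cp) / (air_density * air_cp)
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")  # ~3,470x
```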

Q: What can data centre managers themselves do? A: One data centre manager came to me and said that 400kW of space would cost $4m to install. We said the answer was PowerExecutive, which allowed them to manage those costs.

Q: How successful are facilities managers at talking to IT and data centre managers? Do they share common metrics? A: They're not talking enough. Our view is that PowerExecutive allows them to share metrics, and gives them the same set of controls across their areas of expertise.

Q: Are enterprises changing their structures to ameliorate this situation? A: Companies like Google are doing so - the facilities guys are becoming more involved with the business decisions. And one UK financial services company's facilities managers are using PowerExecutive to show the executives and accountants how much their systems are costing.

Q: How long can we assume unlimited availability of power from the grid? A: One power company in California is encouraging customers to save energy by cutting bills by $1,000 a month if they manage to reduce usage.

Generally, we need to be more sensitive to energy use and availability. But customers are not cutting usage. Instead, they're using the same power envelope and squeezing more systems into it, to get 100 per cent utilisation out of that allocation and those systems.

So we're getting more intelligent about power management, but I see no sign of any slowing in the need for more power. The problem is getting worse.

Q: Is a move to DC within the data centre the answer? A: Some people are doing that - one telco is doing it with [IBM] BladeCentre. There's a lot of inefficiency at the power supply level - some PSUs are as much as 30 per cent inefficient.

Proper management and improved systems layout are the answer. We've started to take on that challenge, but we'll need energy companies to help - they could provide DC power to the data centre, for instance.
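A quick sketch of why those conversion losses add up: each stage between the grid and the silicon wastes a slice of the power. The 30-per-cent-inefficient PSU is from the interview; the other efficiency figures are assumed for illustration only.

```python
# Rough illustration of cascaded AC/DC conversion losses (assumed figures except the PSU).

ups = 0.92          # assumed double-conversion UPS efficiency
pdu = 0.97          # assumed distribution/transformer efficiency
psu = 0.70          # "as much as 30 per cent inefficient" power supply
vrm = 0.85          # assumed on-board voltage regulation efficiency

delivered = ups * pdu * psu * vrm
print(f"Fraction of grid power reaching the silicon: {delivered:.0%}")  # ~53%

# With a more efficient PSU the picture changes noticeably:
better = ups * pdu * 0.90 * vrm
print(f"With a 90%-efficient PSU: {better:.0%}")                        # ~68%
```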

Q: Is that likely to happen? A: [No answer was provided].

Q: What are the key hurdles in future? A: There is some FUD out there with respect to measuring performance per watt - some vendors are bringing forward spurious metrics, such as Sun's SWaP. We need benchmarks that reflect true data centre layout, not systems strategically placed to achieve a specific benchmark result. What customers need is an accurate benchmark so they can get the best result.

Q: What else can data centre managers do? A: They need to think about the triple bottom line: shareholder value, environment and social responsibility. It's important for us to go forward as an industry -- all of us: AMD, Dell, HP and so on -- to be open and develop standards. We want to help develop metrics on performance per watt with the [US] Environmental Protection Agency and the Green Grid. This allows positive decisions to be made at a macro level.

After all, customers don't standardise exclusively around IBM solutions, so why should we? We need to be open and collaborate.