Blade servers are gradually taking over data centres as their processing power increases and they deliver higher densities. However, they bring problems of their own, mainly related to cooling and power draw.

IBM is the blade server market leader, so we spoke to Big Blue's BladeCenter strategy manager Tim Dougherty, who was in the UK yesterday. He had just come from talking to customers in the City of London, so we asked him about their top-of-mind issues.

Q: What do your customers worry most about?

A: Power and cooling. In the last 10-12 years, the amount of CPU power they're consuming has increased by a factor of six, and that has real consequences. The cost of buying a server is the same as it was then, but the cost of running it is much higher. One customer said that in the last four months, electricity rates went up 25 per cent. The cooling cost is half the CPU running cost; add the cost of floor space and, in three years, that server has cost you twice as much to run as to buy.
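
To put rough numbers on that arithmetic – the figures below are purely hypothetical, not IBM's – a quick sketch of the three-year run-versus-buy comparison looks like this:

```python
# Rough sketch of the run-versus-buy arithmetic above.
# All figures are hypothetical, for illustration only.
purchase_price = 3000                      # one-off cost of the server (assumed)
cpu_power_cost_per_year = 1200             # electricity to run it (assumed)
cooling_cost_per_year = cpu_power_cost_per_year / 2  # "cooling is half the running cost"
floor_space_cost_per_year = 200            # share of data-centre floor space (assumed)

annual_running_cost = (cpu_power_cost_per_year
                       + cooling_cost_per_year
                       + floor_space_cost_per_year)
three_year_running_cost = 3 * annual_running_cost

print(f"Purchase price:      ${purchase_price}")
print(f"Three-year run cost: ${three_year_running_cost:.0f}")
print(f"Run/buy ratio:       {three_year_running_cost / purchase_price:.1f}x")
```

With those assumed figures the server costs around twice as much to run over three years as it did to buy, which is the kind of ratio Dougherty is describing.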

Q: What solutions do they have available? What about distributed data centres, consolidation, virtualisation and so on?

A: In terms of solutions, they can distribute the data centre, they can deploy best practices such as creating hot and cold aisles. They can also use an IBM solution, Cool Blue, which is a water-based cooling system consisting of a door on the back of a rack, so that air is cooled on its way out. It takes 55 per cent of the heat out at source.

You'd be surprised at the percentage of people doing consolidation and virtualisation. The reality is that we've got a ton of customers who are virtualising their blades. Because the average utilisation of an x86 CPU is pretty low – the typical Citrix, Exchange or Domino server runs at very low utilisation – virtualisation helps solve that problem.
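
As a hypothetical illustration of why low average utilisation makes consolidation attractive – the utilisation figures below are assumptions, not IBM data:

```python
import math

# Hypothetical consolidation sums, purely to illustrate the point
# about low average x86 utilisation. All figures are assumed.
physical_servers = 20       # servers each running one lightly loaded app (assumed)
avg_utilisation = 0.10      # typical utilisation of each server (assumed)
target_utilisation = 0.60   # comfortable ceiling per virtualised host (assumed)

aggregate_demand = physical_servers * avg_utilisation   # in "whole host" units
hosts_needed = math.ceil(aggregate_demand / target_utilisation)

print(f"Aggregate demand: {aggregate_demand:.1f} hosts' worth of work")
print(f"Virtualised hosts needed: {hosts_needed} (down from {physical_servers})")
```

Under those assumptions, twenty lightly loaded boxes collapse onto a handful of virtualised hosts, which is where the power, cooling and floor-space savings come from.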

Virtualising everything makes better use of the asset, and is much better at managing workloads. And it's a major plus because when something fails, and it will, you can move the workload somewhere else.

What's more, customers can automate that. The process is people-intensive right now – things such as hot-swapping disks and so on – but we say there's a better way, and that means one admin guy can handle a lot more servers.

Q: How are data centre managers managing the relationship with the rest of the organisation – for example with the facilities managers, with whom they have to negotiate over issues such as power?

A: Historically, there have been a lot of times when the data centre guy says he doesn't have enough cooling, power, airflow and so on. So yes, there's a conflict that didn't exist before. The problem is not so much the language they use, as some argue. They're just finding out that they're not independent of each other any more, and they're finding they need to talk to each other more.

The problem goes back to the six-fold increase in CPU power and the airflow being consumed, plus the consolidation that's been going on for four or five years. Decentralisation is over; organisations are recentralising their servers. They're taking back servers that used to run out in the departments and pulling them into the data centre. In the past, department groups could buy a $10k server; they didn't need IT's sign-off, and they didn't need an IT guy to run it.

But when the economic crunch came, cutbacks meant the local IT support guy was pulled back to his proper job so the hardware went back to IT.

Q: Do you still see rooms full of x86 servers that have been there for ages and no-one knows what they do?

A: Yes, but not as many as you did six years ago. The good news is that there are tools available that can find server instances – I've heard IT guys say "wow, I didn't know that was there!" – but it doesn't happen as often as before.
