Energy efficiency and server virtualisation are two of the key issues in the data centre today, but if they are not carefully managed they can introduce new problems - and they can conflict as easily as they complement each other.
That's according to Dr Joseph Reger, the chief technology officer of Fujitsu Siemens Computers, a company with fingers in a good many IT pies and also a notable interest in greener technology, thanks in part to its German heritage.
"As soon as electricity costs 15 or 16 cents a kilowatt-hour, people will take it into account," says Reger. He adds that small servers have become so cheap and popular that they now account for the biggest share of energy consumption in data centres - simply because there are so many more of them than there are large servers.
The problem with that is that cheap servers tend to have cheap power supplies that can be as little as 30 or 40 percent efficient. Reger says the answer is to spend a bit more to get machines with switched-mode power supplies.
Switching to efficient power
"Europe now mandates switched-mode power supplies, as do parts of the US - it's not much extra cost," he says. "The transformer losses fall as the switching frequency rises, so you convert to a higher frequency before transforming the voltage, and efficiency can go from 50 percent to 80 percent.
"Currently, an energy-conscious server needs 50 to 60 watts, down from 120W, so that's good," he continues, noting that once you add it all together, "The energy saving after three or four years is equivalent to the purchase price of the server. Google is now designing its own power supplies, because that is the latest issue."
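Reger's payback claim is easy to check with rough numbers. The sketch below uses the wattage figures he quotes; the electricity tariff is taken from the 16-cent threshold he mentions, and everything else is an illustrative assumption, not a figure from the interview.

```python
# Back-of-the-envelope payback calculation. The 120W and 50-60W figures
# come from the interview; the tariff is the 16-cent threshold Reger cites.
OLD_WATTS = 120              # draw of an older, less efficient server
NEW_WATTS = 55               # mid-point of the 50-60W "energy-conscious" figure
TARIFF = 0.16                # assumed price per kWh, in euros
HOURS_PER_YEAR = 24 * 365    # a server running around the clock

saving_per_year = (OLD_WATTS - NEW_WATTS) / 1000 * HOURS_PER_YEAR * TARIFF
print(f"Annual saving per server: about {saving_per_year:.0f} euros")
print(f"Saving over four years: about {4 * saving_per_year:.0f} euros")
```

Over four years that comes to a few hundred euros per machine - roughly the price of a cheap commodity server, which is the point Reger is making.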
Another route - and one which Fujitsu Siemens is pushing with its 'data centre in a box' strategy - is to use blade servers, preferably with server virtualisation on top to provide the number of servers you actually need.
"Blades are becoming popular and they share a power supply, so we can make them more efficient," Reger says. "Also automation software can switch off unneeded equipment – that works equally well for Windows or Linux."
Energy efficiency also needs to influence the way systems are deployed, he says - for example, it completely turns around the conventional approach to load balancing.
"Servers don't currently use the likes of SpeedStep," he adds. "But now we have thousands of servers in the data centre, with an average load of 10 percent, and simple physics tells you that the lower the utilisation is, the worse the efficiency is.
Rebalancing the loads
"Load balancing algorithms used to even out the load across the servers, but that's the worst approach for power efficiency. It's better to fill three, say, and use power management to put the rest into standby."
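The consolidation argument can be sketched with a simple linear power model - an idle floor plus a load-proportional component. The wattage and standby figures below are assumptions for illustration, not Reger's numbers; only the 10 percent average utilisation comes from the interview.

```python
import math

# Toy power model: a powered-on server draws IDLE_W at zero load, rising
# linearly to PEAK_W at 100% utilisation; standby draw is near zero.
IDLE_W, PEAK_W = 70.0, 120.0    # assumed figures for illustration
STANDBY_W = 3.0                 # assumed standby draw

def server_power(utilisation):
    """Power draw of one powered-on server at a given utilisation (0..1)."""
    return IDLE_W + (PEAK_W - IDLE_W) * utilisation

def even_spread(total_load, n_servers):
    """Conventional load balancing: every server stays on at equal load."""
    return n_servers * server_power(total_load / n_servers)

def consolidate(total_load, n_servers):
    """Fill as few servers as possible; put the rest into standby."""
    active = max(1, math.ceil(total_load))    # servers needed at full load
    per_server = total_load / active
    return active * server_power(per_server) + (n_servers - active) * STANDBY_W

# Ten servers sharing an aggregate load equal to one fully busy machine -
# the 10 percent average utilisation Reger describes:
print(even_spread(1.0, 10))   # every box idling at 10 percent
print(consolidate(1.0, 10))   # one busy box, nine in standby
```

Because the idle floor dominates at low utilisation, spreading the load evenly keeps ten part-empty machines burning their idle power, while consolidation pays for one busy machine plus nine near-zero standby draws.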
But energy-efficient hardware and software are not enough on their own - either to cut the company's electricity bill or to save the planet. "If you don't do data centre cooling and design well, for every 100W of power used, you have to pump in 200W or maybe even 300W," he says.
"And the bad news is there are a lot more devices outside the data centre than inside. In Germany, six percent of power is wasted on devices in standby mode - it's similar in the US. The situation has got worse with PVRs (personal video recorders) and so on, because they have to be on to get your EPG downloads and software updates. It is not viable to ban standby - there is talk of that from Brussels, but it's not feasible."
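The cooling and distribution overhead Reger describes is what the industry now measures as power usage effectiveness (PUE) - total facility power divided by the power that reaches the IT equipment. A minimal sketch, using only the ratio from his quote:

```python
def pue(facility_watts, it_watts):
    """Power usage effectiveness: total facility power over IT power.
    1.0 is the theoretical ideal; higher means more overhead."""
    return facility_watts / it_watts

# Reger's range: 100W of IT load needing 200W, or even 300W, at the meter.
print(pue(200, 100))
print(pue(300, 100))
```

A PUE of 2 or 3 means that for every watt of useful computing, one or two more watts go on cooling, power conversion and distribution.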
He notes that the introduction of virtualisation can not only improve efficiency - for example, by loading up a physical server with virtual servers so it runs as efficiently as possible - but also decrease it.
"A problem that's not been written about yet is that software deals with hardware in an energy-efficient manner, but the whole point of virtualisation is to stop the software talking directly to the hardware, so your power management doesn't work any more," he explains.
"One option is to put energy management into the virtualisation layer – there are companies doing that but it means that the thin layer of virtualisation software is getting thicker.
"So we are taking functions from the operating system into the virtualisation layer. Will the virtualisation layer become the operating system of the future? You can now do things directly on the hypervisor without an operating system - run Java, for example."
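The idea of moving power management into the virtualisation layer can be sketched schematically. Everything below is invented for illustration - it is not any real hypervisor's API - but it shows the point of the quote: each guest OS sees only its own low utilisation, while the hypervisor sees the aggregate load and so is the only layer that can drive the host's power states sensibly.

```python
# Schematic illustration only: the class and method names are hypothetical,
# not a real hypervisor interface.
class HypervisorPowerManager:
    def __init__(self, p_states=(1.0, 0.75, 0.5)):
        # Available host P-states as relative CPU frequencies (assumed values)
        self.p_states = sorted(p_states, reverse=True)

    def choose_p_state(self, guest_utilisations):
        """Pick the lowest host frequency that still covers aggregate demand.

        No single guest can make this decision: each one sees only its own
        slice of the load, not the total across all virtual machines."""
        demand = sum(guest_utilisations)          # aggregate load, host view
        for freq in reversed(self.p_states):      # try slowest state first
            if demand <= freq:
                return freq
        return self.p_states[0]                   # saturated: run flat out

mgr = HypervisorPowerManager()
print(mgr.choose_p_state([0.1, 0.1, 0.1]))   # three lightly loaded guests -> 0.5
print(mgr.choose_p_state([0.3, 0.4, 0.2]))   # heavier aggregate load -> 1.0
```

The trade-off Reger names is visible even in this toy: every such function added to the hypervisor makes the "thin" virtualisation layer a little thicker.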
For hypervisors, thin is good
That is introducing new risks though, he adds. "Hypervisors are not immune to malware any more - we have already seen the first hypervisor virus. The most disconcerting thing is that you cannot detect it. But the days of the hypervisor being hidden are over, and we will have to deal with that."
He is also sceptical of Microsoft's plans to build a hypervisor - the manager for virtual machines - into Windows Server, code-named Viridian.
"It doesn't surprise anyone that Microsoft is looking at putting virtualisation into the operating system," he says. "But systems manufacturers want to build systems that can run anything - Windows, Linux, maybe even a mainframe operating system.
"On our blade servers, the OS is provisioned with the application, for example to repurpose a server overnight. I would prefer not to have to change the hypervisor as well.
"My perspective is it would be good for the industry if we kept to clear layers - hardware, a virtualisation layer, and then anything on top of that.
"Then we would have open standards and open interfaces, and could assemble a system as we do today. That doesn't work with the Microsoft proposition. It didn't work with VMware either, but the company has since changed its position and now also supports the independence of the layers, which is great."