The European Commission is calling for faster and more decisive action from the IT industry on raising data centre temperatures to enable more free air cooling.
Speaking at The Green Grid EMEA Forum in Brussels, Paolo Bertoldi of the European Commission's Joint Research Centre (JRC) listed some of the successes of the EU Code of Conduct for Data Centres, such as more air flow containment in new build facilities and greater use of power measurement and metering.
However, he said that there are still a lot of data centres with a very low cooling set point (22-23ºC), and very narrow humidity control, resulting in an ongoing reliance on power-hungry mechanical chillers.
“We at the best practice group listen to all the different opinions and technical views and try to reach a compromise on the process of standards such as ASHRAE and ETSI in Europe,” said Bertoldi.
“We want to increase free cooling, and we have a lot of data showing that some data centres are running higher temperatures, and their operations are reliable. That means 26-27ºC, some days up to 30ºC. So what we need is an equipment guarantee up to 30ºC or more for a short period of time.”
In a recent report, “Data Centre Efficiency & IT Equipment Reliability”, The Green Grid stated that the current perception of data centre equipment’s tolerance to heat and humidity is based on archaic practices dating back to the 1950s, resulting in an enormous waste of money and carbon.
Periods of high heat and humidity can be offset by periods of more favourable environmental conditions, when water- and air-side economisers can be used for cooling. This allows data centres to reduce their reliance on mechanical chillers without any detriment to overall failure rates.
Intel has been advising its customers to increase the temperature in their data centres for years, and now some of the data centre equipment manufacturers, such as Dell, are also urging customers to cut down on their use of mechanical chillers and adopt more eco-friendly forms of cooling.
However, Steve Strutt, IBM's CTO of Cloud Computing for the UK and Ireland and member of The Green Grid's EMEA Technical Work Group, said that such a fundamental change cannot happen overnight.
“It's not just about the change in equipment supporting a different environmental range. It's actually about users adopting it, being comfortable to adopt it, understanding what the implications are going to be – because there are implications,” said Strutt.
“Trying to communicate that to bureaucrats in the EU is not easy. Their job is to negotiate agreement between different parties. When different parties don't want to agree, it doesn't work very well.”
In 2011, ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) added two new data centre classifications to their guidelines – A3 and A4 – which expand the allowable temperature range to 40ºC and 45ºC respectively.
Unlike previous ASHRAE guidelines, these were not based on a consensus in the industry, but were intended as an aspirational target, so that as new equipment came out, the warranties would support higher operating temperatures.
“Now that it is effectively in the Code of Conduct that A3 is preferred, then over time buying patterns and refresh will ensure that most of the kit out there is to this standard, and we should then be able to widen the operating range,” said Strutt.
“However, what Paolo hasn't realised is that it takes several years for that to happen. So he's not seeing any change in operating temperature, because you've got to have all your kit more or less working to these specifications to be able to adopt it throughout the data centre.”
Strutt added that there is currently no clarity within the industry around warranty and support across A3 and A4. Operating at higher temperatures generally means more failures, which in turn means more warranty costs. But should these costs be borne by the supplier or passed on to the end user?
In fact, many warranties already cover higher operating temperatures, according to Strutt, but equipment vendors do not tend to advertise this, so customers have to proactively check with their suppliers. There is also no way to prove what temperature the equipment was operating at when it failed.
Impact on failure rates
In most of Europe, however, the likelihood of outdoor temperatures regularly exceeding recommended operating temperatures is very low.
“London is below 20ºC for about 93% of the year. It is only about 2% of the time that the temperature is over 25ºC, and the amount of time that it's over 30ºC is an infinitesimal number of hours,” said Strutt.
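Strutt's percentages translate into concrete hours of free cooling per year. A minimal sketch of that arithmetic, using only the figures quoted above (the 8,760-hour year is the standard 365 × 24):

```python
# Convert the quoted London temperature percentages into hours per year.
HOURS_PER_YEAR = 365 * 24  # 8760

below_20c = 0.93 * HOURS_PER_YEAR  # hours suitable for full free cooling
above_25c = 0.02 * HOURS_PER_YEAR  # hours where supplementary cooling may be needed

print(round(below_20c))  # roughly 8147 hours a year below 20ºC
print(round(above_25c))  # roughly 175 hours a year above 25ºC
```

In other words, a London facility relying on outside air would need mechanical assistance for only around a week's worth of hours per year.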
Moreover, while raising the temperature in the data centre does increase equipment failure rates, the impact is smaller than many assume.
“Say your data centre has 1000 servers, and you get 4 failures a year. It's only when you get to operating continuously at 32ºC, 365 days a year, that you get 1.5 times the number of failures, so you'll have six servers fail a year instead of four.”
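The worked example in the quote can be sketched as a simple expected-failures calculation. The baseline of 4 failures per 1,000 servers per year and the 1.5× multiplier at a sustained 32ºC are the figures Strutt cites; the function name and structure are illustrative:

```python
def expected_failures(servers, baseline_rate_per_server, temp_multiplier=1.0):
    """Expected annual failures for a fleet, scaled by a
    temperature-dependent failure multiplier."""
    return servers * baseline_rate_per_server * temp_multiplier

# Baseline: 4 failures per 1000 servers per year at conventional set points.
normal = expected_failures(1000, 4 / 1000)

# Worst case in the quote: running continuously at 32ºC, 365 days a year,
# gives 1.5x the failure rate.
sustained_32c = expected_failures(1000, 4 / 1000, temp_multiplier=1.5)

print(normal, sustained_32c)  # 4.0 6.0 -- two extra server failures a year
```

Note that the 1.5× multiplier applies only to continuous operation at 32ºC; a data centre that exceeds that temperature for a handful of hours a year would see a far smaller increase.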
This may be significant in data centres running enterprise-class applications that assume 99.999% availability, but if the software is designed to cope with random failure of cheaper hardware – as it is in many of the major cloud data centres – the approach works very well.
Strutt concluded that it may take several years for these changes to roll through the whole industry, and that some organisations may only be comfortable raising temperatures by a degree or two for now, but what The Green Grid is outlining is possible.
The most important thing is that the protocols and best practices that develop are appropriate for the industry, and that the business case for raising the temperature in the data centre is sound.