In the past, companies could feel comfortable installing more air conditioning units in data centres as their cooling needs grew. But as servers become faster and more compact, the problems of heat and power can no longer be solved simply by adding more air conditioning. With server vendors rolling out systems that are increasingly powerful - and increasingly small - IT managers need to consider the heat output and power demands of new configurations that pack more processing power into less space.

Pack them in, keep them cool
Consider Virginia Polytechnic Institute and State University (Virginia Tech) in the US, which recently deployed a supercomputing cluster of about 1,100 Apple G5-based systems. To cool the cluster adequately, the university's engineering firm recommended the school use traditional air conditioning units and spread the servers across a 10,000-square-foot area - the entire floor space of the university's main data centre.

"[Spreading the cluster across] a 10,000-square-foot [about 1000 sq. metres] design really wasn't an option," said Kevin Shinpaugh, director of research and cluster computing at Virginia Tech and associate director of the Virginia Tech Terascale Computing Facility, which manages the Terascale cluster. He couldn't allocate the entire data centre for the cluster because of other systems the university had installed.

Shinpaugh looked for other options for cooling the servers and finally settled on precision cooling systems from Liebert that suck hot air out of racks and use rack- or ceiling-mounted air conditioning units. "We had about 3,000 square feet [about 280 sq. metres] of available space [for the cluster] and the [Liebert] extreme cooling option allowed us to do what we needed to do," he said. "Our only other option would have been to build another building."

Shinpaugh added that the school spent about $2 million (about £1.09 million) on the cooling equipment and additional power capacity, and that the data centre now has excess power and cooling capacity and will be able to handle additional systems over the next few years. "The $2 million investment allows us to better use the space we already have," he said. "Once we get over the upfront costs, adding to the cluster or building new clusters will be easy."

Mark Nelson, project manager at Applied Materials, said he too designed his data centre to accommodate increasingly dense configurations. Today the data centre is designed to handle about 75 watts per square foot (about 800 watts per sq. metre) of power, but uses only about 39 per cent of that capacity. "We anticipate as we put in more equipment and as equipment is replaced with newer technology that our wattage per square foot is going to start creeping up. We'll start approaching 75 per cent," he said. Nelson said he runs a redundant power system so there is immediate failover in case of problems, and keeps an extra air conditioner on hand in case heat output spikes above his worst-case scenario.
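
As a rough illustration of the kind of headroom calculation Nelson describes, the sketch below works only from the figures quoted above - a 75-watts-per-square-foot design running at about 39 per cent of capacity - while the floor area is a hypothetical value added purely to make the arithmetic concrete.

    # Back-of-the-envelope headroom check (illustrative only). The design
    # density and utilisation figures are the ones Nelson cites; the floor
    # area is an assumed value used purely for the example.
    DESIGN_W_PER_SQFT = 75       # design power density, watts per square foot
    UTILISATION = 0.39           # share of that capacity in use today
    FLOOR_AREA_SQFT = 10_000     # hypothetical raised-floor area

    capacity_kw = DESIGN_W_PER_SQFT * FLOOR_AREA_SQFT / 1000   # 750 kW
    load_kw = capacity_kw * UTILISATION                        # ~293 kW
    headroom_kw = capacity_kw - load_kw

    print(f"capacity {capacity_kw:.0f} kW, load {load_kw:.0f} kW, "
          f"headroom {headroom_kw:.0f} kW")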

According to the Uptime Institute, a consortium of companies focused on reducing downtime in data centres, the average heat density output in today's data centres is about 28 watts per square foot (300 watts per sq. metre).

"While that number has been increasing for the past few years, it's still nowhere near the number you'd get if you used blade servers," said Kenneth Brill, executive director of the Uptime Institute. "When you go to blade servers, you could reach 400 watts a square foot [4,300 watts per sq. metre] in a large deployment." Brill said some blade users have reported as much as 14 kilowatts of heat output per rack, about the same amount of heat given off by two household electric ovens.

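For readers who prefer metric figures, the bracketed conversions above follow from the fact that one square metre is about 10.76 square feet. A minimal helper along those lines (the function name is ours, not taken from any cited tool) might look like this:

    SQFT_PER_SQM = 10.7639  # square feet in one square metre

    def watts_per_sqft_to_sqm(watts_per_sqft: float) -> float:
        """Convert a power density from W per sq ft to W per sq metre."""
        return watts_per_sqft * SQFT_PER_SQM

    for density in (28, 75, 400):   # W/sq ft figures quoted in the article
        print(f"{density} W/sq ft is about "
              f"{watts_per_sqft_to_sqm(density):.0f} W/sq m")
    # prints roughly 301, 807 and 4,306 - matching the rounded figures above
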
Cees de Kuijer, infrastructure manager at consulting and outsourcing firm Capgemini, said he'll wait for blade server technology to evolve before bringing the compact slices of computing power into his data centre. "Blade servers present several problems - one of them is heating, the other is powering," de Kuijer said. "We basically have a ban on blade servers at this moment on the procurement side."

Practical advice
A research note that Gartner published late last year cautioned enterprise users to think carefully about deploying new technologies such as blade servers and increasingly dense rack-mounted systems. "Without careful planning and coordination between the data centre facilities staff and the server procurement staff, data centres will not be able to increase power or cooling in line with increases in server deployments," Gartner analysts wrote. "We believe that, through year-end 2008, heat and cooling requirements for servers will prevent 90 per cent of enterprise data centres from achieving the maximum theoretical server density."

That is not to say that businesses can't enjoy the benefits of getting the processing power they need in fewer square feet of often-costly data centre space. Gartner said most enterprise server vendors offer assessment services to help customers determine their power and cooling limits.

Companies such as Liebert and APC provide AC and DC power products and precision cooling devices aimed at denser systems. Hardware vendors and chip manufacturers are also focused on the issue, with lower-power chips from Intel and AMD. Intel plans to add power-management technology to its Itanium and Xeon processors in the next year or so, letting users set power thresholds and creating CPUs that can be cycled on and off depending on need.

Nevertheless, the need to monitor power and heat concerns within the data centre continues to grow, especially with companies rolling out distributed computing architectures such as clusters and grids.

Toshiba America opted to use specialised Intel-based servers from Rackable Systems, along with Rackable's distributed DC power technology, to deploy server clusters running electronic design-automation applications in its data centres. Richard Tobias, vice president of Toshiba's ASIC and Foundry business unit, said that because the DC power supply is smaller than traditional AC units, it puts out less heat and is therefore less susceptible to overheating and outage. In addition, the Rackable servers are designed for high-density deployments and sit back-to-back in racks, with hot air forced out of the top of the unit.

"The main factors we were looking at were the cost per rack to build something out and the kind of compute density you can get," Tobias said. "The power savings [with DC conversion] meant that you can put more servers on the rack and do more with the rack."

The bottom line is that IT managers need to work closely with their facilities teams to understand exactly how increased power and cooling requirements will affect server deployments. In many cases, data centres have sufficient airflow and cooling; it is simply being directed inadequately, Brill said. "Just by making some relatively minor changes people can recover enough capacity to get by for a couple of years," he added.

How to keep it cool
Denser servers mean more heat and more power consumption in less space. As corporations deploy these new systems, they must work closely with their facilities teams to ensure their data centres can stand the heat. A few tips:

  • First things first: Do a realistic analysis of cooling capabilities and power availability before planning new deployments (a short sketch of this kind of check follows the list).
  • Make room: Ensure that there is enough room in the back of racks so that cables don’t obstruct airflow.
  • Move up: Consider technology that moves hot air out of the top of racks and cools from the top down.
  • AC/DC: Evaluate DC power alternatives, which are typically less expensive.
  • The dotted line: Don’t overdo it with perforated tiles; focus them in cold aisles.
  • Plug holes: Be sure cable cut-outs behind or underneath racks don’t let too much air escape, reducing airflow pressure.
  • Spread things out: In some cases, having more open space to ease power and cooling requirements makes more sense than pushing for maximum density.
  • Keep your eye on advances: Vendors are introducing servers with multi-threading and multi-core technologies, which provide performance improvements without big jumps in heat and power.
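
As a sketch of the first tip - checking planned deployments against the cooling and power actually available - the snippet below compares a hypothetical blade roll-out with assumed spare capacity. Only the 14 kW-per-rack figure comes from the Uptime Institute numbers quoted earlier; every other input is an illustrative assumption.

    # "First things first" check, sketched: compare a planned blade roll-out
    # against the spare power and cooling the facilities team reports.
    # Only the 14 kW-per-rack figure comes from the article; the rest of the
    # inputs are assumptions for illustration.
    AVAILABLE_COOLING_KW = 120   # spare cooling capacity (assumed)
    AVAILABLE_POWER_KW = 150     # spare UPS/PDU capacity (assumed)
    KW_PER_BLADE_RACK = 14       # heavily loaded blade rack (Uptime Institute)
    PLANNED_RACKS = 10           # racks in the proposed deployment (assumed)

    demand_kw = PLANNED_RACKS * KW_PER_BLADE_RACK
    limit_kw = min(AVAILABLE_COOLING_KW, AVAILABLE_POWER_KW)

    if demand_kw <= limit_kw:
        print(f"OK: {demand_kw} kW planned against {limit_kw} kW available")
    else:
        max_racks = limit_kw // KW_PER_BLADE_RACK
        print(f"Over budget: {demand_kw} kW planned against {limit_kw} kW "
              f"available; cap the roll-out at {max_racks} racks or add capacity")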