Low-end server technology is racing ahead of the ability of many data centres to keep the increasingly dense and fast systems cool, a problem that's stopping some IT managers from using the new machines. Users may have to invest in new cooling, monitoring and power equipment, or even retrofit their data centres or build new ones, to accommodate the servers.

US data centre managers who gathered last week for a conference said server and processor vendors need to shift from building denser systems to making ones that use less power. They also need to set up development partnerships with vendors of cooling and power products, conference attendees said.

The server makers have "set their sights on making things faster and smaller, and that's all well and nice, but both those combinations make things hotter -- that's all there is to it," said Thomas Roberts, director of data centre operations at Trinity Health, which operates hospitals and outpatient facilities.

Roberts said he would like to install a row of blade servers at a disaster recovery site, but his 14-inch raised floors, which are stuffed with cabling, can't produce enough air pressure to cool the servers. "Not everyone can build a data centre to meet their cooling needs," he said.

The importance of cooling was evident to conference attendees, said Steve Helms, an IT specialist involved in data centre design at biotech firm Amgen. He noted that two of the biggest displays were for power and cooling equipment suppliers Liebert and American Power Conversion (APC).

The presence of those vendors and others focused on facility issues is "telling me that there is a very serious issue out there, and a lot of people are already experiencing it, and the rest of us are going to get it soon," Helms said. "Blade servers are probably going to be in everybody's data centre sooner or later."

Cooling "is the issue," said Bruce Edwards, president of CCG Facilities Integration, a data centre consultancy in Baltimore. Improper cooling can shorten a server's life or cause failure, he said.

"We are now presented with equipment that can use more power then ever before in a smaller footprint," he said. "And the amount of power delivered and required far exceeds our current techniques for cooling data centres."

Blade server racks come in a variety of sizes with different levels of power consumption. Large rack systems with 98 blades spread over seven chassis can consume as much as 24 kilowatts of power. That exceeds the capacity of the latest cooling systems, which can handle 20 kilowatts, manufacturers say.
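To put those figures in perspective, here is a rough back-of-the-envelope sketch using only the numbers quoted above; the per-blade and per-chassis averages are illustrative derivations, not vendor specifications.

```python
# Rough figures implied by the article's numbers. The per-blade and
# per-chassis values are simple averages for illustration only.

blades = 98              # blades in a large rack system
chassis = 7              # chassis holding those blades
rack_power_kw = 24.0     # worst-case rack draw cited by manufacturers
cooling_limit_kw = 20.0  # capacity of the latest cooling systems, per the article

watts_per_blade = rack_power_kw * 1000 / blades   # ~245 W per blade
kw_per_chassis = rack_power_kw / chassis          # ~3.4 kW per chassis
shortfall_kw = rack_power_kw - cooling_limit_kw   # ~4 kW the cooling cannot absorb

print(f"{watts_per_blade:.0f} W per blade, {kw_per_chassis:.1f} kW per chassis")
print(f"Cooling shortfall per fully loaded rack: {shortfall_kw:.0f} kW")
```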

Patrick Luma, data centre manager at Stanford University, said he would like to see server vendors "concentrating less on condensing the size of the machines, such as having gone to 1U, and concentrating more on reducing the heat within the machine." A 1U device is 1.75 inches high.

Vendors said they're working to make systems more efficient but face a catch-22, with some users complaining about heat and others calling for more speed, said Darrel Ward, manager of server technology and product planning at Dell. Dell has made design trade-offs for its 1U and blade servers, taking a moderate approach in order to control heat, he said.

Adding cooling technologies to the server itself will come at a cost, Ward added. "If you introduce something besides traditional fans, heat sinks and airflows to a commodity server, that $2,000 server is not going to be $2,000 anymore," he said.

Server Economics
Server vendors buy industry-standard components but try to distinguish themselves through efficient design, said Scott Tease, worldwide product marketing manager for IBM's eServer BladeCentre line. For example, IBM built in cooling technologies that direct airflow and reduced the number of heat-generating fan motors, Tease said.

The economics are complex. Very dense systems may require more cooling but take up less real estate; at the same time, data centre consolidations are increasing the size of IT facilities. Fast processors draw more power overall but deliver greater computational capability at lower wattage per MIPS, and that Moore's Law advantage is offset when users install more of these increasingly powerful commodity servers in their data centres.
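That offset can be made concrete with a small sketch; the server counts, MIPS ratings and wattages below are hypothetical figures chosen for illustration, not numbers drawn from the article.

```python
# Hypothetical figures: watts-per-MIPS falls with each server generation,
# yet total draw rises when the installed base grows faster than efficiency.

old = {"servers": 100, "mips_each": 1_000, "watts_each": 300}
new = {"servers": 300, "mips_each": 4_000, "watts_each": 450}

def watts_per_mips(gen):
    return gen["watts_each"] / gen["mips_each"]

def total_kw(gen):
    return gen["servers"] * gen["watts_each"] / 1000

print(f"Old generation: {watts_per_mips(old):.2f} W/MIPS, {total_kw(old):.0f} kW total")
print(f"New generation: {watts_per_mips(new):.2f} W/MIPS, {total_kw(new):.0f} kW total")
# Efficiency per unit of work improves (0.30 -> 0.11 W/MIPS),
# but the data centre's aggregate draw still rises (30 -> 135 kW).
```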