This is part one of a two-part article, the second of which will be published tomorrow.
When Tom Roberts oversaw the construction of a 9,000-square-foot data centre for Trinity Health, a group of 44 hospitals, he thought the infrastructure would last four or five years. A little more than three years later, he's looking at adding another 3,000 square feet and re-engineering some of the existing space to accommodate rapidly changing power and cooling needs.

Power struggle

As in many organisations, Trinity Health's data centre faces pressures from two directions. Business growth, together with a trend toward automating more processes as server prices continue to drop, has stoked demand for more servers. Roberts says that as those servers get smaller and more powerful, he can fit up to eight times as many units in the same space. But the power density of those servers has exploded.

"The equipment just keeps chewing up more and more watts per square foot," says Roberts, director of data centre services at Trinity. That has resulted in challenges meeting power-delivery and cooling needs and has forced some retrofitting.

"It's not just a build-out of space but of the electrical and the HVAC systems that need to cool these very dense pieces of equipment that we can now put in a single rack," Roberts says.

Power-related issues are already a top concern in the largest data centres, says Jerry Murphy, an analyst at Robert Frances Group. In a study his firm conducted in January, 41 per cent of the 50 Fortune 500 IT executives it surveyed identified power and cooling as problems in their data centres, he says.

Murphy also recently visited CIOs at six of the nation's largest financial services companies. "Every single one of them said their No. 1 problem was power," he says. While only the largest data centres experienced significant problems in 2005, Murphy expects more data centres to feel the pain this year as administrators replace older equipment with newer units that have higher power densities.

In large, multi-megawatt data centres, where annual power bills can top £500,000, more efficient designs can significantly cut costs. In many data centres, electricity now accounts for as much as half of operating expenses, says Peter Gross, CEO of EYP Mission Critical Facilities, a New York-based data centre design firm. Increased efficiency has another benefit: in new builds, more-efficient equipment also reduces capital costs by allowing the data centre to invest in less cooling capacity.
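For a sense of scale, here is a back-of-the-envelope sketch of that arithmetic. The tariff and the size of the efficiency gain are assumptions chosen purely for illustration, not figures from Gross or from the companies quoted in this article.

```python
# Back-of-the-envelope illustration only: the tariff and the 20 per cent
# efficiency gain are assumptions made for this sketch, not reported figures.

HOURS_PER_YEAR = 24 * 365            # 8,760 hours
TARIFF_GBP_PER_KWH = 0.06            # assumed electricity price

def annual_bill(average_load_kw: float) -> float:
    """Annual electricity cost for a facility drawing this average load around the clock."""
    return average_load_kw * HOURS_PER_YEAR * TARIFF_GBP_PER_KWH

baseline = annual_bill(1_000)        # a site averaging 1 MW of total draw
improved = annual_bill(1_000 * 0.8)  # assume a more efficient design trims total draw by 20%

print(f"Baseline annual bill:  £{baseline:,.0f}")          # about £525,600
print(f"After a 20% reduction: £{improved:,.0f}")          # about £420,480
print(f"Annual saving:         £{baseline - improved:,.0f}")
```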

Pain points

Trinity's data centre isn't enormous, but Roberts is already feeling the pain. His data centre houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300kW uninterruptible power supplies.

"We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data centre has surpassed 250 watts per square foot.

At Industrial Light & Magic's brand-new 13,500-square-foot data centre in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art facility has two-foot raised floors, 21 air handlers with more than 600 tons of cooling capacity, and the ability to support up to 200 watts per square foot.

Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built." Each rack of blade servers consumes between 18kW and 19kW when running at full tilt. The room's design specification called for six racks per row, but ILM is currently able to fill only two cabinets in each because it literally ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.
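The arithmetic behind those half-empty rows looks roughly like this. The per-rack draw, the 200-watt design figure and the plug counts are ILM's; the rest of the sketch is illustrative.

```python
# Illustrative arithmetic for ILM's blade racks. The per-rack draw, the room's
# 200 W/sq ft design spec and the plug counts are from the article.

RACK_KW = 18.5               # a fully loaded blade rack draws 18-19 kW
ROOM_W_PER_SQFT = 200        # the room's overall design figure

# Floor-area view: how much of the room's power budget a single rack soaks up.
floor_budget_sqft = RACK_KW * 1_000 / ROOM_W_PER_SQFT
print(f"One {RACK_KW} kW rack consumes the power budget of about "
      f"{floor_budget_sqft:.0f} sq ft of floor")

# Outlet view: the under-floor rails provide 4 plugs per cabinet; the racks need 5 to 7.
PLUGS_PROVIDED = 4
PLUGS_NEEDED_MIN, PLUGS_NEEDED_MAX = 5, 7
print(f"Each cabinet comes up {PLUGS_NEEDED_MIN - PLUGS_PROVIDED} to "
      f"{PLUGS_NEEDED_MAX - PLUGS_PROVIDED} plugs short, which is why capacity "
      f"has to be borrowed from neighbouring cabinets")
```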
