As vendors continue to pack more servers into a smaller footprint, keeping a lid on power requirements -- and keeping server racks cool -- has become a huge challenge. And the lowly AC power supply remains the toughest part of the problem to solve.
A typical power supply, which converts AC power into the various DC voltages required by individual server components, has an efficiency of just 65 to 85 per cent, vendors say. A single 1kW power supply may generate 300W of waste heat, and today's blade servers can consume more than 14kW per rack.
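The arithmetic is straightforward. As a rough sketch with assumed numbers, a supply delivering 1kW of DC output at 77 per cent efficiency, near the middle of the quoted range, dissipates roughly the 300W the vendors cite:

\[
P_{\text{waste}} = P_{\text{out}} \left( \frac{1}{\eta} - 1 \right) = 1000\,\mathrm{W} \times \left( \frac{1}{0.77} - 1 \right) \approx 300\,\mathrm{W}
\]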
"That's bad," says Scott Tease, product marketing manager for eServer BladeCenter at IBM. "One, I paid for that electricity, and two, I've released the heat into the environment and I have to pay to air-condition it."
To make matters worse, AC power-supply efficiency falls as utilisation drops. In servers with redundant power supplies, where the load is shared between units, best-case utilisation is below 50 per cent. As a result, power supplies in most servers tend to operate at the low end of the efficiency range, says Ken Baker, data centre infrastructure technologist at HP.
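To see why, take a hypothetical server with two redundant 1kW supplies sharing an 800W load. Even with the server working flat out, each supply runs at well under half its rated capacity:

\[
U = \frac{P_{\text{load}}}{N \times P_{\text{rated}}} = \frac{800\,\mathrm{W}}{2 \times 1000\,\mathrm{W}} = 40\%
\]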
Some data centre managers have responded by using DC-based power distribution systems, eliminating the need for AC power supplies for server racks. IBM and HP both offer servers that can accept bulk DC power from a centralised, telecommunications-grade, -48V DC power distribution unit (PDU) and then step it down to the voltages required at the server level.
Rackable Systems' products support both bulk DC power and an option that moves the AC/DC converter away from individual servers to the top of each rack, where its heat can be vented into the air-handling system. Rackable claims that its DC-powered servers reduce heat by up to 30 per cent. HP makes the more modest claim of a 15 per cent reduction, which can still add up across many racks of servers, Baker says.
Data393 Holdings has made the leap to DC-powered servers. The company, which operates a colocation centre in Colorado, USA, uses a DC power distribution system inherited from a previous tenant to power 140 servers from Rackable. The power plant includes rectifiers that convert incoming AC power to DC, feeding the servers and network equipment directly and charging a bank of uninterruptible power supply batteries.
Chris Leebelt, senior vice president at Data393, says the IT services provider chose DC-powered equipment because it needed to make the most of its available square footage and its ability to cool that space. While the power distribution system must still convert incoming power to DC, that conversion occurs outside the data centre.
DC-powered systems from Rackable cost about the same as traditional AC-powered servers while allowing more servers in each rack, according to Leebelt.
DC rectifiers also have a mean time between failures of 7 million hours -- 70 times longer than AC power supplies, says Geoffrey Noer, senior director of product marketing at Rackable.
"Some of our largest customers host almost exclusively in DC-related environments," says Baker. But he also points out that most are telecommunications companies and hosted service providers. "The number is very small in corporate data centres," he says.
The problem with DC
So why don't more enterprise data centres use DC PDUs?
Tease claims that the link between utilisation and efficiency is overstated, saying IBM's BladeCenter power supply designs are 90 per cent efficient. By comparison, the converters required to step down bulk DC power are 93 per cent efficient. "Unless the infrastructure is already in place, it just doesn't make sense," he says.
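In absolute terms the gap Tease describes is small. For an assumed 1kW load, the two conversion stages differ by only a few tens of watts of waste heat, and this back-of-envelope comparison ignores losses in the central rectifier that produces the -48V bus in the first place, which is the core of his argument:

\[
P_{\text{waste}}^{\mathrm{AC}} = 1000\,\mathrm{W} \left( \frac{1}{0.90} - 1 \right) \approx 111\,\mathrm{W}, \qquad P_{\text{waste}}^{\mathrm{DC}} = 1000\,\mathrm{W} \left( \frac{1}{0.93} - 1 \right) \approx 75\,\mathrm{W}
\]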
Baker says inertia and familiarity keep data centres on AC power: the standards for AC are well established and understood. "It takes specialised talent to manage [DC] correctly," he says.
And because -48V DC distribution delivers power at a much lower voltage than AC mains, it requires far higher currents, and therefore larger conductors. Neil Rasmussen, chief technical officer at APC, which makes UPSes and other data centre infrastructure, says this adds to infrastructure costs. "DC wiring at these power levels is too expensive and complex, requiring specialised contractors and design," he says.
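The conductor problem falls out of P = VI. With hypothetical figures, feeding a fully loaded 14kW rack at -48V DC draws several times the current of an assumed 208V single-phase AC feed (three-phase distribution would draw even less current per conductor):

\[
I_{\mathrm{DC}} = \frac{14{,}000\,\mathrm{W}}{48\,\mathrm{V}} \approx 292\,\mathrm{A}, \qquad I_{\mathrm{AC}} = \frac{14{,}000\,\mathrm{W}}{208\,\mathrm{V}} \approx 67\,\mathrm{A}
\]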
But Baker and Rackable's Noer say the costs overall are about the same.
Baker says the adoption of DC as an alternative power source could become a trend, particularly in new data centres where such infrastructure choices are still being made. "We have customers that have chosen native DC from the ground up," he says. But Baker adds that the lion's share of enterprise data centres will continue to run on AC power.
Meanwhile, IBM is focusing its power-saving efforts on areas such as the CPU, which accounts for 25 per cent of the power budget in a BladeCenter, Tease says. IBM offers a 2.8GHz Xeon DP processor that adds $200 (about £120) to the cost of a dual-processor blade but cuts power from 103W to 55W.
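A rough payback sketch, assuming the 103W-to-55W saving applies per processor and an illustrative electricity price of $0.10 per kWh: a dual-processor blade saves about 96W, or roughly $84 a year before cooling costs, so on those assumptions the $200 premium pays for itself in under three years on electricity alone:

\[
2 \times (103 - 55)\,\mathrm{W} = 96\,\mathrm{W}, \qquad 96\,\mathrm{W} \times 8760\,\mathrm{h/yr} \approx 841\,\mathrm{kWh/yr} \approx \$84\,\mathrm{per\ year}
\]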
Noer claims that, ultimately, the combination of low-voltage parts and DC power will have the biggest payoff: it can cut power requirements by half.
Rasmussen isn't convinced. "If you need to cut the load 15 per cent, just pull out 15 per cent of the servers and put them somewhere else," he says.
But for Data393, floor space is limited. DC power has enabled Leebelt to fill server racks that would otherwise run too hot for his air-handling systems. "[Vendors] don't tell you that you can't load a full rack of blades because the heat coming off the racks can be very significant," he says.
DC power by itself can't solve the problem of increasing power density in server racks. But the option has provided enough relief to convince Leebelt to migrate Data393's remaining 600 servers. "We're doing consolidation work to get out of AC hardware," he says.