We recently ran a feature arguing that using DC power within the data centre could cut power usage, reduce heat build-up and so increase reliability.
Some readers contacted the author, Computerworld's Robert L Mitchell, to say they agreed, while others said that switching to low-voltage DC would provide no power savings, and that the additional copper needed to carry the extra current would be prohibitively expensive.
So we asked Neil Rasmussen, chief technology officer at American Power Conversion (APC), which makes data centre power and other infrastructure products, for his informed view. Since Mitchell's original article was written from a US point of view, where mains voltage is of course 110V, we asked Rasmussen to provide a specifically European perspective.
Here's what he said:
DC distribution is widely used in telecommunications environments, such as local telephone switch offices, its use pre-dating the modern UPS system. As telecoms and data centre environments have converged, environments with mixed types of equipment are common. In many data centres, there is a small DC-powered telecoms room adjacent to the data centre, with DC-powered devices such as fibre interfaces.
The DC-powered telecoms room is primarily a legacy of IT equipment being absorbed into historic telecommunications facilities, as some telecoms companies adopted a DC standard and built entire data centres around a DC architecture. A number of devices, including routers and servers, can be ordered in both AC and DC versions.
Occasionally, some company in the industry will champion the DC approach to power distribution, with the claim that improved efficiency or reliability will result. In fact, APC also offered such solutions until recently. However, the advent of ultra-high density computing makes DC distribution impractical to the point of absurdity.
Here is the description of the problem, updated for European voltage and wire standards.
The size of a busbar for a DC power distribution system is controlled by two parameters: how much power loss the system can accept and how much voltage drop the system can accept.
If we assume a path length back to a power source of 25 metres, then the total round trip circuit length for a rack is 50 metres. Let's compare 230V distribution to 48V distribution for a 20kW rack.
230V AC, 3-phase: 6 sq.mm wire, 29A per phase, 25 metre length, 0.075 ohm per wire, 189W lost in the wiring, approximately one per cent voltage drop at the load, 4kg of copper wire per rack, wire bundle to the rack roughly 2cm in diameter.

48V DC: 417A, 25 metre length, 830 sq.mm wire for each path, 0.0005 ohm per wire, 189W lost in the wiring, 400kg of copper wire per rack.
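The figures above can be checked with a short script. The copper resistivity and density used here are standard handbook values; the small difference from the article's round 189W comes down to the exact resistivity assumed.

```python
RHO = 1.72e-8   # resistivity of copper, ohm-metres (handbook value)
DENSITY = 8960  # density of copper, kg per cubic metre (handbook value)
L = 25          # one-way path length to the power source, metres

def wire_stats(area_mm2, current_a, n_conductors):
    """Resistance per conductor, total I^2R loss, and copper weight
    for n current-carrying conductors of the given cross-section."""
    area = area_mm2 * 1e-6                      # mm^2 -> m^2
    r = RHO * L / area                          # one 25 m conductor
    loss = n_conductors * current_a**2 * r      # total wiring loss, watts
    weight = n_conductors * L * area * DENSITY  # copper weight, kg
    return r, loss, weight

# 230 V AC, balanced 3-phase: three 6 mm^2 conductors at 29 A each
print(wire_stats(6, 29, 3))    # ~0.072 ohm, ~180 W, ~4 kg

# 48 V DC: out-and-return pair of 830 mm^2 conductors at 417 A
print(wire_stats(830, 417, 2)) # ~0.0005 ohm, ~180 W, ~370 kg
```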
For the same electrical efficiency, the cross-sectional area (and therefore the weight) of the wire for the 48V system is roughly 100 times that for the 230V system. For a data centre of 100 racks, that is on the order of 40,000kg of copper, worth hundreds of thousands of euros (or pounds).
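The whole-facility figure follows directly from the per-rack weight. The copper price here is an assumption for illustration (prices fluctuate widely):

```python
racks = 100
copper_per_rack_kg = 400  # from the 48 V comparison above
eur_per_kg = 7            # assumed copper price; an illustrative figure only

total_kg = racks * copper_per_rack_kg
print(total_kg)                 # 40000 kg of copper
print(total_kg * eur_per_kg)    # 280000, i.e. hundreds of thousands of euros
```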
Also, there would be large and complex terminations where the distribution wiring meets the battery system, and all that copper is a substantial weight load on the data centre floor. Furthermore, if the power level were 40kW per rack rather than 20kW, the problem would get significantly worse.
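Why doubling the rack power makes things "significantly worse" rather than merely twice as bad: at a fixed wiring-loss budget, the required conductor area scales with the square of the current, so 40kW needs four times the copper of 20kW. A sketch under that fixed-loss assumption:

```python
RHO, L = 1.72e-8, 25  # copper resistivity (ohm-metres), one-way length (m)

def dc_area_mm2(power_w, loss_budget_w, volts=48):
    """Per-leg conductor area (mm^2) for an out-and-return DC feed,
    sized so that total I^2R loss equals the given budget."""
    i = power_w / volts
    return 2 * i**2 * RHO * L / loss_budget_w * 1e6

a20 = dc_area_mm2(20_000, 189)  # the article's 20 kW rack
a40 = dc_area_mm2(40_000, 189)  # 40 kW at the same absolute loss budget
print(a40 / a20)  # 4.0 -- copper quadruples when power doubles
```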
Note that this discussion centres on 48V DC as the distribution voltage, which is currently the only established standard for DC distribution. A higher voltage would alleviate some of these problems, but it opens up a whole new discussion about the safety of higher-voltage DC.
If someone were to actually build such a DC-powered data centre at a density of 10kW per rack or higher, they would probably try to use less wire by giving up efficiency. However, this defeats the whole claim that such data centres would have higher efficiency.
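The trade-off is easy to quantify: for a given current, I²R loss is inversely proportional to conductor area, so every kilogram of copper saved comes straight back as heat. A sketch using the 48V figures from the comparison above:

```python
RHO, L, I = 1.72e-8, 25, 417  # copper resistivity, path length, 48 V / 20 kW current

def loss_w(area_mm2):
    """Total I^2R loss for an out-and-return pair of the given cross-section."""
    return 2 * I**2 * RHO * L / (area_mm2 * 1e-6)

full = loss_w(830)   # the article's sizing: ~180 W, about 1% of the 20 kW load
tenth = loss_w(83)   # a tenth of the copper: ~1.8 kW, roughly 9% of the load
print(full, tenth)
```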
This is the key reason why DC distribution for data centres is impractical and very rarely used. DC distribution does persist in telco facilities, but there the power density is on the order of 1-2kW per rack, so the weight load of the DC wiring is much less.
Please note that many servers, such as blade servers, use DC power distribution internally within the rack: the actual distribution to the rack is AC, and the power is converted to DC within the rack. Such designs are sometimes called DC designs, which makes discussions of DC distribution confusing. The comments above do not apply to such DC-at-the-rack designs; the APC position that DC distribution is impractical applies only to power distribution to (and outside of) the rack. It is important to make this distinction.
Finally, for a user interested in improving data centre efficiency, there is far more energy to be saved by improving the cooling system design than the power system design. That said, a substantial opportunity remains in power systems: chiefly, right-sizing them and using UPS systems specifically designed for high efficiency at light loads.