When the University of Illinois' National Center for Supercomputing Applications set out to build a machine with more than 200,000 server cores, the key wasn't simply shelling out cash for newer, faster silicon chips. The trick was harnessing the power of a substance that comes right out of your kitchen sink: water.
Using water to cool servers isn't a new idea, but it is gaining new converts at a time when fears of global warming and rising energy costs are making datacentre operators and server vendors search for ways to increase efficiency.
To Rob Pennington, deputy director of the NCSA, water cooling offers one huge advantage: power density.
The NCSA's planned Blue Waters petascale computing machine will fit more than 200,000 cores in roughly twice the floor space occupied by a current NCSA machine with 9,600 cores, according to Pennington. "Water cooling makes it possible," Pennington says. "If we had to do air cooling, we'd be limited by how much air can be blown up through the floor."
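Those figures imply roughly a tenfold jump in core density. A quick back-of-the-envelope check, treating Abe's footprint as the unit of area and using 200,000 as the (lower-bound) Blue Waters core count:

```python
# Back-of-the-envelope core density from the figures in the article.
# Abe: 9,600 cores in one unit of floor space.
# Blue Waters: 200,000+ cores in about twice that space.
abe_cores = 9_600
blue_waters_cores = 200_000   # "more than 200,000" in the article
footprint_ratio = 2.0         # Blue Waters occupies ~2x Abe's footprint

density_gain = (blue_waters_cores / footprint_ratio) / abe_cores
print(f"Core density gain: ~{density_gain:.1f}x")  # ~10.4x
```

A tenfold increase in cores per square foot means a comparable increase in heat per square foot, which is exactly the regime where forced air runs out of headroom.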
Blue Waters will be operational in 2011 and will likely use servers based on IBM's future Power7 chips.
Water cooling is inherently more efficient than air conditioning, Pennington says. That efficiency is being exploited to greater effect with today's multicore processors and multisocket motherboards. When a motherboard had one socket a decade or so ago, the advantage of water cooling didn't mean as much as it does today, when you're typically trying to cool four sockets on the motherboard, he says.
NEC, using Intel Pentium processors, began selling a water-cooled server at the end of 2005. IBM is only now returning to water-cooled servers, having not used the technique since 1995. Big Blue abandoned water cooling after replacing its last bipolar mainframes with CMOS (complementary metal-oxide-semiconductor) technology, according to Ed Seminaro, chief system architect for IBM's Power Systems.
"We actually went from a product that used almost 200 kilowatts of power down to a product that could basically satisfy the same function with about 5,000 watts," Seminaro says. "That's why we didn't need water cooling anymore. There was far less power required and far less heat density."
Times have changed. Last month, IBM added what it calls a hydro-cluster water cooling system to its System p5 575 supercomputer. As the number of transistors on a chip increased over the past decade, IBM wasn't always able to keep power usage steady. So it turned to water cooling with an innovative design that brings water almost right up to the chip.
Why is water so efficient? Because heat from servers eventually gets transferred to water anyway, even in datacentres cooled by big chiller air conditioning systems, says Jud Cooley, senior director of engineering for Sun Microsystems' water-cooled product, the Modular Datacenter. With computer room air conditioning systems, chiller units are placed by the racks; they transfer heat from the hot exhaust air into liquid, which is then pumped outside the building, Cooley says.
"Every datacentre does move water. We get that water closer to the point where you're actually generating heat," Cooley says. Sun isn't bringing water into the servers just yet. "What comes along with it is the need to bring water into every server, and all the plumbing issues," Cooley says. "Sun does not have a product in this space right now. But every vendor is looking into this."
Rather than put water inside the servers, Sun placed the water cooling technology in the Modular Datacenter, which is essentially a computer room in a large box that has been generally available since the end of January. Standard servers are placed inside the box, bringing them closer to water and reducing the amount of hot air that needs to be moved around a datacentre.
IBM went Sun one better with the System p5 575, which uses the Power6 chip. A cabinet holding 14 servers pumps cold water through pipes to a small copper plate that sits directly on top of each chip, Seminaro explains. The cabinet contains 7.2 gallons of purified water, which is endlessly recirculated, remaining in the cabinet for the life of the product. A connection to the building's plumbing system is needed only so that heat can be transferred from the product into the customer's water pipes.
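The capacity of such a loop follows from the sensible-heat formula Q = ṁ·c·ΔT: the heat carried away equals mass flow rate times the specific heat of water times the inlet-to-outlet temperature rise. The flow rate and temperature rise below are assumed, illustrative values, not IBM's specifications:

```python
# Sensible-heat formula: Q = m_dot * c_p * delta_T
# The flow rate and temperature rise are hypothetical, for
# illustration only; they are not the p5 575's actual specs.
cp_water = 4186.0   # J/(kg*K), specific heat of water
flow_lpm = 10.0     # litres per minute, assumed loop flow rate
delta_t = 10.0      # K rise from inlet to outlet, assumed

m_dot = flow_lpm / 60.0          # kg/s (1 litre of water ~ 1 kg)
q_watts = m_dot * cp_water * delta_t
print(f"Heat carried: ~{q_watts / 1000:.1f} kW")  # ~7.0 kW
```

Even this modest assumed flow removes kilowatts of heat per loop, which is why a sealed 7.2-gallon charge can serve a whole 14-server cabinet.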
Naturally, customers may worry about a leak inside the system ruining their expensive processors. IBM uses a corrosion-resistant water distribution system to minimise that risk, and water is kept at a temperature that causes no condensation, Seminaro says. Leaks are possible, he acknowledges. But IBM is confident enough that it plans to expand water cooling to more servers.
"We're evaluating it now," Seminaro says. "We will definitely put it into more of our platforms. We started here because in the world of technical computing there is a real desire for a tremendous amount of compute capacity in a given location."
The water-cooled System p5 575 has 448 processors and is capable of performing trillions of operations per second. Water is about 4,000 times more efficient than air at removing heat, IBM notes in a video on its website. Actual energy savings are more modest: the number of air conditioning units can be reduced by 80 percent, and energy consumption for datacentre cooling by 40 percent, IBM says. Big Blue says its scientists are working toward "direct on chip" water-cooled systems that will be even more efficient by bringing water all the way to the hottest parts of a computer.
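The "about 4,000 times" figure is roughly what textbook physical constants give: per unit volume, water absorbs on the order of 3,500 times more heat per degree than air. A quick check with standard room-temperature values:

```python
# Volumetric heat capacity: heat absorbed per cubic metre of
# coolant per kelvin. Standard room-temperature textbook values.
cp_water, rho_water = 4186.0, 1000.0   # J/(kg*K), kg/m^3
cp_air, rho_air = 1005.0, 1.2          # J/(kg*K), kg/m^3

ratio = (cp_water * rho_water) / (cp_air * rho_air)
print(f"Water holds ~{ratio:.0f}x more heat per unit volume than air")
# ~3471x, the same order of magnitude as IBM's figure
```

In other words, moving a litre of water does the thermal work of moving thousands of litres of air, which is why piping water to the rack beats blowing air across the room.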
While IBM recently rediscovered water cooling, HP has been researching water cooling since 1999 and began offering an HP-branded system about four years ago, says Wade Vinson, HP's power and cooling architect.
Like Sun, HP is not bringing water directly into the servers. HP's Modular Cooling System is a water-cooled rack that gets water from one of three sources: a direct connection to the building's chilled water system, a dedicated chilled water system, or a water-to-water heat exchanger unit connected to a water system.
Vinson says the "jury's still out" on whether the complexity of having water inside the server is worth it. Air cooling inside servers is easy and less risky, he says, and customers can still gain 30-percent energy reductions by using the HP water-cooled rack. Back at the NCSA, Pennington is eagerly anticipating the arrival of Blue Waters in a 95,000-square-foot petascale computing facility on the University of Illinois campus.
The NCSA, which provides computing resources to scientists, engineers and industrial users, currently runs a 9,600-core machine known as "Abe" that is based on Dell blade servers. It requires three floors: the bottom floor for air handling units, the second floor for servers, and the third to handle return air flow, Pennington says.
With the 200,000-core water-cooled system, there will be a mechanical room under the server floor, with the third floor left over for office space. The datacentre will connect to the building's plumbing infrastructure, and from there to a large chilled water plant maintained by the university.
"We spent a significant amount of time working with people on campus and with companies, understanding how to make a water cooling room efficient," Pennington says. "I wouldn't say it's simpler [than air cooling]. It's just a different set of engineering challenges."
Pennington expects to use servers based on IBM's in-development Power7 chip, successor to the Power6 microprocessor for high-end Unix servers that IBM unveiled a year ago. Water cooling will be used inside the servers, Pennington notes, but the NCSA project involves considerably more work to optimise its efficiency. "We're providing water to the racks. IBM is doing all the other plumbing within the racks," he says. Including staff, the machine room and the computers, Blue Waters will cost US$208 million.
The NCSA isn't committed to using only IBM servers; if other suitable water-cooled machines come along, the organisation will buy them, Pennington says. What has been clear to Pennington for several years is that, for the foreseeable future, water cooling is the only viable technology that can deliver the power density the NCSA needs.
After expanding the current machine room eight years ago with dense air-cooled systems, "we looked at what was coming in the next decade," Pennington says. "It was clear to us that water cooling was going to have to be a significant technology for us to think about."