The San Diego Supercomputer Centre's (SDSC) $32 million data centre expansion, slated for completion next July, is designed to be energy efficient from the ground up.
The 80,000-square-foot building will double the size of the SDSC's facilities; in addition to 5,000 square feet of new data centre space, the expansion will house classrooms, offices, meeting rooms and a 250-seat auditorium.
Under development since 2003, the building has an energy-efficient displacement ventilation system that uses the natural buoyancy of warm air to provide improved ventilation and comfort; exterior shade devices, such as awnings, to control temperatures by blocking the sun; and natural ventilation (the windows in the building will open) to save energy.
The SDSC also is carefully selecting the IT equipment that will populate the data centre to help lower overall energy consumption and save on operational costs.
"To marry the energy efficiency of [the building] with the IT systems and understand what impact they are putting upon each machine room" will be crucial to the data centre expansion's success, says Gerry White, director of engineering services with the design and construction office at the University of California at San Diego (UCSD), which is home to the supercomputer centre.
The decision to build an energy-efficient data centre came down to a matter of need, says Dallas Thornton, the SDSC's IT director. Funded by the National Science Foundation and associated with the University of California network, the SDSC provides facilities for academic research on such data-intensive topics as earthquake simulations and astrophysics. It offers users more than 36 teraflops of computing resources, as well as two petabytes of disk and 25 petabytes of archival tape capacity on-site.
While the high-performance computers and related equipment provided by the SDSC for this research are augmented by computers that individual projects supply, all require a significant amount of power and cooling, Thornton says. The present data centre draws a constant load of about 2 megawatts -- roughly enough to power 2,000 residential homes. "Being on the cutting edge of technology, we've seen a lot of [energy] load before a lot of other folks, so we've had to do something [about energy consumption] just to stay in business," he says.
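The 2-megawatt figure can be sanity-checked with simple arithmetic. The sketch below assumes an average residential draw of about 1 kW per home and an illustrative electricity rate of $0.10/kWh; neither assumption comes from the SDSC, and real rates and household loads vary widely.

```python
# Back-of-the-envelope check of the quoted figures: a 2 MW constant load
# compared against residential homes, plus an illustrative annual energy cost.
# AVG_HOME_KW and RATE_PER_KWH are assumptions, not figures from the SDSC.

LOAD_MW = 2.0          # constant data centre load, megawatts
AVG_HOME_KW = 1.0      # assumed average residential draw, kilowatts
RATE_PER_KWH = 0.10    # assumed electricity rate, USD per kWh

homes_equivalent = (LOAD_MW * 1000) / AVG_HOME_KW     # 2 MW -> kW -> homes
annual_kwh = LOAD_MW * 1000 * 24 * 365                # constant load over a year
annual_cost = annual_kwh * RATE_PER_KWH

print(f"Equivalent homes: {homes_equivalent:.0f}")          # 2000
print(f"Annual energy: {annual_kwh:,.0f} kWh")              # 17,520,000 kWh
print(f"Annual cost at assumed rate: ${annual_cost:,.0f}")  # $1,752,000
```

At the assumed rate, a constant 2 MW load works out to well over $1 million a year in electricity alone, which is consistent with Thornton's point that power is the dominant cost of running the data centre.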
An easy sell
Thanks to the centre's intense power requirements and the mandates specified in Title 24 of the California Code of Regulations, which sets the standard for energy-efficient new-building construction, Thornton and his colleagues in engineering and facilities didn't face much opposition when they tried to persuade upper management that the data centre expansion should be energy efficient. And the fact that the new data centre will save significantly on operating costs made the convincing even easier.
"Saving money is huge -- it all comes back to the cost of power, so for us [saving money] with a green data centre really sits well" with upper management, Thornton says, although the logic behind energy efficiency should be clear to any executive team. "The No. 1 cost in running a data centre is power, so if you can create ways to reduce that footprint, it should be an easy sell."
Because much of the data centre's cost savings will come from the design itself, Thornton says he can't predict what kind of operational cost savings the centre will gain by buying IT equipment that consumes less power than traditional computers do. Some savings are already clear, however; the centre estimates it will save 40 percent on operating the new building vs. a traditional building because of its plan to cogenerate power locally by using waste steam to power steam chillers that will help cool the data centre, he says.