If you doubted that the principles of data centre cooling can be adhered to no matter how big the centre or how much power it draws, take a look at IBM's MareNostrum supercomputer. It lives at the Barcelona Supercomputing Center in sunny Barcelona, part of the Universitat Politècnica de Catalunya, and first booted in September 2004.

It's overseen by IBM's eScience chief technologist, Dr Juan Jose Porta. Based at IBM's labs in Boeblingen, Germany, he spends half his time at the supercomputing centre. "I prefer it, the weather's much better here," he quips.

Applications

Projects that MareNostrum is working on include a climate-change study for the Spanish government, which faces desertification in the south; bio-chemical applications such as decoding virus DNA to accelerate vaccination programmes; and seismic research.

It's also likely to be used in the event of one of the biggest hazards in this warm climate: fire. To map where a big fire will move next, you need to be able to model air-flows very quickly. "It's no use if the fire outruns your computer," says Porta. Other applications span the life and earth sciences.
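
To see why speed matters, consider even a crude fire-spread model. The sketch below is a toy wind-biased cellular automaton in Python -- emphatically not the model BSC runs, with grid size, wind direction and spread probabilities all invented for illustration -- yet it already shows how the work multiplies: every cell must be updated every time-step, and a useful forecast needs fine grids, many steps and a coupled air-flow model on top.

    import random

    # Toy fire-spread cellular automaton (illustrative only).
    # Cell states: 0 = unburnt, 1 = burning, 2 = burnt out.
    N = 50
    grid = [[0] * N for _ in range(N)]
    grid[N // 2][N // 2] = 1      # ignition point in the centre
    WIND = (0, 1)                 # hypothetical wind: eastward spread is likelier

    def step(grid):
        new = [row[:] for row in grid]
        for y in range(N):
            for x in range(N):
                if grid[y][x] != 1:
                    continue
                new[y][x] = 2     # a burning cell burns out after one step
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < N and 0 <= nx < N and grid[ny][nx] == 0:
                        # downwind neighbours ignite more readily
                        p = 0.6 if (dy, dx) == WIND else 0.2
                        if random.random() < p:
                            new[ny][nx] = 1
        return new

    for _ in range(30):           # simulate 30 time-steps
        grid = step(grid)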

Of blades and interconnects

The result of a joint project between the Spanish government and IBM, the 160 square metre system is notable not just for the amount of power it draws -- 500kW -- but for the computing power it packs. Its 4,564 clustered processors churn out 40 teraFLOPS. Porta told Techworld that the machine has about the same compute power as the famous Japanese supercomputer but occupies only one-third of the space.
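
Those headline numbers imply some striking back-of-the-envelope ratios, computed here purely from the figures quoted above:

\[
\frac{40\ \text{teraFLOPS}}{4{,}564\ \text{processors}} \approx 8.8\ \text{gigaFLOPS per processor},
\qquad
\frac{40\ \text{teraFLOPS}}{500\ \text{kW}} = 80\ \text{megaFLOPS per watt},
\]
\[
\frac{500\ \text{kW}}{160\ \text{m}^2} \approx 3.1\ \text{kW per square metre}.
\]

That last figure -- more than 3kW for every square metre of floor space -- is what makes the cooling arrangements described below worth dwelling on.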

The computer's blades pull data from 140TB of SATA drive-based, RAID-configured storage, and are interconnected using Fibre Channel. It's the densest optical interconnect system in the world, according to Porta, who adds that IBM considered InfiniBand for the interconnections, but its finger-thick cables "would have created big cooling problems".

Gigabit Ethernet connects the computer to the outside world. It runs SuSE Linux 9.2 and is currently crunching mainly scientific numbers.

Blowing hot and cold

To the untutored eye, it looks pretty much like a conventional data centre. The serried racks -- packed with off-the-shelf, Power processor-based blade servers and storage, and weighing a total of 40,000kg -- are housed in an all-enveloping, airtight, cooled glass cage. They sit on a one-metre-high suspended floor; the unusually deep underfloor space has a purpose, as we shall see.

Porta is proud of the fact that, despite its high power density, the Barcelona setup is highly thermally efficient. Housed in a converted church, the racks are laid out, from a cooling perspective, in an alternating hot aisle-cold aisle configuration. Cool air enters the rack space at 14 degrees Centigrade and exits at around 38 to 40 degrees. It then moves into one of a number of cooling aggregators -- effectively industrial-strength cooling systems -- whose task is to pump the waste air to a series of compressors. Working much like a standard fridge, these extract the heat from the air, readying it for recirculation back into the supercomputer. Sensors in each rack ensure that, if the system starts to overheat, the fans first ramp up to top speed; should one fail, redundant fans are ready to take up the load.
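
Those inlet and outlet temperatures also let you estimate the airflow the cooling plant must sustain. A rough figure, assuming the full 500kW load and standard air properties (specific heat of about 1005 J/kg·K, density of about 1.2 kg/m³) -- neither of which the centre has confirmed -- using the 25-degree rise from 14 to around 39 degrees:

\[
\dot{m} = \frac{P}{c_p\,\Delta T} = \frac{500{,}000\ \text{W}}{1005\ \text{J/(kg·K)} \times 25\ \text{K}} \approx 20\ \text{kg/s},
\]

or roughly 17 cubic metres of air moved through the machine every second.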

The church's high roof is ideal, says Porta, because it allows air to circulate properly. However, the attention to detail does not extend to providing toilet facilities, because -- apparently -- installing a loo in a church building is unlawful.

The computer vents its waste heat from a huge, floor-level grid next to the building. Stand on it in the hot summer sun and the blast is both hot and strong enough to be uncomfortable. Aware of the poor environmental image this paints, especially given the environmental work the supercomputer performs, Porta says the hope is that the waste heat will be pumped underground to feed a geothermal heat pump, although the engineering has yet to be designed.

"We're working on it," says Porta. "We didn't have time to build it into the original design -- from concept to first boot, we got the computer up and running in 12 months."

Industry-standard supercomputing

Porta says that the big advantage of MareNostrum is that it is built from industry-standard components. With no proprietary parts, the advantages are manifold.

First and foremost, everything is much cheaper. Porta was unable to say how much the machine cost, though he did say that it was not cheap. It took a core team of 25 people to build it, with other specialists called in as required. Secondly, the computer can be upgraded easily which, given the pace at which components become obsolete, is a big plus. And since upgrades are likely to happen fairly often, a full metre of underfloor space -- double the normal height -- was left to allow for frequent re-configuration of cabling and cooling systems.

It was the right design decision from the outset, reckons Porta. It also means that the system is easily reproducible: making another simply means buying the same bits, which wouldn't have been possible had IBM gone down the proprietary route.

One downside to the industry-standard approach is that some components are not designed for the workloads to which they're being put. Asked if he'd faced any emergencies since first boot, Porta says the worst moment was when two SATA drives in a single RAID set failed within five minutes of each other. "We learned from that," he says.
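
A quick calculation shows why two failures that close together raised eyebrows. The Python sketch below estimates the odds under an independence assumption; the drive count and MTBF are illustrative guesses, not MareNostrum's actual specifications:

    # Probability that a second, independent drive fails within five
    # minutes of the first, somewhere else in a large array.
    # All figures below are illustrative assumptions.
    ARRAY_DRIVES = 400        # hypothetical drive count behind 140TB
    MTBF_HOURS = 500_000      # a typical quoted SATA MTBF of the era
    WINDOW_HOURS = 5 / 60     # the five-minute window Porta describes

    rate = 1 / MTBF_HOURS     # per-drive failure rate, assuming
                              # exponentially distributed failures
    p_second = 1 - (1 - rate * WINDOW_HOURS) ** (ARRAY_DRIVES - 1)
    print(f"P(second failure within 5 min) ~ {p_second:.1e}")  # ~6.6e-05

At odds of well under one in ten thousand, near-simultaneous failures like that usually point to a correlated cause -- a shared manufacturing batch, vibration, power -- rather than plain bad luck, which is presumably the lesson that was learned.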

For the future, Porta is looking forward to another, similar system being built in Madrid. It seems odds-on that he will enjoy the weather there, too.