Engineers at CERN have completed a 100-site grid, designed to support experimental work on the Large Hadron Collider. The grid, which spans 31 countries, is claimed to be the world's largest international scientific grid.

Inside the collider, proton beams travelling in opposite directions will be accelerated to near the speed of light and steered into each other using powerful magnets. Scientists hope to analyse data from the collisions to uncover new elementary particles, solve riddles such as why elementary particles have mass, and get closer to understanding how the universe works.

The proton collisions will produce an estimated 15 petabytes of data each year. The grid's role is to link together a vast network of computing and storage systems, giving scientists access to the data and to processing power when they need it.
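To put that figure in perspective, here is a back-of-the-envelope calculation of the sustained rate such a volume implies, assuming (unrealistically) that the data flows evenly through the year:

```python
# Rough scale check: 15 petabytes per year expressed as a sustained rate.
# Assumes an even flow across the year, which real accelerator runs won't have.
bytes_per_year = 15e15
seconds_per_year = 365 * 24 * 3600
print(f"{bytes_per_year / seconds_per_year / 1e6:.0f} MB/s sustained")  # ~476 MB/s
```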

The grid sites are mostly universities and research labs, as far afield as Japan and Canada, along with two HP data centres. Between them the sites contribute more than 10,000 processors and hundreds of millions of gigabytes of tape and disk storage.

For all the talk about grids from big IT vendors, virtually no suitable commercial tools were available to build the grid's infrastructure, according to project leader Les Robertson. Much of the data will be stored in Oracle databases, and a few of the sites use commercial storage systems, but the hardest part - building the middleware to operate the grid - was left largely to Robertson and his peers.

"It's surprised me a bit that there haven't been more commercial tools available to us. What we're building is not very specialized; we're just creating a virtual clustered system, but on a large scale and with a very large amount of data," he said.

Instead, CERN based its grid on the Globus Toolkit from the Globus Alliance, adding scheduling software from the University of Wisconsin's Condor project and tools developed in Italy under the European Union's DataGrid project. "This stuff comes from a lot of different places, it's very much a component-based approach," Robertson said.
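Condor's contribution to that stack is job scheduling. As a flavour of what schedulers in that mould consume, the sketch below generates a job description using the classic Condor submit-description keywords; the executable and file names are invented for illustration, and none of this is taken from CERN's actual middleware:

```python
# A hedged sketch of a Condor-style job submit description. The keywords
# (universe, executable, queue, etc.) are the classic Condor vocabulary;
# everything substituted into the template is hypothetical.

SUBMIT_TEMPLATE = """\
universe   = vanilla
executable = {exe}
arguments  = {args}
output     = {tag}.out
error      = {tag}.err
log        = {tag}.log
queue
"""

def submit_description(exe: str, args: str, tag: str) -> str:
    """Fill in a submit description for one job."""
    return SUBMIT_TEMPLATE.format(exe=exe, args=args, tag=tag)

print(submit_description("reconstruct", "events-0042.raw", "job0042"))
```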

The middleware serves two main functions. One is to make the grid look to users and applications, as far as possible, like a single large computer. "The other side is how you operate it - there's a lot of work being done on monitoring software that lets you see what's actually happening, how things are behaving. This is very early days for the operations side of it," Robertson said.
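The monitoring side can be pictured with a toy example. The sketch below classifies sites from per-site job counters and flags those with an abnormal failure rate; the record format, site names and 10 per cent threshold are all invented, not CERN's actual monitoring schema:

```python
# Illustrative only: classify grid sites from hypothetical job counters.
def site_health(records: dict[str, dict]) -> dict[str, str]:
    """Mark a site 'degraded' if more than 10% of its recent jobs failed."""
    report = {}
    for site, r in records.items():
        total = r["jobs_done"] + r["jobs_failed"]
        failure_rate = r["jobs_failed"] / total if total else 0.0
        report[site] = "degraded" if failure_rate > 0.10 else "ok"
    return report

print(site_health({
    "site_a": {"jobs_done": 980, "jobs_failed": 20},  # 2% failures   -> ok
    "site_b": {"jobs_done": 400, "jobs_failed": 90},  # ~18% failures -> degraded
}))
```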

One reason there are few commercial tools that are useful to CERN may be that what the big vendors are peddling as grid computing is not really grid computing at all - at least, not the way CERN defines it.

"For us, grid computing is a way of interconnecting computing capacity across multiple sites that lets everyone get access to the capacity when they need it, and where you really do send your work to where the data is and where you can move the data from one site to another," Robertson said.

The commercial offerings have so far been geared more toward building grids within enterprises, he said, for which clusters may be a more accurate term.

"On the commercial side there's more interest in Web services, and even there we thought there'd be some basic things coming out that we could use, but so far there are not," he said.