A US Army supercomputing centre whose legacy dates back to the Electronic Numerical Integrator and Computer (ENIAC), launched in 1946, is moving to Linux-based clusters. The move will more than double its computing capability.
The Army Research Laboratory Major Shared Resource Center (MSRC) is buying four Linux Networx Advanced Technology Clusters. The largest has 1,122 nodes, each built from two dual-core Intel Xeon chips, for a total of 4,488 processing cores; a second system has 842 nodes. In total, the purchase will increase the MSRC's computing capability from 36 trillion floating-point operations per second (TFLOPS) to more than 80 TFLOPS, Army officials said.
The decision to move to commodity clusters was not made quickly, said Charles J. Nietubicz, director of the MSRC.
The lab held a symposium in 2003 to explore the issue and began running a small, 256-processor cluster system. "We saw that cluster computing was this new kid on the block and was interesting," said Nietubicz. But the centre wasn't about to start scrapping its other systems, made by Silicon Graphics, Sun and IBM, he said.
The MSRC isn't disclosing the purchase price, but IDC analyst Earl Joseph said the average cost for a cluster works out to about $2,000 per processor, compared with $12,000 per processor for a RISC-based system.
Nietubicz said other vendors will need to improve their systems' performance or "reduce the price to provide equivalent performance."
Linux Networx builds systems using both AMD and Intel chips. The MSRC sale is the vendor's largest supercomputing order ever.
Nietubicz said he was convinced clusters could work because the MSRC had been able to get computational codes used in fluid dynamics, structural mechanics and other disciplines to scale across multiple processors, mostly through code built on the Message Passing Interface (MPI), a standard for writing parallel applications. Clusters accounted for about half of the total $9.1 billion in high-performance computing sales last year, according to IDC.
A major consideration for moving to clusters is whether the high-performance software can scale to multiple processors. Applications written in MPI can, but Joseph said companies that rely on off-the-shelf software usually find the switch difficult because commercial applications don't use MPI. Government labs and universities, which own their own code, can usually invest the time to convert it to MPI, he said. Nietubicz doesn't see any major limitations to clusters, and while not all code scales well on them, he said similar problems arose when the centre moved from vector systems to shared-memory systems.
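To illustrate the MPI pattern the article describes, here is a minimal sketch of how a computation is decomposed across processors and recombined. The workload (a simple partial sum) and the block decomposition are illustrative stand-ins, not the lab's actual fluid-dynamics or structural-mechanics codes; only the MPI calls themselves are standard.

```c
/* Minimal MPI sketch: each rank computes a partial result over its
 * slice of a global iteration space, then MPI_Reduce combines the
 * slices on rank 0. This single-program, multiple-data pattern is
 * how codes are typically scaled across cluster nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks    */

    /* Block-decompose a global iteration space among the ranks. */
    const long N = 1000000;
    long chunk = N / size;
    long lo = rank * chunk;
    long hi = (rank == size - 1) ? N : lo + chunk;

    double local = 0.0;
    for (long i = lo; i < hi; i++)
        local += 1.0 / (1.0 + (double)i);  /* stand-in for real physics */

    /* Sum every rank's partial result onto rank 0. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (computed on %d ranks)\n", global, size);

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc` and launched with, say, `mpirun -np 4`, the same binary runs on every node, and adding nodes shrinks each rank's share of the loop, which is the scaling behaviour the MSRC verified before committing to clusters.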