How did high-performance computing snag such stunning buzz? After all, until recently, HPC was a niche technology with limited applicability and even fewer practitioners. Yet mainstream IT folks have been salivating over it for years.
My suspicion is that HPC permeated the public consciousness thanks to SETI@home, a grid application that borrowed cycles from idle computers across the Net and put them to work looking for signs of intelligent life in the universe. Started in 1999, SETI@home had an almost irresistible appeal: the idea that your computer might find ET just by crunching radio telescope data in its spare time. Back then I enrolled my PC in the grid, and so did many of the other geeks I knew.
Indeed, SETI@home and similar grids -- essentially HPC projects that harvest cycles from heterogeneous machines across geographic boundaries -- still exist, especially in academia and government research agencies. But as mainstream businesses begin to see practical uses for HPC, they are increasingly relying on dedicated clusters of dual-core processors, not on grids, to get the job done.
"Grids were a romantic notion; many people saw them as the future," says Leon Erlanger, author of our cover story, "High-Performance Computing: Supercharging the Enterprise." "But there are all kinds of issues -- security, quality of service, availability, politics among them. Today grids are primarily a demonstration of what you can do, as opposed to a blueprint for standard enterprise HPC."
And make no mistake about it: HPC is moving aggressively into the enterprise. As it turns out, businesses have a real need for compute-intensive operations. Plus the economics are favourable, given the popularity of cluster-capable operating systems such as Linux and Windows Server 2003, and the proliferation of commodity dual-core chips.
The forecast: Mainstream implementations are about to catch up with the buzz. Now if we could just locate ET...