Corporate IT managers who never seem to have enough CPU power, disk space, bandwidth or funding might take comfort from US climate scientists. Computerworld's Gary H. Anthes recently talked with two of them and learned that even having access to the world's most powerful information systems is not enough.

Patrick Heimbach is a research scientist in physical oceanography at MIT, and James Hack heads up climate modelling at the National Center for Atmospheric Research. Both scientists use their own organisations' computer systems, as well as those at supercomputer centres around the US.

Q: What are you working on at MIT?

A: Heimbach: We are trying to see if we can simulate, if we can understand, what the ocean has been doing over the last couple of decades. Are we heading toward a warmer world? Is [warming due to] internal variability of the [oceanic and atmospheric] system, or is there something we are doing to the system?

Q: Do you have the computational power to do that?

A: Heimbach: What we ultimately would like to run we can't currently fit on any computer. We would need on the order of 20,000 processors, and probably two orders of magnitude faster processors. Each supercomputer centre allocates a certain amount of computing time to a specific group. So we have to scale down the problem we are addressing to fit that specific machine.

Q: So it seems you must beg, borrow and steal computer resources for this work.

A: Heimbach: We have to find the cycles where we can find them. But even for the machines that are available, if we really wanted to go to the actual [spatial] resolutions that we need, we probably would not be able to fit those problems on those machines. Give us any machine, and we can immediately fill it with an interesting problem, and we'll still have the feeling we are limited.

Hack: Climate and weather applications... push high-performance computer technology. A decade ago, global climate applications benefited from the extraordinary memory bandwidth of proprietary high-performance architectures, like the parallel vector architectures from Cray and NEC. As scientific computing migrated toward the commodity platforms, interconnect technology, both in terms of bandwidth and latency, became the limiting factor on application performance and continues to be a performance bottleneck.

Q: Is the Internet adequate for connecting you to the supercomputer centres you use around the US?

A: Heimbach: Transferring several terabytes of data from NASA Ames [Research Center] to MIT simply cannot be done in a reasonable time. As of a year ago, we were limited by the 100Mbit/sec. bandwidth of the network that connects our department to the outside world. The best sustained rates that could be achieved were on the order of 55Mbit/sec. That works out to a transfer time of 1.7 days per 1TB of data.

We now have a better connection to the high-speed Internet2 Abilene network, with its 10Gbit/sec. cross-country backbone. The bottom line is that we still need much higher bandwidth, less network congestion and smart transfer protocols, such as the large-file transfer protocol [bbFTP], that minimise CPU load.
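The 1.7-days-per-terabyte figure Heimbach cites follows from simple arithmetic. A short sketch (assuming 1TB = 10^12 bytes and a fully utilised link):

```python
# Back-of-the-envelope check of the transfer time quoted above.
# Assumption: 1 TB = 10**12 bytes, link runs at a steady sustained rate.

def transfer_days(terabytes: float, rate_mbit_per_sec: float) -> float:
    """Days needed to move `terabytes` of data at a sustained rate in Mbit/sec."""
    bits = terabytes * 10**12 * 8               # total bits to move
    seconds = bits / (rate_mbit_per_sec * 10**6)
    return seconds / 86_400                     # 86,400 seconds per day

# 1TB at the 55Mbit/sec. sustained rate Heimbach mentions:
print(round(transfer_days(1, 55), 1))           # -> 1.7
```

At the full 100Mbit/sec. link speed the same transfer would still take close to a day, which is why both scientists look to faster backbones and, failing that, shipped tapes.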

Hack: The so-called sneakernet continues to provide the best bandwidth for moving large data sets between computing centres -- shipping data on tapes or disk via overnight services. We are engaged in some emerging computational projects that will generate hundreds of terabytes per experiment. Moving that data is a significant challenge. Storing and accessing that data for analysis is a comparably challenging technical task.

Q: How adequate is supercomputer capacity in the US for scientific research?

A: Hack: One could argue that there will never be enough supercomputing capacity. In [a] sense, scientific progress is paced by the availability of high-performance computing cycles. And the problem becomes more acute as the need to address nonlinear scientific problems in other disciplines, like materials science, computational chemistry and computational biology, continues to grow.

Q: There remains some controversy about global warming. Could better climate models and/or better computer technology help resolve that?

A: Hack: For many scientists, it's not a question of whether the planet will warm, but more a question of how much the planet will warm and what form the regional distribution of that warming will take. Answering... these questions will require additional levels of sophistication in global climate models, such as improved resolution and extending existing modelling frameworks to include fully interactive chemical and biogeochemical processes. These kinds of extensions are... extremely expensive in computational terms. We will require a minimum of a twenty-five-fold improvement in computational technology to enable the next-generation model [in] three to five years.

Heimbach: You need to run coupled ocean-atmosphere simulations over 10 to 100 years. We think that these models, and the underlying model errors, are still such that we need to do more basic research to understand the errors better. That's what we are trying to address.

Q: Should the federal government be doing more to fund supercomputer research and supercomputer capacity?

A: Hack: The federal government should treat supercomputer technology in the same way that it treats other strategically important areas, like those related to national defence and national security. It's too important to the nation's scientific and economic competitiveness to be left to chance.