Fundamental research on how to make computer hardware more powerful goes back 50 years or more, but many of the traditional methods have nearly reached their limits. Now, researchers moving in bold new directions may be setting the course of IT for decades to come.

Scientists and economists are attacking dozens of grand challenges, ranging from societal issues to technical advances. Here, we take a look at the challenges in processor performance and miniaturisation.

Processor performance
In 1965, Gordon Moore, a co-founder of Intel Corp., observed that the transistor density of semiconductor chips doubled roughly every 18 to 24 months, which is why each new generation of PCs is about twice as powerful as the one before. Most researchers expect Moore's Law to hold true for at least another 10 years.

But processor performance is reaching its limits, especially in terms of the amount of power that's dissipated by transistors. "As Moore's Law has gone along, it becomes harder and harder to get devices to work at very small scales using CMOS," says Phil Kuekes, senior computer architect in quantum sciences research at HP Labs in Palo Alto.

Keeping it cool
It's becoming increasingly difficult to cool chips economically, says Bijan Davari, an IBM fellow and vice president of next-generation computing at IBM's Thomas J. Watson Research Center in Yorktown Heights, N.Y. The power-density limitation for air-cooled processors is on the order of 100 watts per square centimetre of chip area, says Davari, "and it gets a lot more expensive to inject liquid into the backside of a chip, which could permit cooling to about twice that power density."
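Power density is simply power divided by die area, so Davari's figure is easy to sanity-check. In the sketch below, the 1.5 cm² die area is an assumed, illustrative number, not one from the article:

```python
def power_density(power_watts, die_area_cm2):
    """Power density in watts per square centimetre of chip area."""
    return power_watts / die_area_cm2

# A 150 W processor on an assumed 1.5 cm^2 die sits right at the
# ~100 W/cm^2 limit Davari cites for air cooling.
density = power_density(150, 1.5)
print(density)  # 100.0
```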

Depending on voltage ranges and other factors, Itanium and Pentium 4 chips burn 100W to 150W of power, Davari adds. "For future processors, we'd like to reduce the power dissipation into tens of watts per processor core, and more importantly, we need to contain the power density, or power dissipation per unit area," he says. Given those restrictions, researchers are now focusing on new ways to improve processor performance, such as placing multiple processor cores on a single chip and rethinking chip architecture and design.
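The multi-core idea trades one fast, hot core for several cooler ones, which only pays off if the workload can be divided among them. A minimal sketch of that decomposition in Python (the chunking scheme and four-core count are illustrative, not drawn from any chip discussed here):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Work done by one core: sum of squares over its own chunk.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_cores=4):
    # Split the data into one strided chunk per core, the way a
    # multi-core chip splits work across its processor cores.
    chunks = [data[i::n_cores] for i in range(n_cores)]
    with Pool(n_cores) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # matches sum(x * x for x in data)
```

The answer is identical to the serial computation; the point is that each core touches only its own slice of the data.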

For example, IBM is evaluating -- and in some cases already implementing -- the use of new materials on a chip, such as copper, silicon-on-insulator and silicon germanium, to improve the performance of devices, lower the power density or provide a combination of the two. In addition, the materials allow researchers to fabricate smaller chips that consume less power, Davari says.

And at Austin-based Sematech Inc., researchers are working with so-called low-k materials, which allow metal circuits to be packed closer together on a chip with less risk of electrical signal leakage, says Randy Goodall, associate director of the International Sematech Manufacturing Institute.

To help dissipate heat, IBM is testing cooling gels that would prevent hot spots on a chip. The company is also looking into the potential for water-cooled microprocessors, says Davari.

To help improve bandwidth inside chips, researchers at MIT are working on an effort called the Raw Architecture Workstation Project, where multiple "adders and subtractors" are placed throughout the chip along with storage to provide "neighbourhoods of processing, rather than traversing the entire chip to move data," says Anant Agarwal, a professor in electrical engineering and computer science at MIT. "It's like going to a grocery store closest to me rather than going across the county."
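Agarwal's locality argument can be made concrete with a toy cost model (the grid and cost function below are illustrative, not the Raw chip's actual interconnect): if all data lives in one corner of the chip, the average access must traverse the die, whereas data stored in the computing tile itself costs nothing to reach.

```python
# Toy model: a 4x4 grid of tiles; the cost of an access is the
# Manhattan distance between the computing tile and the data's tile.
GRID = 4

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

tiles = [(x, y) for x in range(GRID) for y in range(GRID)]

# Centralised layout: all data lives at one corner tile (0, 0).
central_cost = sum(manhattan(t, (0, 0)) for t in tiles)

# Tiled layout: each tile computes on data stored in its own tile.
local_cost = sum(manhattan(t, t) for t in tiles)  # always 0 hops

print(central_cost, local_cost)  # 48 0
```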

Meanwhile, researchers at Los Alamos National Laboratory in New Mexico and elsewhere are developing parallel processing systems that would employ tens of thousands of processors and deliver up to 1,000 trillion floating-point operations per second of performance by as early as 2008.

David Patterson, a professor of computer science at the University of California, Berkeley, says he believes researchers will be able to double the number of processors they can place on a single chip every three to four years. "But that's predicated on the amount of software that can make use of it," he warns.
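Patterson's caveat is essentially Amdahl's Law: if a fraction p of a program can run in parallel, n processors deliver a speedup of 1/((1-p) + p/n), so the serial fraction quickly dominates. A quick illustration:

```python
def speedup(p, n):
    """Amdahl's Law: speedup on n processors when a fraction p
    of the work is parallelisable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelised, 1,024 processors
# deliver less than a 20x speedup.
print(speedup(0.95, 1024))
print(speedup(0.50, 1024))  # at 50% parallel, barely 2x
```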

"Programming big parallel computers to exploit their potential has been the show-stopper," says Burton Smith, chief scientist at Cray Inc. in Seattle. "As computers grow from hundreds of processors to tens of thousands of processors in the next few years, this problem is going to increase dramatically."

Cray, IBM and Sun are developing parallel programming languages. Smith says he expects that Cray will make its parallel programming language, called Chapel, available in an open-source version in 2005.

Chip miniaturisation
Today, chips can be made with features as small as 70 to 90 nanometres wide and can hold hundreds of millions of transistors. Last year, HP Labs developed a 64-bit RAM that fits within a single square micron, says Kuekes.

One way to deal with the heat generated by ultra-dense circuits is to reduce the power supply voltage, says Anantha Chandrakasan, associate director of the Microsystems Technology Laboratories at MIT. Over the past 10 years, designers have been able to create circuits that operate from 1V power supplies, and Chandrakasan says designers should reach the half-volt threshold within five years.
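The payoff of voltage scaling follows from the standard dynamic-power relation P ≈ αCV²f (switching activity α, capacitance C, supply voltage V, frequency f). The values below are illustrative, not measurements from the article:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Dynamic switching power: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hertz

# Moving from a 1 V to a 0.5 V supply (all other factors held
# equal) cuts dynamic power by 4x, thanks to the V^2 term.
p_1v0 = dynamic_power(0.1, 1e-9, 1.0, 2e9)
p_0v5 = dynamic_power(0.1, 1e-9, 0.5, 2e9)
print(p_1v0 / p_0v5)  # 4.0
```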

But to scale the power supplies down, the device thresholds must also scale down, which causes leakage power to increase exponentially, says Chandrakasan. To contend with this, he expects increased emphasis on power gating -- cutting off the power supply network when a circuit is idle, using power switches on the chip die -- and on multiple-threshold devices to minimise power requirements.
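The exponential Chandrakasan describes comes from subthreshold leakage, which grows roughly as exp(-Vth/(n·vT)) as the threshold voltage Vth falls (vT ≈ 26 mV at room temperature; the slope factor n below is an assumed value). A sketch of both effects, leakage growth and power gating:

```python
import math

V_THERMAL = 0.026   # thermal voltage at room temperature, volts
N_FACTOR = 1.5      # assumed subthreshold slope factor

def relative_leakage(v_th):
    """Subthreshold leakage relative to I0: ~ exp(-Vth / (n * vT))."""
    return math.exp(-v_th / (N_FACTOR * V_THERMAL))

# Dropping the threshold from 0.4 V to 0.3 V raises leakage ~13x.
growth = relative_leakage(0.3) / relative_leakage(0.4)
print(growth)

# Power gating: a switch disconnects an idle block from the supply,
# so its leakage contribution drops to (nearly) zero.
def idle_leakage(v_th, gated):
    return 0.0 if gated else relative_leakage(v_th)

print(idle_leakage(0.3, gated=True))  # 0.0
```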

Tom Theis, director of physical sciences at IBM Research, says he believes that further refinements in ultra-thin silicon and ultra-thin gate insulators could allow researchers to shrink device-channel lengths from roughly 45 nanometres today to 15 to 20 nanometres within 10 years. He also expects the optical lithography tools arriving over the next 10 years to yield some high-performance chips that dissipate at least 10 times the power of today's chips, as well as chips with 10 times today's memory density.