This is the first in a short series of articles on multi-core processing that Techworld will be publishing over the next few days.
In 1965, when he first set out what we now call Moore's Law, Gordon Moore (who later co-founded Intel) said the number of components that could be packed onto an integrated circuit would double every year or so (a figure later amended to 18 months, and recently amended again).
In 1971, Intel's 4004 CPU had 2,300 transistors. In 1982, the 80286 debuted with 134,000 transistors. Now, run-of-the-mill CPUs count upward of 200 million transistors, and Intel is scheduled to release a processor with 1.7 billion transistors later this year.
For years, such progress in CPUs was clearly predictable: Successive generations of semiconductor technology gave us bigger, more powerful processors on ever-thinner silicon substrates operating at increasing clock speeds. These smaller, faster transistors use less electricity, too.
But there's a catch. It turns out that as transistors shrink and operating voltages drop, a significant amount of electricity simply leaks away and ends up generating excessive heat, requiring much more attention to processor cooling and limiting further gains in clock speed -- think of this as a thermal barrier.
To break through that barrier, processor makers are adopting a new strategy, packing two or more complete, independent processor cores, or CPUs, onto a single chip. This multi-core processor plugs directly into a single socket on the motherboard, and the operating system sees each of the execution cores as a discrete logical processor that is independently controllable. Having two separate CPUs allows each one to run somewhat slower, and thus cooler, and still improve overall throughput for the machine in most cases.
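To see this in practice: because the operating system treats each execution core as a discrete logical processor, ordinary software can simply ask how many are available. A minimal Python sketch (not from the article; any language with an equivalent call would do):

```python
import os

# The operating system reports each execution core of a multi-core
# chip as a separate logical processor; os.cpu_count() returns that
# total, so a single dual-core chip shows up here as (at least) 2.
logical_cpus = os.cpu_count()
print(f"Logical processors visible to the OS: {logical_cpus}")
```

On a dual-core machine with one socket, this prints 2 (or more, if the cores also support simultaneous multithreading) -- the software cannot tell whether those processors share a chip or sit in separate sockets.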
Designed for speed
From one perspective, this is merely an extension of the design thinking that has for several years given us n-way servers using two or more standard CPUs; we're simply making the packaging smaller and the integration more complete. In practice, however, this multi-core strategy represents a major shift in processor architecture that will quickly pervade the computing industry. Having two CPUs on the same chip rather than plugged into two separate sockets greatly speeds communication between them and cuts waiting time.
The first multi-core CPU from Intel is already on the market. By the end of 2006, Intel expects multi-core processors to make up 40 per cent of new desktops, 70 per cent of mobile CPUs and a whopping 85 per cent of all server processors that it ships. Intel has said that all of its future CPU designs will be multi-core. Intel's major competitors -- including AMD, Sun Microsystems and IBM -- each appear to be betting the farm on multi-core processors.
Besides running cooler while delivering more overall throughput, multi-core processors are especially well suited to tasks whose operations can be divided into separate threads and run in parallel. On a dual-core CPU, software that can use multiple threads, such as database queries and graphics rendering, can run almost 100 per cent faster than it can on a single-core chip.
However, many applications that process in a linear fashion, including communications, backup and some types of numerical computation, won't benefit as much and might even run slower on a dual-core processor than on a faster single-core CPU.
Power and performance
Two users who have tested AMD's Opteron dual-core chips and moved them into production say they are getting close to double the processing performance of a single-core chip.
Neal Tisdale, vice president for research and development at NewEnergy Associates, an Atlanta-based firm that conducts intensive analytical testing for the natural gas industry, has been using the Opteron dual-core chips supplied on systems built by Sun.
Tisdale says Sun is putting in an address decoder for each CPU, which increases throughput on his four-way machines. Address decoding helps a CPU access memory more efficiently.
But some vendors limit the number of address decoders on the chip, and that crimps performance, says Tisdale. "It actually depends what [server] vendor you buy from as to how much dual-core does for you," he says.
Another industry that sees chip performance as a competitive edge is travel. Customers want to choose from hundreds of flight and hotel options when booking travel online, and it's the system's task to deliver them quickly, says Alan Walker, vice president of technology prototyping and integration at Sabre Holdings. The Southlake, Texas-based company operates Travelocity and other online travel-booking services.
"You can never be fast enough or cheap enough for this type of processing," says Walker. Sabre has always used Opteron chips and began testing the dual-core versions on HP's ProLiant servers, which were shipped to its outsourcer, Electronic Data Systems.
The IT team spent a week testing the system and put it into production two days before AMD officially released the chip in April. "There were no installation issues," Walker says. "They put the same system image on the dual-core machine, and it booted without any problems."
Walker says he anticipates a bright future for dual-core Opteron chips, as long as the software scales reliably across a large number of systems. "These dual-core Opterons are way ahead of anyone else," he says.
Read the second article in this series tomorrow. Kay is a Computerworld contributing writer in Worcester, Mass. You can reach him at [email protected].