Moore's Law will continue to be a viable business proposition for many years to come, because when it is no longer possible to scale in two dimensions, chip manufacturers will simply grow in the third dimension, according to Steve Pawlowski, chief technology officer of Intel's Digital Enterprise Group.

Moore's Law is a rule of thumb in the history of computing hardware whereby the number of transistors that can be placed on an integrated circuit doubles every 18 to 24 months. The law is named after Intel co-founder Gordon E Moore, who described the trend in his 1965 paper, “Cramming more components onto integrated circuits”.
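As a rough, purely illustrative sketch of the doubling trend, the compound growth can be worked out in a few lines of code. The 24-month doubling period and the 1971 starting point (the Intel 4004's roughly 2,300 transistors) are assumptions chosen for the example, not figures supplied by Intel.

```python
# Illustrative only: project transistor counts under Moore's Law, assuming a
# 24-month doubling period and the Intel 4004's ~2,300 transistors (1971)
# as an arbitrary starting point.

def projected_transistors(start_count, start_year, target_year, doubling_years=2.0):
    """Project a transistor count forward by compound doubling."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * (2 ** doublings)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```

Run forward to 2011, the projection lands in the low billions of transistors, which is broadly where high-end chips of that era sat.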

His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. However, the law cannot continue indefinitely, because transistors will eventually reach the physical limits of miniaturisation at atomic levels.

Last month, scientists at the University of New South Wales, Australia, claimed to have created the first working transistor from a single phosphorus atom, placed with near-atomic precision, which could keep the development of processors on track with Moore's Law until at least 2020. However, most industry commentators expect the law to reach its limit between 2013 and 2018.

Speaking to Techworld at the Xeon E5 launch event in London last week, Pawlowski asserted that while the number of transistors cannot continue to grow forever in two dimensions, the law can be extended by stacking multiple wafer-thin pieces of electronic-grade silicon, known as dies, on top of one another.

“I'm always asked, is Moore's Law going to end? Well maybe some day, but I plan on working for the company for another 10 years, and I don't see us not continuing on the Moore's Law trend in that time frame,” said Pawlowski. However, a key challenge will be removing the heat generated as more transistors are packed into an ever smaller volume.

“My guess is there'll be some potential trade-off between the cost of cooling the chip and performance optimisation,” he said. “We always run financial analysis to work out whether we're keeping up with the two-year cadence, and we will run financial analysis to determine when it makes sense to stack multiple dies, versus trying to grow in one dimension.”

Focus on efficiency

Intel operates what it describes as a tick-tock model. Every two years, the company aims to increase transistor density in line with Moore's Law, known as a tick, resulting in higher performance levels and greater energy efficiency. In alternate years, known as tocks, Intel uses the previous year’s manufacturing process to introduce a new microarchitecture, which enables new capabilities.
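To make the alternation concrete, the short sketch below lays out the cadence. The year-to-step mapping is a simplified approximation for illustration, not Intel's actual roadmap.

```python
# Simplified illustration of the tick-tock cadence: a "tick" shrinks the
# manufacturing process for an existing microarchitecture, while a "tock"
# introduces a new microarchitecture on the existing process.
cadence = [
    ("tick", 2007, "process shrink to 45nm (Penryn)"),
    ("tock", 2008, "new microarchitecture on 45nm (Nehalem)"),
    ("tick", 2010, "process shrink to 32nm (Westmere)"),
    ("tock", 2011, "new microarchitecture on 32nm (Sandy Bridge)"),
]
for phase, year, description in cadence:
    print(f"{year} {phase}: {description}")
```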

In 2004, Intel and other microprocessor manufacturers found that they could no longer keep increasing clock speeds, because the chips produced too much heat, so they began producing chips with multiple cores (processing units) to increase total performance instead.

Intel's recently released Xeon E5 chip falls into a tock cycle: it brings the Sandy Bridge microarchitecture, introduced on desktop chips last year, to the server line, and supports up to eight cores, providing an 80 percent improvement in performance and a 50 percent improvement in energy efficiency compared to the previous Xeon 5600 series chips, which had six cores.

“Performance is still a key metric, but so is energy efficiency,” said Pawlowski. “My very first customer meeting after coming back from the labs was with a supercomputer company. I sat down, and before I could even give them my business card and introduce myself they said, 'We hate your multi-core strategy.' I asked why. They said, 'Because we want you to improve memory bandwidth and I/O bandwidth as long as you're putting more cores in the processor.'”

Pawlowski explained that, for customers running high performance computing (HPC) environments, such as the European Organisation for Nuclear Research (CERN) near Geneva, power can often be limited, and this has a knock-on effect on performance. By integrating the memory controller onto the chip, customers can make more efficient use of the memory bandwidth and reduce latency.

“Integrating the memory controller onto the chip gives us greater influence over the entire system footprint, not just the CPU,” he added. “Working with network vendors, we can do a better job of power managing, but over time it may make sense to have more network functionality closer to the processor to allow us to do finer-grain power control.”

Pawlowski said that Intel's E5 processors manage power more intelligently than chips based on the Nehalem architecture, due to a new version of the Turbo Boost overclocking mechanism.

“In a very low utilisation scenario, we'll slow the processor down, turn down the memory unit – we'll save power across the whole platform and we'll take credits for it,” he explained. “As those credits build up, and we come to a high utilisation environment, the power control unit will kick in and push the machine way above the thermal design point for a short period of time, until we can guarantee that it's reliable, and start dropping down again.”

This results in a 12 to 14 percent improvement in performance, compared to just a two to three percent improvement with the previous version of Turbo Boost on Nehalem, said Pawlowski.
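A highly simplified sketch of the credit-based idea Pawlowski describes is shown below. The names, power figures and accounting are illustrative assumptions made for the example, not Intel's actual power control unit algorithm: while utilisation is low the platform runs under its sustained budget and banks the headroom as "credits", then briefly spends them to exceed the budget when load spikes.

```python
# Illustrative sketch of a credit-based turbo scheme. All numbers are made up.
SUSTAINED_BUDGET_W = 95.0   # hypothetical thermal design point
MAX_CREDITS_J = 500.0       # hypothetical cap on banked energy headroom

def step(power_draw_w, credits_j, dt_s=1.0):
    """Advance the power accounting by one interval and decide if turbo is allowed."""
    credits_j += (SUSTAINED_BUDGET_W - power_draw_w) * dt_s   # bank or spend headroom
    credits_j = max(0.0, min(credits_j, MAX_CREDITS_J))
    turbo_allowed = credits_j > 0.0
    return credits_j, turbo_allowed

credits = 0.0
for draw in [40, 40, 40, 130, 130, 130, 130]:   # idle phase, then a burst of load
    credits, turbo = step(draw, credits)
    print(f"draw={draw:>3}W credits={credits:6.1f}J turbo_allowed={turbo}")
```

In the idle phase the credits build up; during the burst the draw exceeds the sustained budget and the banked credits drain away, after which the controller would drop the chip back down.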

The Big Data battle

Many of the new features that have been introduced with the E5 family of processors are aimed at tackling Big Data. Pawlowski said that, as Intel's CTO for the data centre, he is constantly asked about the cloud and where Big Data should be stored, but he warned that “one cloud does not fit all”.

For example, he said that the computing centre at CERN generates more raw data in a minute than most organisations do in a year, so they have to do a lot of filtering up front. “They could be throwing away good science because they simply don't have the capability to store that volume of data,” he said.

At the other end of the spectrum, oil and gas companies that take seismic measurements in the field may want to keep that data forever because, as machines become more powerful and the fidelity of the models improves, they can use it to find oil where they couldn't see it before.

Pawlowski said that data will continue to drive everything that Intel does. However, networks are going to have to become more sophisticated, and more distributed in terms of their control structure, in order to deal with some of the challenges posed by Big Data.

“Big Data has me worried,” he said. “I've had some conversations with the military where their soldiers are in the field. They're actually part of the network – think of them as the end device. They're collecting data about the battle situation, and they're all being integrated into a larger collection of information, giving commanders in the rear echelons a realistic view of the field, so they can give the appropriate data to the soldiers in order to help them do their job.

“If you happen to lose a soldier, you lose a point of the network, and you've got to reconfigure and re-heal that network. That all has to be done ad hoc. There isn't going to be some grand node manager that figures that out, because those things are going to be so remote, you've got to be able to have some intelligence between those devices to be able to handle it. Having something this centralised would just not scale to where these things are going to go in time.”
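As a loose illustration of that re-healing idea (not anything Intel or the military actually runs), the sketch below drops a node from a small hypothetical mesh and checks, using only the remaining neighbour links, whether the surviving nodes can still reach one another without any central manager.

```python
# Illustrative only: a tiny mesh of field devices. Losing one node means the
# remaining nodes must re-discover routes among themselves.
from collections import deque

mesh = {  # hypothetical adjacency list of radio links
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

def remove_node(graph, lost):
    """Drop a lost node and every link that pointed at it."""
    return {n: nbrs - {lost} for n, nbrs in graph.items() if n != lost}

def reachable(graph, start):
    """Breadth-first search: the set of nodes still reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return seen

healed = remove_node(mesh, "B")   # node B goes silent
print(reachable(healed, "A"))     # the remaining nodes still reach each other via C and E
```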

Xeon vs RISC

Pawlowski believes that, with the improvements in Intel's latest processors, Xeon is in a strong position to step up its attack on the RISC-based Unix market. RISC, or reduced instruction set computing, is a CPU design strategy based on the philosophy that simplified instructions can provide higher performance.

“I was in the old RISC/CISC wars, where everybody said CISC (complex instruction set computing) is dead, RISC is going to take over,” he said. “With Moore's Law, and the ability to put more capability on a given die area, there is no reason why an architecture like x86 doesn't have the robustness to continue to support that. The capability and the reliability is there.

“So now we're able to offer greater RAS (reliability, availability and serviceability) functionality for a lower cost than what people were used to in the high-end space,” he added. “When you tie in performance with reliability, it now becomes a very compelling argument for a RISC replacement.”

He said that Xeon chips have become much more capable in terms of overall raw performance, and the differentiation between Xeon and RISC in terms of features is also fading away. Where, traditionally, there would have been a trade-off between desktops and servers, now Intel is able to build server-specific parts and client-specific parts onto the same chip.

“Xeon is an extremely capable part for mission-critical systems,” said Pawlowski. “We're building more and more of that capability going forward because people are willing to pay for it and, in the server space, we're able to differentiate in a different way than in the core space.”

Steve Pawlowski has worked for Intel since 1982 and led the design of the first Multibus I Single Board Computer based on the 386 processor. He was a lead architect and designer for Intel's early desktop PC and high performance server products, and was co-architect for Intel's first P6-based server chipsets. As well as being CTO of the Digital Enterprise Group, he is an Intel Senior Fellow.