Multicore chips will help meet growing computing demands, but they could create new challenges for programmers writing code, according to a senior Intel executive.
As the technology develops at a fast rate, the challenge for developers is to adapt to programming for multicore systems, said Doug Davis, vice president of the digital enterprise group at Intel, during a speech at the Multicore Expo in Santa Clara, California. Programmers will have to move from programming for single-core processors to multiple cores, while future-proofing their code so it continues to scale as additional cores are added to a system.
Programming models can be designed to take advantage of hyperthreading and the parallel processing capabilities of multiple cores, boosting application performance in a cost-effective way, Davis said. Intel is working with universities and funding programs to train programmers to develop applications that solve those problems, he said.
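To illustrate the kind of parallel decomposition Davis is describing, here is a minimal Python sketch, not Intel's tooling, that splits one independent job across worker processes so each chunk can run on its own core. The chunk size and worker count are arbitrary choices for the example.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Sum the integers in [start, end) -- a stand-in for any
    # independent chunk of work that can run on a separate core.
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n = 1_000_000
    # Split the range into four independent chunks.
    chunks = [(i, i + 250_000) for i in range(0, n, 250_000)]
    # One worker process per chunk; the pool maps chunks onto cores.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(range(n))
    print(total)
```

The same code keeps working if more cores (and more chunks) become available, which is the future-proofing Davis describes: the decomposition, not the core count, is what the programmer fixes.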
Intel, along with Microsoft, has donated $20 million to the University of California at Berkeley and the University of Illinois at Urbana-Champaign to train students and conduct research on multicore programming and parallel computing. The centres will tackle the challenge of writing programs for multicore processors that carry out more than one set of program instructions at a time, a scenario known as parallel computing.
Adapting legacy applications to take advantage of multicore processing is another challenge coders face, Davis said. Writing code from scratch is the ideal option, but it can be expensive. "The world we live in today has millions of lines of legacy code ... how do we take legacy of software and take advantage of legacy technology?" Coders will need to weigh what's best for their system, Davis said.
Every major processor architecture has evolved quickly, following the pace described by Moore's Law, which states that the number of transistors in a given area of silicon doubles roughly every two years. Now, however, the challenge is to deliver performance within a defined power envelope. Power consumption is driving multicore chip development, and programmers need to write code that works within that envelope, Davis said.
Adding cores to a chip is a better power-saving way to boost performance than cranking up the clock frequency of a single-core processor, Davis said: it increases performance while holding down power consumption.
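A back-of-the-envelope sketch shows why the cores-versus-clock trade-off works out this way. The figures below are illustrative assumptions, not Intel data: dynamic CMOS power scales roughly as C·V²·f, and because the supply voltage must rise with frequency, power grows roughly with the cube of the clock speed.

```python
def relative_power(freq_scale):
    # Dynamic power ~ C * V^2 * f; assuming voltage scales with
    # frequency, relative power grows roughly as freq_scale ** 3.
    # This cube-law model is a common approximation, not measured data.
    return freq_scale ** 3

# Option A: one core clocked 40% higher -> ~1.4x throughput
single_fast = relative_power(1.4)        # ~2.74x baseline power

# Option B: two cores, each clocked 20% lower -> ~1.6x throughput
dual_slow = 2 * relative_power(0.8)      # ~1.02x baseline power

print(f"single fast core: {single_fast:.2f}x power")
print(f"two slower cores: {dual_slow:.2f}x power")
```

Under these assumptions the dual-core option delivers more aggregate throughput for roughly the baseline power budget, while the single fast core nearly triples it, which is the power-envelope argument Davis makes.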
In 2007, about 40 percent of desktops, laptops and servers shipped with multicore processors; by 2011, about 90 percent of PCs shipping will be multicore systems. Almost all Microsoft Windows Vista PCs shipping today are multicore, Davis said.
Intel is also working on Polaris, an experimental 80-core chip that delivers teraflops of performance.
"We're not only talking about terabit computing, but the terabyte sets [of data] we can manage." Davis said. Users are consuming and storing tremendous amounts of data now, and in a few years, the amount of data should reach zettabytes, Davis said.
The next "killer" application for multicore computing could be tools that enable the real-time collection, mining and analysis of data, Davis said. For example, military personnel using wearable multicore computers could simulate, analyse and synthesise data in real time to show how a situation might unfold, without putting those personnel at risk, Davis said.
"These types of applications have taken weeks to do ... now these types of applications are literally running in minutes," Davis said.