By 2015, Intel will have transformed computing thanks to extraordinary new levels of performance, the company's research and development chief has claimed.

Its current forays into multi-core processors and virtualisation will evolve into more sophisticated technologies and lead to computers that interact with people and anticipate their needs, according to Justin Rattner, speaking at the Spring Intel Developer Forum (IDF).

"We want technology to become more natural, where we can have a conversation with a variety of information devices that populate our world," Rattner said.

To make this vision work, Intel needs to develop technologies that will usher in the "era of tera" that Rattner's predecessor Pat Gelsinger talked about at last year's Spring IDF. Hardware developers will need to build systems capable of processing terabytes of data on chips generating teraflops (one trillion floating point operations per second) of activity to enable a world of intelligent computing, Gelsinger said at the time. As part of Intel's reorganisation in January, Gelsinger now heads up Intel's Digital Enterprise Group, and long-time researcher Rattner is in charge of technology development.

This year's IDF has been stuffed full of dual-core processor briefings and explanations of new virtualisation technologies built into processors and chipsets. In 2015, Intel wants to be capable of mass producing what Rattner called "many-core" processors with hundreds of processing cores, and to virtualise other parts of the computer such as graphics controllers or storage.
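As a rough illustration of the scale involved, the back-of-envelope calculation below shows why a chip with hundreds of cores points toward the teraflop territory Gelsinger described. The core count, clock speed and per-core throughput are assumed values for illustration, not Intel projections.

    # Back-of-envelope estimate of aggregate throughput for a hypothetical
    # many-core chip. Core count, clock speed and FLOPs-per-cycle are
    # illustrative assumptions, not Intel specifications.
    cores = 100                 # low end of "hundreds of cores"
    clock_hz = 3.0e9            # assumed 3 GHz clock
    flops_per_cycle = 4         # assumed floating point ops per core per cycle

    peak_flops = cores * clock_hz * flops_per_cycle
    print(f"Peak throughput: {peak_flops / 1e12:.1f} teraflops")  # ~1.2 teraflops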

Intel is working on a new programming language called Baker that might help programmers take advantage of chips with hundreds of cores, Rattner said. The company is testing the language on some of its networking processors, which have multiple processing engines, and demonstrated how it can sense changing workloads and allocate processing cores as needed, powering down unneeded cores to cut energy consumption.
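Intel has not published details of Baker, but the behaviour demonstrated, monitoring load and gating cores that are not needed, can be sketched in ordinary Python. The helper functions and thresholds below are hypothetical stand-ins, not a real Baker API or Intel code.

    # Hypothetical sketch of workload-aware core allocation, in the spirit of
    # what was demonstrated. queue_depth() and set_core_power() are stand-ins
    # for runtime and power-management facilities, not a real Baker API.
    TOTAL_CORES = 16
    JOBS_PER_CORE = 8          # assumed target load for each active core

    def queue_depth():
        """Return the number of pending jobs (placeholder for a real sensor)."""
        return 42              # illustrative fixed value

    def set_core_power(core_id, enabled):
        """Enable or gate one core (placeholder for a power-management call)."""
        print(f"core {core_id}: {'on' if enabled else 'off'}")

    def rebalance():
        # Size the active pool to the observed workload, then gate the rest
        # so unneeded cores stop drawing power.
        needed = min(TOTAL_CORES, max(1, -(-queue_depth() // JOBS_PER_CORE)))
        for core in range(TOTAL_CORES):
            set_core_power(core, core < needed)

    rebalance()                # in a real system this would run periodically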

Another demonstration showed how a PC could simultaneously run two different graphics-intensive workloads on a virtualised graphics controller. Each application believed it had full access to the graphics controller, and coming hardware advances will make sharp, three-dimensional graphics available to both applications at once, Rattner said.
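Rattner did not describe the mechanism, but the general idea of device virtualisation, in which each application talks to what looks like a dedicated controller while a thin layer routes commands to one physical device underneath, can be sketched roughly as follows. All class and method names here are invented for illustration and do not reflect Intel's implementation.

    # Illustrative sketch of graphics virtualisation: two applications each
    # hold what looks like exclusive access to a graphics controller, while
    # a thin layer routes their commands to one physical device.
    class PhysicalGPU:
        def execute(self, owner, command):
            print(f"[GPU] ({owner}) {command}")

    class VirtualGPU:
        """The handle an application sees; it behaves like a dedicated device."""
        def __init__(self, name, physical):
            self._name = name
            self._physical = physical

        def draw(self, command):
            # The application is unaware other virtual GPUs share the hardware.
            self._physical.execute(self._name, command)

    gpu = PhysicalGPU()
    app_a = VirtualGPU("app A", gpu)
    app_b = VirtualGPU("app B", gpu)

    app_a.draw("render scene, frame 1")
    app_b.draw("render scene, frame 1")   # both believe they own the controller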

Of course, all of these new technologies will require a steady supply of data from memory, or their full capabilities will never be reached. New packaging technologies such as stacked chips or stacked wafers would allow more data to flow back and forth between a CPU and a memory chip, Rattner said.

Intel's recent work in developing silicon lasers could also pave the way for optical interconnects on chips, another way the transfer of data around a chip could approach the speed of light, Rattner said.