At the International Supercomputing Conference (ISC) in Hamburg this week, Intel announced that all future products based on its Many Integrated Core (MIC) architecture would become part of its newly named Xeon Phi product family.

The move is a clear attempt by Intel to capitalise on the growth of high performance computing (HPC) processor shipments over the last five years by establishing a recognised HPC brand. However, more than 74 percent of the supercomputers on the Top500 list already have Intel inside.

For example, the “SuperMUC” supercomputer at LRZ in Germany – which delivers 2.9 petaflops of performance, making it the most powerful in Europe – runs on Intel Xeon E5 processors. SuperMUC was ranked fourth overall in the 39th edition of the Top500 list of supercomputers.

So if Intel is already dominating the HPC industry, what is the need for Xeon Phi?

According to Raj Hazra, VP and general manager of technical computing at Intel's Data Center and Connected Systems Group, a supercomputer's position on the Top500 list is restricted more by money and power consumption than performance capability.

“There’s no reason we can’t build a number one system today on Xeon E5s. If they added a few more racks to the LRZ system they could have gone up a few more positions, so it’s not an architectural limitation,” Hazra told Techworld.

“As the FLOPs (floating-point operations per second) grow, however, the total system power grows as well, and if you continue to do that it becomes economically untenable. It’s not just about power, it’s a question of whether you can pay for that power every year of operation.”

Hazra said that performance-per-watt is currently the leading metric for the HPC industry. In the US there is a rough benchmark for the cost of power of $1 million per megawatt per year, so a data centre that draws 10MW costs around $10 million a year to power. SuperMUC consumes 3.4MW, which LRZ decided was the maximum it could sustain for its needs.
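To put that arithmetic in concrete terms, here is a back-of-the-envelope sketch in C. The $1 million-per-megawatt-per-year figure is the rough US benchmark quoted above; everything else is illustrative rather than Intel's or LRZ's own accounting.

    #include <stdio.h>

    /* Back-of-the-envelope sketch of the $1M-per-megawatt-per-year rule
     * of thumb quoted above. The figures are illustrative, not Intel's
     * or LRZ's own accounting. */
    int main(void)
    {
        const double cost_per_mw_year = 1e6;  /* USD, rough US benchmark */
        const double supermuc_mw      = 3.4;  /* SuperMUC's stated draw */

        printf("10MW site: $%.1fM per year\n", 10.0 * cost_per_mw_year / 1e6);
        printf("SuperMUC:  $%.1fM per year\n", supermuc_mw * cost_per_mw_year / 1e6);
        return 0;
    }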

However, Intel is determined that performance growth should not stop there. The company is developing Xeon Phi to extend performance through parallelism, rather than simply building bigger and bigger versions of today’s systems.

“What Xeon Phi does is give you much more parallelism than Xeon, and therefore you get a better performance per watt. So with Xeon and Xeon Phi, in the future we have the ability to continue to grow overall performance, but not scale power along those same lines,” he said.

The larger systems get, the less reliable they tend to be, simply because there are more components that can fail. Another advantage of parallelism, according to Hazra, is that increasing performance density also improves the reliability of the system.
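Hazra's reliability argument follows from simple probability: if each node independently survives a given period with probability r, a system of n nodes survives with probability r to the power n, which collapses quickly as n grows. A minimal sketch, with an assumed per-node figure:

    #include <stdio.h>
    #include <math.h>

    /* Illustrative only: if each node independently survives a period
     * with probability r, a system of n nodes survives with probability
     * r^n, so reliability collapses as the component count grows. The
     * per-node figure below is assumed, not a real Xeon statistic. */
    int main(void)
    {
        const double r = 0.9999;  /* assumed per-node survival probability */
        for (int n = 1000; n <= 100000; n *= 10)
            printf("%6d nodes: P(all nodes up) = %.4f\n", n, pow(r, n));
        return 0;
    }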

However, parallelism also brings challenges. For example, the more parallel resources you have, the more attention you have to pay to performance tuning. Compilers and libraries help to take that burden off the programmer, but the programmer still has a role to play.
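As a rough illustration of that tuning burden, consider the same sum written two ways with OpenMP, a standard toolchain on Xeon and Xeon Phi. The loop itself is a made-up example, not anything from Intel's tooling: the first version is correct but serialises every update on a critical section, while the second uses OpenMP's reduction clause, which keeps per-thread partial sums and merges them once at the end.

    #include <stdio.h>

    /* Compile with -fopenmp. The same sum written two ways: the first is
     * correct but serialises every update on a critical section; the
     * second uses OpenMP's reduction clause, which keeps per-thread
     * partial sums and merges them once at the end. */
    int main(void)
    {
        const int N = 10000000;
        double naive = 0.0, tuned = 0.0;

        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            #pragma omp critical   /* heavily contended: threads queue here */
            naive += 1.0 / (i + 1.0);
        }

        #pragma omp parallel for reduction(+:tuned)
        for (int i = 0; i < N; i++)
            tuned += 1.0 / (i + 1.0);

        printf("naive=%f tuned=%f\n", naive, tuned);
        return 0;
    }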

Parallel program debugging can also be problematic, because it is very difficult to reproduce the exact condition you want to debug. However, Hazra believes that as compute capacity increases, people tend to innovate on more parallel algorithms, creating a virtuous cycle.
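The sketch below suggests why those conditions are so hard to capture: an unsynchronised counter update whose final value depends on how threads happen to interleave, so the bug may surface differently, or not at all, on any given run. Again, the example is illustrative rather than drawn from anything Hazra discussed.

    #include <stdio.h>

    /* Compile with -fopenmp. An unsynchronised counter update: each
     * increment is a load, add and store, and threads can interleave
     * between those steps, so the final count varies from run to run.
     * Races like this are why the failing condition is hard to capture. */
    int main(void)
    {
        long counter = 0;

        #pragma omp parallel for   /* data race: no atomic, no critical */
        for (int i = 0; i < 1000000; i++)
            counter++;

        printf("expected 1000000, got %ld\n", counter);
        return 0;
    }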