Operating systems and software applications will struggle to cope with the rising number of processors embedded in each new generation of CPU, analyst house Gartner has warned.

"The relentless doubling of processors per microprocessor chip will drive the total processor counts of upcoming server generations to peaks well above the levels for which key software have been engineered," said Gartner in a research report. It warned that operating systems, middleware, virtualisation tools and applications will all be affected.

The report says this will affect organisations, which will face "difficult decisions, hurried migrations to new versions and performance challenges as a consequence of this evolution."

"Looking at the specifications for these software products, it is clear that many will be challenged to support the hardware configurations possible today and those that will be accelerating in the future," said Carl Claunch, VP and analyst in a statement. "The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it."

Software, it seems, is struggling to keep pace not only with the rapid growth in the number of cores per processor, but also with the fact that each core now carries more threads, which compounds the issue. Chips with two and four cores per processor are relatively common now, but high-end servers are already appearing with anywhere from eight to 32 cores.

Claunch said that organisations will see the number of processors per chip double with each generation, roughly every two years.

"In this way a 32-socket, high-end server with eight core chips in the sockets would deliver 256 processors in 2009," the report stated. "In two years, with 16 processors per socket appearing on the market, the machine swells to 512 processors in total. Four years from now, with 32 processors per socket shipping, that machine would host 1,024 processors."

Claunch warns that every operating system and virtualisation product has a hard limit on the total number of processors it supports. "In many cases, these limits are near the maximum server configurations available today."

Hard limits are often documented by the vendor or creator of the product and are therefore relatively easy to discover, but Claunch says that soft limits are uncovered only through word of mouth and real-world cases. "They are caused by the characteristics of the software design, which may deliver poor incremental performance or, in many cases, yield a decrease in useful work as more processors are added," he said.

He argues that increasing the scalability of software so that it can cope with more processors is a slow and difficult task, and that "software will struggle to keep up with the expansion of server processor counts."

Last week, researchers at Sandia National Laboratories found in simulation tests that adding more cores can actually cause performance problems. Going from two cores to four significantly improved performance, but going from four to eight barely improved it, and doubling again to 16 cores dropped performance back to two-core levels. The bottleneck, they said, was the memory bus.
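
To see why a shared memory bus can produce that shape of curve, here is a toy contention model, a rough sketch only: it is not Sandia's simulation, and the contention coefficient is an assumed value chosen so that the output resembles the behaviour described above.

    # Toy model of memory-bus contention (illustrative only). Each core does
    # one unit of work in isolation; contention for the shared bus grows
    # roughly with the square of the number of other cores competing for it,
    # so per-core throughput shrinks as cores are added.
    def useful_work(cores, contention=0.03):
        per_core = 1.0 / (1.0 + contention * (cores - 1) ** 2)
        return cores * per_core

    for n in (2, 4, 8, 16):
        print(f"{n:2d} cores -> total useful work ~ {useful_work(n):.2f}")

With the assumed coefficient, the model gives roughly 1.9 units of work for two cores, 3.1 for four, 3.2 for eight and 2.1 for 16: doubling past four cores buys almost nothing, and 16 cores falls back to around the two-core level, the same pattern the Sandia researchers reported.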

Last year, a senior Intel executive warned that while multicore chips would help meet growing computing demands, they could create more challenges for programmers writing code. Indeed, the warning that software developers are ill-prepared for the world of multicore processors has been echoed for a number of years now.

So what to do? Well, Claunch offers some practical advice. He believes that IT managers should be prepared to run a wider range of operating system releases in production, because organisations installing bigger servers may be forced onto the latest software regardless of the migration strategies in place for their other servers.

He also says that organisations need to carefully evaluate the hard and soft limits to the scalability of important software they plan to deploy, to make sure it will work as expected on the intended hardware platform. Hard partitioning, he suggests, could be a way of overcoming the limitations of many virtualisation hypervisors.