Linux clustering software specialist Scali has added support for multi-core CPUs to its interconnect technology.
MPI Connect supports Red Hat Enterprise Linux and SuSE Enterprise Linux on x86, Itanium and Power and now comes with extensions that optimise performance of MPI-based applications running on multi-core CPUs.
The new version optimises parallel application communication over shared memory, allowing MPI processes hosted on the same SMP system to use the shared memory of multi-core CPUs to speed up inter-process communication.
How it works
Jobs are distributed across processors at initial run time, ensuring that resources are used evenly. Once a process is started on a processor, it is bound to that processor for the complete run, so no cycles are wasted swapping processes between processors. The result is improved performance.
"As we continue to develop state-of-the-art innovations like AMD64 technology with Direct Connect Architecture and multi-core processors, we propel the technology sector forward and encourage the IT community to develop software that can push the limits of CPUs," said AMD's server/workstation marketing manager Patrick Patla. "Technologies such as Scali MPI Connect help users to boost the performance of their applications and leverage the computing capability that multi-core AMD Opteron processors provide."
Scali CTO Hakon Bugge said: "This technology highlights the continued leadership that Scali MPI Connect provides in leveraging the latest processor advances to best meet the needs of performance-driven applications. For nearly a decade, cluster compute resources have been two-CPU systems with a single memory controller.
"With the advent of multi-core CPUs, four and more CPUs will be the foundation of compute engines in modern clusters. Scali's skilled team of engineers has made a unique product capable of exploiting technology advances, enabling our end-users to fully realise the power of modern multi-core processors."