Much of the talk about cloud computing in data centres has been about enterprise workloads, but high-performance computing (HPC) customers are getting some options as well.
HPC specialist Platform Computing released some new tools for customers looking to build a "private cloud", or a data centre in which workloads can be transferred from one pool of servers to another to improve utilisation rates and application performance.
Organisations doing HPC work, such as aircraft design or oil and gas exploration, often employ multiple clusters, each assigned to a different application with its own unique operating requirements, such as a particular Linux stack.
Cluster utilisation rates are typically no higher than 40 percent, and customers aren't using the surplus capacity in part because their HPC applications can't be moved easily from one cluster to the next, according to Martin Harris, director of product management at Platform Computing.
The company's new product, Platform ISF Adaptive Cluster, can dynamically change the software stack on a cluster node so that workloads can be moved between clusters, allowing customers to get higher utilisation. Customers must also be using one of Platform's existing workload management products, Platform LSF or Platform Symphony.
"LSF knows the intricate details of the workload itself, so it asks ISF Adaptive Cluster when it needs 10 more Windows nodes, for example. ISF Adaptive Cluster reaches out to the Linux cluster, repurposes those machines [as Windows nodes] and hands the IP addresses back up to LSF so the applications can use them," Harris said.
Most HPC environments aren't using virtualisation yet, he said, so repurposing a node often involves rebooting the server and installing another software image locally or from the network. By moving workloads between clusters in this way, Platform claims that customers beta testing the product have improved utilisation to 80 percent or higher.
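The flow Harris describes — a workload manager requesting a number of nodes running a given operating system, idle machines being reimaged, and their IP addresses handed back to the scheduler — can be sketched roughly as follows. All of the names and data structures here are illustrative assumptions, not Platform's actual APIs:

```python
# Illustrative sketch of the node-repurposing flow described above.
# A workload manager asks a provisioning layer for N nodes of a given OS;
# idle nodes are switched over (standing in for a reboot-and-reimage cycle)
# and their IP addresses are returned for the applications to use.

def repurpose_nodes(idle_pool, needed, target_os):
    """Take up to `needed` idle nodes, switch their software stack, return IPs."""
    granted = []
    while idle_pool and len(granted) < needed:
        node = idle_pool.pop()
        node["os"] = target_os  # in reality: reboot and install a new image
        granted.append(node["ip"])
    return granted

# A Linux cluster with three idle nodes; the scheduler asks for two Windows nodes.
pool = [{"ip": f"10.0.0.{i}", "os": "linux"} for i in range(1, 4)]
ips = repurpose_nodes(pool, needed=2, target_os="windows")
print(ips)  # two IP addresses, now backed by Windows nodes
```

Without virtualisation, the real reimaging step is what makes the operation slow but workable, which is why the payoff is measured in utilisation rather than response time.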
The company has worked with Microsoft to let ISF Adaptive Cluster provision nodes running Windows HPC Server 2008, as well as the main Linux distributions, Harris said. The product is available immediately, though Platform wouldn't disclose pricing.
The company also announced the availability of Platform ISF, a cloud-management platform that has been in beta since it was announced in June. Aimed at both HPC and enterprise environments, it allows a mix of physical and virtual servers to be managed as a single resource and made available to end users through a self-service interface, with the ability to charge business units for the resources they use.
Since announcing the product in June, Platform has added a "cloud bursting" capability to let organisations off-load work during peak hours to a public cloud service such as Amazon Web Services. The work is off-loaded to the public cloud based on user-defined thresholds such as CPU and memory usage.
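Threshold-driven bursting of this kind can be sketched as a simple placement decision. The threshold values, metric names and function below are assumptions for illustration, not Platform ISF's configuration format:

```python
# Minimal sketch of threshold-based cloud bursting: when local CPU or memory
# utilisation crosses a user-defined limit, new work is routed to a public
# cloud instead of the local pool. Thresholds here are illustrative only.

THRESHOLDS = {"cpu": 0.85, "memory": 0.90}  # user-defined limits (fractions)

def placement(metrics, thresholds=THRESHOLDS):
    """Return 'public-cloud' if any metric exceeds its threshold, else 'local'."""
    for name, limit in thresholds.items():
        if metrics.get(name, 0.0) > limit:
            return "public-cloud"
    return "local"

print(placement({"cpu": 0.60, "memory": 0.70}))  # local: both under their limits
print(placement({"cpu": 0.95, "memory": 0.70}))  # public-cloud: CPU over 0.85
```

The design choice is that any single breached threshold triggers the burst, which matches the article's description of off-loading during peaks rather than balancing continuously.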
With AWS, for example, customers package their application and LSF components into an Amazon Machine Image so that it can be uploaded to the Amazon cloud. Platform is working with individual cloud providers, starting with AWS, to create service definitions that allow customers to track, meter and bill for the services used.
Platform ISF is priced on a per-socket or per-core basis for each node under management, though the company wouldn't disclose pricing for that either.