CERN, the European particle physics organisation, is embracing server virtualisation and cloud computing technology to improve CPU utilisation and the delivery of computing resources to scientists around the world.
CERN, which uses Red Hat's version of the Xen hypervisor as well as Microsoft's Hyper-V, has just installed private cloud software from Platform Computing to automate the process of managing virtual infrastructure. Virtualisation and advanced management tools will help deliver compute cycles more efficiently to 10,000 researchers from 85 countries.
"It will greatly facilitate our ability to deliver resources to the users," says Tony Cass, group leader for fabric infrastructure and operations at CERN. "Users always want more CPU cycles so they can evaluate one more scenario or try out one more thing. The more cycles we can squeeze out of the fixed amount of resources we have, the more physics they can do."
Platform has billed its Platform ISF software as a "private cloud" tool that aggregates servers, storage, networking tools and hypervisors to create a shared pool of physical and virtual resources. An announcement from Platform Computing credits the software with helping CERN build "the world's largest cloud computing environment for scientific collaboration."
Cass says he doesn't actually view the project as a cloud initiative, but says it improves the institute's grid network by making better use of virtual resources. For example, Platform ISF makes sure the proper amount of resources is dedicated to different applications, such as different versions of an operating system. Platform also determines which virtual machines are placed on particular pieces of hardware, making sure that enough network bandwidth is allocated to each VM, and ensures that VMs are taken offline once they are no longer needed.
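The placement decision described above, matching VMs to hosts without oversubscribing resources such as network bandwidth, can be illustrated with a toy first-fit heuristic. This is a hypothetical sketch of the kind of decision involved, not Platform ISF's actual algorithm; all names and figures are invented for illustration.

```python
# Toy first-fit-decreasing placement: assign VMs to hosts without
# exceeding each host's network bandwidth capacity (in Mbit/s).
# Illustrative only -- not Platform ISF's real placement logic.

def place_vms(vms, hosts):
    """vms: {vm_name: bandwidth_needed}, hosts: {host_name: capacity}.
    Returns {vm_name: host_name}; raises if a VM cannot be placed."""
    remaining = dict(hosts)
    placement = {}
    # Place the most bandwidth-hungry VMs first.
    for vm, need in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host, free in remaining.items():
            if free >= need:
                placement[vm] = host
                remaining[host] = free - need
                break
        else:
            raise RuntimeError(f"no host has {need} Mbit/s free for {vm}")
    return placement

if __name__ == "__main__":
    hosts = {"node1": 1000, "node2": 1000}
    vms = {"batch-a": 600, "batch-b": 500, "batch-c": 400}
    print(place_vms(vms, hosts))
```

A real scheduler would weigh several dimensions at once (CPU, memory, bandwidth) and rebalance continuously, but the core constraint-satisfaction step looks like this.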
Platform also automates the process of live migration, allowing VMs to move from one physical box to another without being turned off, Cass said.
So far, CERN is running a few hundred VMs on the Intel-based x86 servers that make up its batch environment, which serves the scientific community. CERN could potentially have 60,000 or more VMs running batch jobs in the future. Cass wants to move batch jobs to VMs aggressively over the next year, but the rate of adoption depends partly on user acceptance.
With the Large Hadron Collider having just recently come online, "people will be wary of too many changes in the computing facilities we're providing, but if we can demonstrate that virtualisation is perfectly safe there's no reason we wouldn't migrate most of the batch [jobs to VMs] by the end of next year," he said.
Users won't get a self-service interface as extensive as Amazon's Elastic Compute Cloud, but they will be able to choose among a few preconfigured software stacks, each comprising an operating system, compiler and other software.
CERN, which also uses an earlier Platform product, the Platform LSF grid workload management software, hopes its latest undertaking will improve system utilisation by about 15 to 20 percent.
CERN has to process and distribute more than 15 petabytes of data to researchers per year, all in near real time, and has 60,000 CPU cores to manage the load. Ultimately, Cass says CERN may use Platform ISF, the new product, to manage "every single machine we've got."