So you've rolled up your sleeves and virtualised your data centre or server room to wring more performance out of your existing hardware - and hopefully save a few bucks on power and cooling in the process. The challenge, though, is figuring out whether your virtualised environment is performing as efficiently as it could. While some planners have cobbled together homegrown test methodologies for virtualisation workloads, the standard-bearer has been VMware's VMmark benchmark.

For some IT admins, though, VMmark hasn't proven as practical or easy to run as they might like. Plus, it can be tough to get past the fact that though VMmark is billed as vendor neutral, it comes from a vendor with a vested interest in scoring well in virtualisation tests. Now there's a new benchmark on the block that might prove a more desirable alternative to VMmark: SPEC has unveiled SPECvirt_sc2010, designed to assess both the performance and the power efficiency of various types of workloads in a virtualised environment.

The benchmark, according to SPEC, measures the end-to-end performance of all system components, including the hardware, virtualisation platform, and the virtualised guest operating system and application software. Taking into account that workload types vary from organisation to organisation (or from machine to machine within an organisation), the benchmark supports three types of test apps: a web workload based on SPECweb2005; a Java application server workload based on SPECjAppServer2004; and an IMAP workload based on SPECmail2008.

The workloads are injected at different time periods during the benchmark run, representing the spikes experienced in real-world server environments. Additional sets of virtual machines are added until the overall throughput reaches a peak or workloads fail to meet required quality of service criteria. The test takes around three hours to complete at its default settings, according to SPEC.
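The scaling logic SPEC describes - keep adding sets of virtual machines until throughput peaks or quality of service slips - can be sketched roughly as follows. This is a hypothetical illustration, not the actual SPECvirt_sc2010 harness; the `measure_throughput` and `qos_ok` callbacks stand in for the real workload drivers.

```python
def run_benchmark(measure_throughput, qos_ok, max_sets=64):
    """Add VM sets one at a time; stop when throughput stops rising
    or the QoS criteria fail, and report the peak score reached.

    measure_throughput(n) -> overall throughput with n VM sets running
    qos_ok(n)             -> True while all workloads meet QoS at n sets
    """
    best_score = 0.0
    for vm_sets in range(1, max_sets + 1):
        throughput = measure_throughput(vm_sets)
        if not qos_ok(vm_sets) or throughput <= best_score:
            # The previous set count was the peak configuration.
            return best_score, vm_sets - 1
        best_score = throughput
    return best_score, max_sets
```

In this sketch, throughput typically climbs as sets are added, flattens as the host saturates, and the run ends at the first set count that adds no throughput or breaks QoS.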

Beyond measuring raw performance, the new benchmark can measure power performance - comparable to miles per gallon for a vehicle - for a total system or just a server. Those tests draw on SPECpower_ssj2008 for power measurement.
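The miles-per-gallon analogy amounts to a performance-per-watt ratio. As an illustration only (the real SPEC power metrics are more involved than a single division), the idea looks like this:

```python
def perf_per_watt(score, avg_watts):
    """Performance-per-watt ratio: higher is better,
    like miles per gallon for a server."""
    return score / avg_watts

# e.g. a throughput score of 1200 at an average draw of 400 W
ratio = perf_per_watt(1200, 400)  # 3.0
```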

Neutrality at a price

Among SPECvirt_sc2010's top selling points may be the fact that SPEC is a non-profit organisation with representation from an array of tech companies, including VMware, HP, IBM, Intel, and AMD (among others), all of which contributed to the development of the benchmark. Thus, it's easier to assume that the outcomes of tests using the benchmark aren't biased toward one vendor or another.

VMware, for its part, asserts that VMmark is vendor-neutral, and the benchmark has garnered a degree of industry support despite initial questions of fairness. VMmark has also earned a reputation for being difficult to work with, for taking too long to run, and for onerous test bed hardware requirements. Those requirements include one server with two CPUs, 6GB of RAM, 80GB of disk space, and a 1Gb NIC; one client PC for every tile (a tile being VMmark's unit of work: a group of six virtual machines running mixed workloads), each with two CPUs, 2GB of RAM, and 15GB of available disk space; and a private network for connecting clients to the server.

SPECvirt_sc2010's system requirements are fairly vague: "A [server] running virtualisation technology; one or more client systems to act as the controller and/or load drivers for SPECvirt_sc2010; [network] connectivity of at least a 1Gb switch between systems in the testbed," and "the server must include stable and durable storage."

Interested IT planners should also consider the price tag: VMmark is free, whereas SPECvirt_sc2010 costs $3,000 to license.

More information and initial benchmark results are available at SPEC's website.