VMware vSphere 4.1 review: Welcome improvements to VMware cloud OS
VMware's vSphere 4.0 cloud operating system, which we tested last June, ushered in new methods to manage virtual machines on internal and external hosts. The 4.1 version, which shipped last month, delivers some much-needed polish. Additions to the product include an updated vCenter Configuration Manager (formerly EMC's Ionix Application Stack Manager and Server Configuration Manager) as well as vCenter Application Discovery Manager (formerly EMC's Ionix Application Discovery Manager).
Prices now range from the free basic hypervisor to Enterprise Plus at $3,495 per processor. Of the features tested, we found vMotion to have the most immediate effect for administrators, although those trying to cram as many VMs as possible onto a physical server will find newly reduced memory overhead and memory compression options to be highly desirable. After all, virtualisation is all about optimisation.
VMware vSphere 4.1 contains a sorely needed feature: the ability to use vMotion to move more than one VM at a time from one server host to another. vSphere 4.1 allows several VMs to move concurrently, but with a small catch.
The catch is that the source and target machines still need to have similar processor types, and the connection between them needs reasonable speed. The two Gigabit Ethernet jacks found on most servers will handle at least four concurrent VM moves, we found in testing; with a 10Gbps switch, enterprise customers can expect to move eight machines at once across a VMware cluster.
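The relationship between vMotion-network bandwidth and concurrent migrations can be sketched as a simple rule of thumb. The thresholds below come from the results we saw in testing (roughly four moves over paired Gigabit links, eight over a 10Gbps link), not from official VMware limits, and the function name is our own illustration.

```python
def max_concurrent_vmotions(link_gbps: float, links: int = 1) -> int:
    """Estimate concurrent vMotion slots from vMotion-network bandwidth.

    Thresholds are illustrative, based on our test results rather than
    documented VMware limits: about four concurrent moves over paired
    Gigabit Ethernet links, and eight over a 10Gbps network.
    """
    total_gbps = link_gbps * links
    if total_gbps >= 10:
        return 8
    if total_gbps >= 2:
        return 4
    # Assumption: with less bandwidth, moves effectively serialize.
    return 1

# Two Gigabit Ethernet jacks, as found on most servers:
print(max_concurrent_vmotions(1, links=2))  # 4
# A 10Gbps vMotion network:
print(max_concurrent_vmotions(10))          # 8
```

The point of the sketch is simply that concurrency scales with the aggregate speed of the vMotion network, not with the number of VMs an administrator queues up.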
These improvements address the problem of quickly getting production virtual machines off a failing hardware platform. When hardware raises alarms that problems are occurring, maintaining production means rapidly moving a dense population of operating system instances to another platform, and eight VMs at a time seems a good number.
In the not-too-distant future, however, growing CPU socket and core counts will lead organisations to pack even more instances into each server, raising the need to move still more instances quickly. And as servers fill with instances, load balancing CPU and other resources becomes more critical. The ability to move groups of VM assets from one server to another goes a long way towards maintaining peak performance from the multi-core servers now popular in network operations centres and data centres.
How we tested
We tested vSphere 4.1 using HP DL580 (16 Intel cores) and HP DL585 G5 (16 AMD cores) servers, along with Dell 1950 servers (eight Intel cores, lots of memory), on a switched Gigabit Ethernet network that is, in turn, peered at 100Mbps, located at nFrame in Carmel, Indiana.
We tried to upgrade from vSphere 4.0 to 4.1 and found problems that VMware's support team rapidly remedied. We tested several of vSphere and vCenter's new features, then set out to test the new concurrent vMotion virtual machine transfer capability. At first, vMotion transfers ran sequentially rather than concurrently. After much tuning (and aid from VMware tech support), we were able to sustain four simultaneous VM transfers. With a 10Gbps switch, we speculate, the full eight concurrent moves might have been possible.
Tracking memory use with vCenter, we tested claims that per-VM memory overhead is now lower, and were able to verify this, but only for 64-bit VMs.