Virtualising x86 infrastructure isn't a one-step process: as servers change, the whole data centre must change with them. While server hypervisors such as VMware's ESX, Microsoft's Hyper-V and Xen can make IT more efficient and cost-effective, many of virtualisation's advantages can be cancelled out when data centres rely on technology and processes that haven't been updated for the virtualisation age.
In our series of stories, we'll look at three of the "burning questions" related to server virtualisation, namely the impact on storage systems, the risk of VM sprawl, and security risks.
Burning question: How can IT shops reduce server virtualisation's impact on storage?
Virtualising servers without adapting physical storage systems to the unique needs of virtual machines is the kiss of death for any virtualisation project. In addition to consolidating five or 10 applications onto a single server, virtualisation tools from the likes of VMware do "magical things" such as instantly moving workloads from one running server to another and replicating VMs for disaster-recovery purposes, notes analyst Arun Taneja of the Taneja Group.
All of this requires a larger storage buffer for "resume and suspend space", even if that extra space remains unused most of the time. In the past VMware has simply recommended that customers double storage capacity, or at least significantly increase it, Taneja says. If a customer's storage utilisation was at 40%, a typical rate, then in the virtualised world it could drop to 20%, chopping storage efficiency in half.
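Taneja's arithmetic can be sketched in a few lines. The figures below are illustrative, taken from the example above: the same amount of stored data divided by twice the provisioned capacity halves the utilisation rate.

```python
# Illustrative figures from the example above: doubling provisioned
# capacity to create suspend/resume headroom halves utilisation.
used_tb = 40.0          # data actually stored
provisioned_tb = 100.0  # capacity bought before virtualisation

utilisation_before = used_tb / provisioned_tb        # 40%
utilisation_after = used_tb / (provisioned_tb * 2)   # 20%: capacity doubled

print(f"before: {utilisation_before:.0%}, after: {utilisation_after:.0%}")
```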
The key to solving this problem is thin provisioning, a feature enabled by storage virtualisation. Traditional "thick" provisioning dedicates a storage volume to a single application, and any capacity that application doesn't use simply sits idle. Thin provisioning, on the other hand, presents applications with more logical capacity than is physically available, allowing them to share a common pool of storage and consume physical space only as they actually write data.
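The over-commitment idea can be illustrated with a toy model (this is a sketch of the concept, not any vendor's implementation; the `ThinPool` class and its figures are invented for illustration):

```python
class ThinPool:
    """Toy model of thin provisioning: volumes are promised more
    logical capacity than the pool physically holds, and physical
    space is consumed only when data is actually written."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}  # volume name -> logical size promised

    def create_volume(self, name, logical_gb):
        # Over-commitment is allowed: the sum of logical sizes
        # may exceed the pool's physical capacity.
        self.volumes[name] = logical_gb

    def write(self, name, gb):
        # Physical space is allocated only at write time.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical disks")
        self.used_gb += gb

pool = ThinPool(physical_gb=100)
for vm in ("vm1", "vm2", "vm3"):
    pool.create_volume(vm, logical_gb=60)  # 180 GB promised on 100 GB
pool.write("vm1", 20)
pool.write("vm2", 10)
print(pool.used_gb)  # 30 GB physically consumed of 180 GB promised
```

The trade-off, as the `write` method hints, is that an over-committed pool must be monitored: if actual writes approach physical capacity, the administrator has to add disks before the pool runs out.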
With such technology, available in newer versions of VMware's virtualisation software and from storage vendors such as 3Par (just acquired by HP), NetApp and Compellent, customers can increase storage utilisation to 80% and be even more efficient than they were prior to virtualising, Taneja says.
While thin provisioning solves the utilisation problem, it does not address the dreaded "I/O blender effect". As VMs are added to a physical server, the server and the storage behind it must handle more, and more varied, I/O streams. Requests that each VM issues sequentially are interleaved into an effectively random stream of operations, potentially harming the performance of every application.
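A minimal sketch of the blending: two hypothetical VMs each read their own disk region sequentially, but once the hypervisor interleaves the streams, the storage sees block addresses that jump back and forth.

```python
import itertools

# Each VM reads its own region of the disk sequentially...
vm_a = [("vmA", block) for block in range(100, 105)]
vm_b = [("vmB", block) for block in range(900, 905)]

# ...but the hypervisor interleaves the two streams, so the disk
# sees addresses that alternate between distant regions:
blended = list(itertools.chain.from_iterable(zip(vm_a, vm_b)))
print([addr for _, addr in blended])
# [100, 900, 101, 901, 102, 902, 103, 903, 104, 904]
```

On spinning disks this pattern forces constant seeking, which is why random I/O is so much slower than the sequential access each VM thinks it is performing.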
One key to solving this problem, Taneja says, is "wide striping" technology, which distributes the I/O load across many disks, rather than just one or a few. Wide striping can help eliminate storage bottlenecks caused by multiple VMs residing on the same physical server, and thus allow higher VM densities.
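The principle behind wide striping can be sketched very simply: logical blocks are spread across every disk in the pool (round-robin here, purely for illustration), so no single spindle absorbs the whole I/O load of a busy VM.

```python
from collections import Counter

# Toy sketch of wide striping: map each logical block to a disk
# round-robin, spreading the load across the whole pool.
def disk_for_block(block, num_disks):
    return block % num_disks

# 10,000 blocks striped across a 16-disk pool...
load = Counter(disk_for_block(b, num_disks=16) for b in range(10_000))
# ...lands an even ~625 blocks on each disk, rather than
# concentrating them on one or two spindles.
print(sorted(load.values()))
```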
Taneja says the storage industry was "like a deer in the headlights" when the data access problems caused by virtualisation emerged. But much progress has been made in the past 12 months, he adds. While the likes of 3Par and Compellent were on the cutting edge, established vendors such as EMC and Hitachi Data Systems are now following suit, incorporating at least partial storage virtualisation into their existing product lines through firmware upgrades, Taneja says.
At Brandeis University in Massachusetts, which has virtualised nearly all of its workloads with VMware and Xen, director of networks and systems John Turner says he went with a Compellent storage system largely because of its thin provisioning and snapshot technology, which greatly reduces the amount of data needed to store virtual machine disk files.
"Storage is a huge burden on virtualisation," Turner says.