Caches have been used throughout history to temporarily store valuables not in use at a given time. From storing a summer harvest to eat through the winter, to keeping skis or bicycles in the basement during the off season, caching helps us optimise scarce storage space.

Today, it is estimated that more than 2.7 zettabytes of data are stored around the world, but not all of that data is essential all the time. In datacentres, virtualisation and caching have become widely implemented to accelerate application performance for the most critical data and to make efficient use of IT resources.

It is estimated that more than 50 percent of datacentre servers are now virtualised. With more companies virtualising every year, caching is helping businesses maximise performance in virtual environments and run more guest virtual machines (VMs) per host than ever before. As any virtualisation admin knows, virtual environments often experience what is referred to as the I/O blender, which forces sequential writes to become random as multiple applications write to storage at the same time. This can create performance challenges in traditional storage environments. Caching overcomes much of the I/O blender hurdle, giving your VMs the boost they need.
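The I/O blender is easy to see in miniature. The toy Python sketch below (the function name and data layout are my own illustration, not from any product) interleaves several per-VM sequential write streams the way a hypervisor multiplexes them, so the shared array sees a scrambled sequence even though each VM wrote sequentially:

```python
import random

def blend(streams):
    """Interleave per-VM sequential write streams as a shared array sees them."""
    streams = [list(s) for s in streams]  # copy so callers keep their streams
    blended = []
    while any(streams):
        # The hypervisor services whichever VM happens to issue I/O next.
        stream = random.choice([s for s in streams if s])
        blended.append(stream.pop(0))
    return blended

# Three VMs, each writing blocks 0..3 sequentially in its own region.
vm_streams = [[(vm, block) for block in range(4)] for vm in range(3)]
print(blend(vm_streams))  # in order within each VM, random overall
```

Each `(vm, block)` pair stays in order for its own VM, but the combined sequence the storage array receives jumps between regions, which is exactly what defeats the array's sequential-write optimisations.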

Today, there are many ways enterprises are adopting virtualisation and caching into their environments. Let’s take a look at the various stages in virtualisation to determine how enterprises can future proof their datacentre investments.

A Recipe for Success

As CPUs become more powerful, datacentres without virtualisation are at risk of becoming increasingly inefficient if they continue to rely on yesterday’s storage solutions. When data is stored across a network on a traditional storage array, latency is added as applications wait for the information they need to continue working.

Let’s examine this with an analogy: making a stew for dinner. Without caching ingredients - or data - in the pantry, we would essentially be driving to the store separately for each ingredient. In our datacentres, this means added latency for applications if they have to travel to the back-end data store every time they need a piece of data.
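The cost of those extra trips to the store can be put into rough numbers. In the sketch below the latency figures are illustrative assumptions, not measurements - local flash in the tens of microseconds, a networked array in the low milliseconds - and the formula is the standard average-access-time calculation:

```python
def effective_latency_us(hit_ratio, cache_us=50, array_us=2000):
    """Average I/O latency, in microseconds, for a given cache hit ratio."""
    return hit_ratio * cache_us + (1 - hit_ratio) * array_us

print(effective_latency_us(0.0))  # every I/O goes to the array: 2000.0
print(effective_latency_us(0.9))  # roughly 245 with a 90% hit ratio
```

Even a modest hit ratio cuts the average latency dramatically, because most requests never leave the server.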

Virtualising servers also means virtualising and sharing their underlying storage. Furthering our analogy, this is like having all the ingredients for our stew warehoused in our backyard. Requiring far more capacity than necessary, protecting data at each location rather than centrally, and still having to make many small trips for each request all limit the benefits we can realise from virtualisation overall. Caching data on the server helps deliver application performance for virtualisation as fast as the application needs it.

Bare Metal Servers Bearing Cache

Step one in the evolution towards full virtualisation often begins with caching data on bare metal, or physical servers. In our kitchen, bare metal caching is similar to making a grocery list and purchasing all the necessary ingredients at the market at the same time. We still have to make a trip to the store, but we only have to make one trip and one purchase.

Configuring bare metal caching solutions requires minimal work, and they can easily be incorporated into existing infrastructure. By offloading the majority of reads from the SAN and moving them to the cache, the lifespan of an existing environment can be greatly extended.
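As a sketch of that read-offload idea, the minimal cache below (the `ReadCache` class and its dict-backed "SAN" are hypothetical illustrations, not any vendor's API) shows how a small LRU cache absorbs repeated reads so only a fraction of requests ever reach the array:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache sitting in front of a slower backing store."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.san_reads = 0  # how many reads actually hit the array

    def read(self, block, san):
        if block in self.data:
            self.data.move_to_end(block)      # cache hit: mark recently used
            return self.data[block]
        self.san_reads += 1                   # cache miss: go to the SAN
        value = san[block]
        self.data[block] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict least recently used
        return value

san = {b: f"data-{b}" for b in range(10)}     # stand-in for the array
cache = ReadCache(capacity=4)
for b in [0, 1, 0, 1, 2, 0, 1]:
    cache.read(b, san)
print(cache.san_reads)  # only 3 of the 7 reads reached the SAN
```

Real caching products add write handling, persistence and far smarter eviction, but the effect is the same: the SAN serves cold misses while the cache serves the working set.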

Bare metal caching is a simple architecture upgrade that provides an immediate application performance boost and reduction in SAN workload. However, to take advantage of certain software features in virtual environments, like VMware vMotion, cache implementations must be moved up the stack. Ultimately, the closer the cache is to the application, the faster the application will run.

Hypervisor Caching for Virtualisation Features

Caching at the hypervisor level of the stack allows virtualised applications to utilise features like VMware vMotion, which require caching to be built into the hypervisor layer. This comes in addition to the benefits of bare metal caching, such as the ability to overcome the I/O blender.

Hypervisor caching is like having a pantry in our kitchen. While making our stew, we can plan ahead and have most of our ingredients close by. On occasion, we may need to make a run to the store for a special ingredient, but in general our pantry has most everything we need.

Caching on the VM Guest

The newest and most effective way to accelerate virtualisation environments is caching on the guest VM. This delivers the ability to virtualise more guests per host than any other solution currently available. The higher VM density is made possible by keeping data as close to the application as possible.

Guest VM caching is like having all of the ingredients on the counter, ready to go into the pot. This reduces latency even further, since you don’t have to walk back and forth between the pantry and the fridge. As on the popular Iron Chef TV show, having all the ingredients right in front of you allows the dish to be cooked much faster.

Guest VM caching gives the cache the clearest understanding of application requirements. For example, files that are very large or rarely used, such as log files, can be sent to shared storage, while database tables are cached at the VM level, closest to the application, delivering much faster access to critical data.
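A placement policy of that kind might look like the following sketch; the `place` function and its thresholds are purely illustrative assumptions, not an actual product heuristic:

```python
def place(size_mb, accesses_per_hour, cache_limit_mb=256):
    """Decide whether data belongs in the guest cache or on shared storage."""
    # Large or rarely touched data (e.g. log files) would crowd out the
    # working set, so it stays on shared storage.
    if size_mb > cache_limit_mb or accesses_per_hour < 1:
        return "shared_storage"
    # Hot, compact data such as database tables is kept in the guest cache.
    return "guest_cache"

print(place(size_mb=4096, accesses_per_hour=0.1))  # log file -> shared_storage
print(place(size_mb=64, accesses_per_hour=500))    # hot table -> guest_cache
```

The point is that only the guest sees file names, access patterns and application context, so only a guest-level cache can make this distinction reliably.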

While hypervisor or bare metal caching can accelerate virtualisation, caching at the guest VM is much more effective at delivering the intelligence needed to minimise cache misses and reduce real-world latency.

Choosing the Right Caching Solution

No longer do companies have to sacrifice performance when implementing virtualisation solutions. In fact, it is often possible to achieve better performance in virtualised infrastructures by using powerful flash memory caching solutions.

Contemplating virtualisation at any level can seem challenging as IT professionals purchase for today while planning for the future. Finding a unified solution that delivers caching capabilities on any of these three levels - bare metal, hypervisor or guest VM - ensures you can achieve the caching capabilities you need right now while maintaining the ability to seamlessly upgrade as needs evolve at your enterprise.

Bruce Clarke is the Vice President of Virtualisation Solutions at Fusion-io and oversees the development of products for virtualising any type of workload.

