Nick Turnbull, EMEA sales director at Marathon Technologies, examines the evolution of virtualisation and what it means for companies of all sizes in the future.
It was back in 2004, after an O’Reilly Media conference in the US, that the term ‘Web 2.0’ really began to stick. Since then the technology world can’t get enough of everything ‘2.0’. No part of the technology world is safe from 2.0-ism, and virtualisation, the latest golden child of computing, is certainly no exception. We may or may not like it, but with more and more proponents backing it, Virtualisation 2.0 seems to be here to stay.
IDC was the first to use the phrase Virtualisation 2.0 in late 2006, but only recently has it become a real talking point. In particular, Gartner’s recent ‘Top 10 Strategic Technologies for 2008’ placed Virtualisation 2.0 at number five on the list. The question has to be asked: with businesses still coming to grips with virtualised environments, and promising new virtualisation vendors producing solid alternatives to the market leaders, are we even ready for Virtualisation 2.0? Have we actually had our fill of Virtualisation 1.0?
Perhaps a closer look at what exactly Virtualisation 2.0 is would help. Going back to the beginning, virtualisation is a broad term that refers to the abstraction of computer resources. According to Wikipedia, virtualisation is “a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources.”
When we look at Virtualisation 2.0’s component parts, it still fits very much within this description. So where does the difference lie? Virtualisation 1.0 is about using virtualisation for server consolidation. Virtualisation 2.0 is about dramatically broadening the use cases of virtualisation, which will hasten the adoption rate of the technology as a whole.
Whoever you listen to, there seem to be several central characteristics of Virtualisation 2.0: application mobility, fault-tolerant levels of reliability, desktop virtualisation, and an increased focus on management costs rather than development costs.
Application mobility and availability are at the heart of Virtualisation 2.0. Previously, hardware maintenance of any sort meant at least some planned downtime for applications, be it ten minutes or ten hours, whilst the maintenance took place. Virtualisation allows applications to be moved between hardware resources with no break in service to the end user. This application mobility has the potential to make a massive impact on application availability, and almost entirely eliminates the need for planned downtime.
So what of unplanned downtime? The second core characteristic of Virtualisation 2.0 highlights the need for fault-tolerant availability of applications. There have been solutions for system recovery, and even a level of server failover within virtualised environments, but these solutions still fail to hit even business-critical levels of 99.9 percent uptime. That shortfall matters more than ever when you consider that virtualisation puts multiple applications on a single server. For virtualisation to become truly pervasive, it must be reliable enough to be used for any and all applications, right up to the most mission-critical ones.
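To put that 99.9 percent figure in perspective, the arithmetic behind the "nines" of availability can be sketched in a few lines (a simple illustration, assuming a 365-day year):

```python
# Downtime implied by common availability levels ("nines"),
# assuming a 365-day year (525,600 minutes).

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of unavailability per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

for label, a in [("99.9%   (three nines)", 0.999),
                 ("99.99%  (four nines) ", 0.9999),
                 ("99.999% (five nines) ", 0.99999)]:
    print(f"{label}: {downtime_minutes_per_year(a):6.1f} minutes/year")
```

Even at 99.9 percent, a server can be down for roughly 8.8 hours a year, which is why fault-tolerant-class reliability is pitched at 99.999 percent, around five minutes of downtime annually.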
Unlike live application mobility between physical servers, the fault-tolerance challenge currently has no resolution, and this is clearly a barrier to achieving true Virtualisation 2.0. That said, there are answers to the problem on the horizon: solutions that are likely to take virtualisation availability from its current “ping and pray” reliability to true fault-tolerant-class reliability.