Nick Turnbull, EMEA sales director, Marathon Technologies, examines the evolution of virtualisation, and what it means for companies of all sizes in the future.

It was back in 2004, after an O’Reilly Media conference in the US, that the term Web 2.0 really began to stick. Since then, the technology world hasn’t been able to get enough of anything ‘2.0’. No part of the technology world is safe from 2.0-ism, and virtualisation, the latest golden child of computing, is certainly no exception. Like it or not, with more and more proponents backing it, Virtualisation 2.0 seems to be here to stay.

IDC was the first to use the phrase Virtualisation 2.0, in late 2006, but only recently has it become a real talking point. In particular, Gartner’s recent ‘Top 10 Strategic Technologies for 2008’ placed Virtualisation 2.0 at number five on the list. The question has to be asked: with businesses still getting to grips with virtualised environments, and promising new virtualisation vendors producing solid alternatives to the market leaders, are we even ready for Virtualisation 2.0? Have we actually had our fill of Virtualisation 1.0?

Perhaps a closer look at what exactly Virtualisation 2.0 is would help. Going back to the beginning, virtualisation is a broad term that refers to the abstraction of computer resources. According to Wikipedia, virtualisation is “a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources.”

When we look at Virtualisation 2.0’s component parts, it still seems to fit very much into this description. So where does the difference lie? Virtualisation 1.0 is about using virtualisation for server consolidation. Virtualisation 2.0 is about dramatically broadening the use cases of virtualisation, which should hasten the adoption of the technology as a whole.

Whoever you listen to, there seem to be several central characteristics of Virtualisation 2.0: application mobility, fault-tolerant levels of reliability, desktop virtualisation, and an increased focus on management costs rather than development costs.

Application mobility and availability are at the heart of Virtualisation 2.0. Previously, hardware maintenance of any sort meant at least some planned downtime for applications, be it ten minutes or ten hours, whilst the maintenance took place. Virtualisation allows applications to be moved between hardware resources with no break in service to the end user. This application mobility can have a massive impact on application availability and all but eliminates the need for planned downtime.
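
As a rough illustration of what such a move looks like in practice, the sketch below uses the open-source libvirt toolkit’s Python bindings to live-migrate a running virtual machine between two hosts. The host URIs and the guest name are hypothetical, and libvirt is only one of several interfaces for this kind of operation (VMware and XenServer ship their own equivalents), so treat it as a sketch rather than a recipe.

```python
# Illustrative sketch only: live-migrating a running virtual machine
# between two hypothetical hosts using the libvirt Python bindings.
# The host URIs and the guest name ("app-server") are made up for this example.
import libvirt

SOURCE_URI = "qemu+ssh://host-a.example.com/system"   # current host (assumed)
DEST_URI   = "qemu+ssh://host-b.example.com/system"   # maintenance target (assumed)

def live_migrate(domain_name: str) -> None:
    src = libvirt.open(SOURCE_URI)
    dst = libvirt.open(DEST_URI)
    try:
        dom = src.lookupByName(domain_name)
        # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied,
        # so the application stays available to end users throughout the move.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    live_migrate("app-server")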

So what of unplanned downtime? The second core characteristic of Virtualisation 2.0 highlights the need for fault-tolerant availability of applications. There have been solutions for system recovery, and even a level of server failover, within virtualised environments, but these solutions still fail to hit even the business-critical level of 99.9 per cent uptime. That is a problem when you consider that virtualisation puts multiple applications on a single server. In order for virtualisation to become truly pervasive, it must be reliable enough to be used for any and all applications, right up to the most mission-critical ones.
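
To put that 99.9 per cent figure in context, the quick back-of-the-envelope calculation below converts a few commonly quoted availability tiers into downtime per year; the tiers and their labels are generic industry shorthand rather than figures from IDC or Gartner.

```python
# Back-of-the-envelope: what different availability levels mean in downtime
# per year (8,760 hours). The tiers and labels are generic industry shorthand.
HOURS_PER_YEAR = 365 * 24  # 8,760

tiers = [
    ("business-critical (99.9%)", 0.999),
    ("high availability (99.99%)", 0.9999),
    ("fault-tolerant class (99.999%)", 0.99999),
]

for label, availability in tiers:
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{label}: ~{downtime_hours:.1f} hours "
          f"(~{downtime_hours * 60:.0f} minutes) of downtime a year")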

Unlike live application mobility between physical servers, the fault-tolerance challenge currently has no resolution, and this is clearly a barrier to achieving true Virtualisation 2.0. This said, there are answers to the problem on the horizon: solutions that are likely to take virtualisation availability from its current “ping and pray” reliability to true fault-tolerant-class reliability.

Desktop virtualisation is something that has increasingly made headlines as a technology of the near future, and it is also an aspect of Virtualisation 2.0 that is yet to be proven. The concept of desktop virtualisation is perhaps the pinnacle of a virtualised environment: the PC desktop no longer exists in the same physical space as the actual PC. It offers organisations an opportunity to make the virtual machine the single smallest unit of management, handled from a central point.

In many ways this style of environment has been on offer for some time, in the guise of thin-client computing. Desktop virtualisation, however, offers a previously unavailable level of geographic flexibility combined with all the management benefits of traditional virtualisation. It potentially solves all the issues currently associated with a lack of central control over remote and home workers, whilst incidentally raising the security discussion to a whole new level.
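
As a loose sketch of what treating the virtual machine as the smallest unit of management might look like from a central console, the snippet below enumerates guests across a set of desktop-hosting servers using the libvirt Python bindings. The host URIs are invented for the example, and any real desktop virtualisation product would wrap this sort of inventory in its own management layer.

```python
# Illustrative sketch: enumerating desktop VMs across several hosts from one
# central script, so each guest becomes the unit of management.
# Host URIs are hypothetical; libvirt is just one possible toolkit here.
import libvirt

HOST_URIS = [
    "qemu+ssh://desktop-host-1.example.com/system",  # assumed
    "qemu+ssh://desktop-host-2.example.com/system",  # assumed
]

for uri in HOST_URIS:
    conn = libvirt.open(uri)
    try:
        for dom in conn.listAllDomains(0):
            state = "running" if dom.isActive() else "stopped"
            print(f"{uri}: {dom.name()} ({state})")
    finally:
        conn.close()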

There is a natural knock-on effect to all this centralisation: a shift of focus away from production costs and towards management costs. According to IDC, virtualisation return on investment averages around 25 per cent, most of which comes from hardware consolidation savings. This said, Virtualisation 2.0 implies that Virtualisation 1.0 is already in place, and production costs belong to the set-up stages of virtualisation. With so much of the emphasis for the future of virtualisation being on simplified management and desktop virtualisation, this shift towards management costs is perhaps not surprising.
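
Purely to illustrate the shape of that consolidation arithmetic, the sketch below works through a hypothetical estate; every input (server count, per-server running cost, consolidation ratio and the size of the investment) is an assumption chosen for the example, not an IDC figure.

```python
# Illustrative only: rough consolidation-savings arithmetic.
# All inputs are assumptions chosen for the example, not IDC data.
physical_servers_before = 100        # assumed estate size
consolidation_ratio = 5              # assumed VMs per host after virtualising
annual_cost_per_server = 3_000       # assumed power, space and support cost per year
virtualisation_investment = 192_000  # assumed licences, shared storage, services

servers_after = physical_servers_before // consolidation_ratio
annual_saving = (physical_servers_before - servers_after) * annual_cost_per_server
first_year_return = (annual_saving - virtualisation_investment) / virtualisation_investment

print(f"Hosts after consolidation: {servers_after}")
print(f"Annual hardware and operating saving: £{annual_saving:,}")
print(f"First-year return on the investment: {first_year_return:.0%}")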

In a nutshell, this is Virtualisation 2.0. Are we any wiser? Are we ready for this approach? It seems clear that much of this development is already underway. Application mobility is already widely available within VMware and XenServer. The recent acquisition of XenSource by Citrix points clearly to a future of virtualised desktops, and a wide range of other vendors are planning to offer such services. Indeed, all that’s needed for Virtualisation 2.0 to become a real possibility is a fault-tolerant solution… so watch this space.

But do we need it, and is it real enough to deserve its own moniker of Virtualisation 2.0? During interviews in mid-2006, Tim Berners-Lee himself questioned whether the term Web 2.0 could be used in any meaningful way, arguing that many of the technology components of “Web 2.0” have existed since the early days of the Web. This very much reflects the situation with virtualisation. The characteristics I’ve outlined above could easily be considered just another part of virtualisation as it already exists, rather than an entire generational shift.

In terms of usefulness, however, being able to describe a set of virtualisation characteristics under a single name certainly is helpful, and places the stages of development within clear parameters. This technology is moving so quickly that we need a frame of reference to give ourselves an anchor in a sea of three-letter acronyms, and Virtualisation 2.0 does that. Personally, I understand and appreciate its purpose, and I don’t believe it’s just more marketing spiel.