Anyone who has worked in the IT industry will know that a lot of time is spent coming up with marketing messages for potential new technologies, and then even more time is spent clearing up all the confusion that has been generated.
This is even more the case in the storage industry, where a great deal of any consultant or vendor's time is spent simply explaining to end users what all the terms we use mean and, more importantly, where each piece of technology actually fits. A prime example is the word virtualisation. So many companies now have a virtualisation strategy (often meaning quite different things) that the word seems to have ceased having any technical meaning and become a piece of marketing jargon: does it still have a place in today's storage world?
So, what do we mean by virtualisation? Quite simply, we are talking about a virtual or imaginary object rather than a real or physical one. Virtualisation, then, is the process of taking real, physical objects and using them to give access to virtual, imaginary ones - it is about flexibility.
Taking a nice non-IT demonstration of the principle: wouldn't it be great to own a virtual car? When you get up in the morning you could jump into an open-top sports car, because it's a nice sunny day and you are just popping out for a drive; or into a pickup, because you have a lot of stuff to take to the tip; or into an MPV, because you are doing the school run. If you were rich (and not too environmentally conscious) you would own three cars. The cheap way of doing this would be to share three cars with three neighbours and then, crucially, have some management mechanism for allocating the resource each of you actually needs each time you want to use your virtual car.
Virtualisation in the Storage Market
So, when we are talking about storage virtualisation we are talking about taking our physical disk spindles, tape drives, and other storage media, putting them all into some sort of pool, and then providing someone with the storage resource they actually need, when they need it. Of course, in some senses we have had simplistic virtualisation for quite a long time. In the disk world we have RAID, implemented in either hardware or software, whereby the operating system sees a physical disk of a certain size, performance, and availability, but in fact we are using a set of disks and striping them, perhaps with parity, or mirroring them; and if the result is more space than we want, perhaps partitioning it and presenting a virtual disk of just the needed size. Very often you have the ability to add more spindles on the fly to increase the size of the virtual disk.

With the older systems this was all fairly manual - the operator had to build up and break down the physical disks into logical space allocated to servers. In some cases this might even have been a multi-stage process of building up, splitting down, and building up again. With some of the newer storage arrays this virtualisation has gone a step further: quite literally, you do not know where your data really exists within the array - you just tell the array to create a volume of a given size with certain performance and availability levels.

The idea is not restricted to disk. For tape systems we have RAIT, RAIL, and the like, and we have backup systems that can treat space on disk as a virtual tape drive, allowing nearline backup capabilities. Then we build up to the areas of business continuity, disaster recovery, mirroring between arrays, and snapshots. Here again we are generally looking at a logical piece of storage which is in fact replicated between arrays - just another level of virtualisation.
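The pooling idea described above can be sketched in a few lines of code. This is a minimal, illustrative model (the class and volume names are invented for this example, not any array's real API): physical spindles are aggregated into one capacity pool, and virtual disks of arbitrary size are carved out of it, with the host never knowing which spindles its blocks actually live on.

```python
# Illustrative sketch of storage pooling: spindles in, virtual volumes out.
class StoragePool:
    def __init__(self, spindle_sizes_gb):
        # Total usable capacity is simply the sum of the physical spindles;
        # a real array would subtract parity or mirror overhead per RAID level.
        self.capacity_gb = sum(spindle_sizes_gb)
        self.allocated_gb = 0
        self.volumes = {}

    def create_volume(self, name, size_gb):
        # The operator asks only for a size; placement across spindles
        # (striping, mirroring, parity) is the pool's problem, not theirs.
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        self.volumes[name] = size_gb

pool = StoragePool([73, 73, 73, 73])      # four 73 GB spindles -> 292 GB pool
pool.create_volume("oracle_data", 100)    # hypothetical volume names
pool.create_volume("exchange_logs", 50)
print(pool.capacity_gb - pool.allocated_gb)  # remaining free capacity: 142
```

Adding a spindle on the fly is then just a matter of growing `capacity_gb` - which is exactly why the virtual disk can be resized without the server noticing.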
In reality, then, we have been working with simplistic virtualisation for some time, using operating system capabilities, hardware controllers in the server, and the external capabilities of large disk arrays. However, what most people now mean by virtualisation is taking this idea of pooling to another level.
In a storage network with many connected servers, storage arrays, and tape systems, it would be nice when allocating storage to abstract even further, so that the operator simply defines the budget and service level of the storage he wants, and some agent looks at all the storage on the SAN and presents him with what he needs. The funny thing is that for at least four years there have been devices, appliances, and systems that do exactly this. One example came from a company called RaidPower, later known as Storage Appliance and resold by Dell as a PowerVault something or other; the company was then purchased by HP prior to the HP/Compaq merger. This box, which was certainly being sold at least four years ago, sat in the middle of the SAN and, to all the servers, looked like a storage array; it took all the real arrays and allowed you to mirror and stripe across them to provide the servers with what they actually needed. There have been several other products since - some software offerings where you built the appliance yourself, some purchased as appliances just like the RaidPower. Yet these systems have never achieved more than a niche market - though the vendors in question would be upset by such a statement. Why is this? On the one side, the virtualisation message has been one of hiding the complexity and using whatever servers and storage you want.
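The in-band appliance model can also be sketched in code. The classes below are hypothetical (no vendor's product works exactly this way): the appliance sits between servers and arrays, presents virtual volumes, and satisfies each request by placing mirrored legs on whichever physical arrays have free space.

```python
# Hedged sketch of an in-band virtualisation appliance mirroring across arrays.
class Array:
    def __init__(self, name, free_gb):
        self.name = name
        self.free_gb = free_gb

class Appliance:
    def __init__(self, arrays):
        self.arrays = arrays

    def create_mirrored_volume(self, size_gb):
        # A mirrored virtual volume needs size_gb on TWO different arrays;
        # the requesting server never learns which arrays were chosen.
        candidates = [a for a in self.arrays if a.free_gb >= size_gb]
        if len(candidates) < 2:
            raise ValueError("not enough arrays with free space to mirror")
        legs = candidates[:2]
        for a in legs:
            a.free_gb -= size_gb
        return [a.name for a in legs]

# Hypothetical arrays from two different vendors, pooled behind the appliance.
appliance = Appliance([Array("cheap_array", 500), Array("fast_array", 200)])
legs = appliance.create_mirrored_volume(150)
print(legs)  # ['cheap_array', 'fast_array']
```

The point of the sketch is that mirroring across arrays - even arrays from different OEMs - falls out of the design for free, which is precisely the heterogeneity benefit, and the support headache, discussed below.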
The reality is that the waters have always been a bit muddy when it comes to formally qualified solutions, in a market where in practice the server and storage OEMs have strict rules on firmware levels, patch levels, and SAN topologies. On top of this there have always been concerns about all of one's data going through a single appliance - on availability, performance, and scalability grounds. So you find lots of discussion of "in the data path" versus "out of the data path" systems. There have also been a number of other announcements between various hardware and software players.
However, it is not all doom and gloom. In the tape world, both STK with the SN6000 and IBM with its VTS have quietly been selling tape virtualisation products which hide the reality of what your tape libraries are and give you flexible access for backup and restore.
More importantly, several of the OEMs have announced that they are working with one or more of the established Fibre Channel switch vendors to bring out virtualisation technology. This OEM focus will make all the difference between a solution of technical interest and something that end users will be prepared, quite literally, to bet their business on. If Humpty Dumpty falls off the wall, who will be able to put him back together again?
Has the term lost its meaning?
There is no doubt that there is some scepticism, and it could be argued that many people have lost confidence in virtualisation. It is certainly true that most end users, if asked, would probably struggle to identify which vendors sell virtualisation products today, despite the general market hype - even though they could all name the FC switch vendors and at least some of the storage management products.
Also, many end users are just a little worried about virtualisation and the danger that, if something goes wrong, they will not be able to put their data back together again. Many have openly said that there are far higher things on their priority list, like storage management - making it easier to do day-to-day jobs such as allocating capacity to servers, which today involves using storage, switch, and server management tools separately. Of course the virtualisation vendors will say that virtualisation is a vital part of storage management, to the extent that the boundary between the terms virtualisation and resource management has blurred.

It is important to realise that we have been virtualising our storage for quite some time. However, it is only now, with OEM backing, that we are starting to get a real focus on deliverable functionality that can help storage administrators, rather than a confusing muddle of technologies. There are some challenges still to overcome. The first is that the sort of virtualisation described here really only helps people with many storage arrays, and indeed only where all of those arrays are connected to the same storage network - a larger network than most end users have today. The biggest challenge, though, is the very same thing that is moving virtualisation into a deliverable technology: the OEM. By definition, a big advantage of virtualisation is virtualising across heterogeneous storage - across cheap arrays from OEM A and expensive, powerful arrays from OEM B - migrating data from one OEM's platform to another, and replicating between heterogeneous platforms.
For all that the OEMs talk about heterogeneity, and indeed in some areas support it, they are a long way from being happy with the level of heterogeneity that customers may consider a key benefit of virtualisation. From a sales perspective, the OEMs will usually support third-party backup devices - after all, most OEMs are well known and established for selling either disk or tape, but not both. In some cases the OEMs will support different storage systems from within their own range, but when it comes to supporting multiple vendors' disk, this typically means some form of additional consultancy or premium support offering, if they will do it at all.
Compare the complexity of the SNIA open SAN support matrix with the normal OEM support matrix and you will see just how much limitation is imposed on a heterogeneous SAN compared with the single storage platform SANs the OEMs normally support. In reality customers have different storage platforms, but keep them in separate SAN islands in order to ensure support from their suppliers - clearly a compromise that limits the possibilities for virtualisation.