There are three and a half main players in storage area network (SAN) storage virtualisation: DataCore, HDS, IBM and EMC. DataCore's is a software-only model, running on industry-standard x86 hardware, either as a Windows application or as a virtual machine in a variety of virtual server environments.
The other three are controller-based: IBM with its proprietary hardware SAN Volume Controller (SVC); HDS with its proprietary virtualising array controller, the USP-V; and EMC with its SAN fabric director-based InVista. The main three, DataCore, HDS and IBM, have each sold thousands of copies of their products. EMC has apparently sold only somewhere in the low hundreds, which is unusual for such a normally energetic company.
The oddity is emphasised by the fact that DataCore's SANsymphony is available packaged as a virtual machine (VM) application in EMC's VMware environment, whereas InVista is not.
Techworld spoke with George Teixeira, DataCore's CEO, about the SAN storage virtualisation market and about DataCore's intentions.
Techworld: How do you view IBM's SVC?
GT: It's a proprietary hardware base for the software. They could move it to x86 hardware if they wanted to. IBM has lots of excellent engineers. SVC is a drag-along. They get money by selling their hardware. Most SVC sales are to existing IBM accounts.
Our resellers that also resell IBM tell us that, unless it's an existing IBM account, they won't even propose it.
IBM seems to be in a position where it's hard to move forward. They still don't have thin provisioning in their product. This lateness is pretty apparent in the industry.
Techworld: How do you view EMC's InVista?
GT: There's no energy behind InVista. It hasn't got thin provisioning. I think their model is to keep the storage intelligence in the storage controller. InVista is more of a routing mechanism. It does routing but storage control is in the storage controller itself.
If you have DataCore then you can use commodity disk and don't need clever controllers. EMC's model is to sell more clever controller disk arrays.
Techworld: What's common amongst SAN virtualisers?
GT: Storage intelligence is an application. A SANsymphony or an SVC is millions of lines of code. It needs a sophisticated O/S to run it. DataCore uses Windows. SVC uses AIX. EMC's Clariion has its own O/S.
The storage intelligence app and the O/S underneath need a computer. It could be a server, a blade or a virtual server - it just doesn't matter. For example, IBM puts SVC onto a blade that can run in a Cisco switch. Whatever you do, you need a pretty beefy CPU, memory and O/S.
A server is a server is a server. Give me processing; give me memory; give me an I/O channel; if you give me these components I can run your storage. The VM model is the right model.
We're all in-band now because, that way, you can control performance; you can cache on the path.
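Teixeira's point, that an in-band virtualiser sits directly in the I/O path and can therefore cache and shape performance in a way an out-of-band metadata service cannot, can be sketched in a few lines. The class name and the dict-backed "disk" below are illustrative assumptions for this article, not DataCore or IBM code:

```python
class InBandVirtualizer:
    """Toy virtualisation layer sitting in the I/O path.

    Because every read and write passes through it, the layer can
    keep a block cache "on the path", absorbing latency that a
    metadata-only (out-of-band) virtualiser never sees.
    """

    def __init__(self, backend):
        self.backend = backend   # back-end disk: block number -> data
        self.cache = {}          # in-memory block cache

    def write(self, block, data):
        self.cache[block] = data      # cache on the path...
        self.backend[block] = data    # ...and write through to disk

    def read(self, block):
        if block in self.cache:       # cache hit: no disk access at all
            return self.cache[block]
        data = self.backend.get(block)
        self.cache[block] = data      # populate cache for next time
        return data


disk = {}
v = InBandVirtualizer(disk)
v.write(7, b"hello")
assert v.read(7) == b"hello"   # served from the cache on the path
```

A real in-band product would add cache eviction, mirroring and failover, but the structural point stands: intercepting the data path is what makes caching, and hence performance control, possible.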
Techworld: Now that SAN virtualisation has pretty much all its bases covered by DataCore are you going to move up the stack?
GT: Yes we are. We are going to blur the lines between network-attached storage (NAS) and SAN. This is not the same as putting on a NAS head. We have customers doing that but you don't get integration.
Regarding storage virtualisation we are now where server virtualisation was three years ago. The virtual server people understand virtual storage better than hardware-oriented storage people.
Techworld: Our impression here is that Teixeira thinks that virtual servers are changing the virtual storage game strongly. In a virtual server world almost everything is an application, including SAN virtualisation.
DataCore's SANsymphony, SANmelody and Traveller products represent a storage virtualisation base with services layered on top of it: thin provisioning; snapshots; and roll-back, because Traveller has recorded all the writes.
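Those two layered services are easier to picture with a toy model. The sketch below is a hypothetical illustration, not DataCore's implementation: thin provisioning means physical extents are allocated only when first written, however large the logical volume claims to be, and roll-back replays a recorded write log backwards, which is essentially what Traveller's capture of all writes enables:

```python
class ThinVolume:
    """Toy thin-provisioned volume with a write log for roll-back."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # advertised (logical) size
        self.extents = {}   # physical storage, allocated on first write
        self.log = []       # (block, previous_value) pairs for roll-back

    def write(self, block, data):
        if not 0 <= block < self.logical_blocks:
            raise IndexError("beyond advertised volume size")
        # Record the prior contents so any write can later be undone.
        self.log.append((block, self.extents.get(block)))
        self.extents[block] = data        # physical allocation happens now

    def read(self, block):
        return self.extents.get(block, b"\x00")  # unwritten blocks read as zero

    def allocated(self):
        return len(self.extents)          # physical consumption, not logical

    def rollback(self, n_writes):
        """Undo the last n_writes by replaying the log backwards."""
        for _ in range(n_writes):
            block, previous = self.log.pop()
            if previous is None:
                self.extents.pop(block, None)   # first write: de-allocate
            else:
                self.extents[block] = previous  # restore prior contents


vol = ThinVolume(logical_blocks=1_000_000)  # presents as a huge volume
vol.write(0, b"a")
vol.write(1, b"b")
assert vol.allocated() == 2      # but only two extents physically exist
vol.rollback(1)
assert vol.read(1) == b"\x00"    # the last write has been undone
```

The point of the toy is the economics Teixeira is selling: the volume advertises a million blocks but consumes storage only for the two that were written, and the write log turns any point in time into a recoverable state.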
These same services could be used by a file-based storage entity, NAS, as well as by DataCore's existing block-based entity, SANsymphony/melody.
If we view NAS as just another storage application, using base storage services layered onto a virtualisation layer running on an O/S inside a VM, then it's logical, isn't it, to think of other, higher-level storage stack services that could be added alongside such a NAS application.
What DataCore wants to do, I think, is to make VM applications' use of storage services easier and more seamless, so that as a VM app is instantiated, adjusted and moved from one physical host to another, the VM-packaged storage services, be they file- or block-based, are instantiated, provisioned, protected, modified and moved alongside it.
These virtualised storage services are just-another-VM-app and just as flexible and just as efficient at driving up hardware utilisation as VMware itself. Also, just as effective at driving down (storage) hardware cost.
Where does this leave non-VM-packaged storage virtualisation software? I think Teixeira believes it leaves it in a dead end, a hole from which it will have to emerge, and that will take a long time.
There is a new competitive arena opening up, with the players being those disk storage suppliers that are closely integrated with VMware. That means DataCore is now going to compete with virtualised iSCSI SAN array providers such as Dell's EqualLogic, LeftHand Networks and NetApp.
The company is also competing with enterprise storage array providers such as 3PAR, as well as EMC, HDS and IBM. Its message here is that commodity x86 server processors are beefy enough to provide all the hardware resources required. You don't need proprietary hardware to get the performance any more.
The core DataCore message has three elements to it: think Windows; think virtual servers; think storage services packaged as VMs.