Storage virtualisation, regulatory demands and new applications are all pushing storage towards a service-oriented architecture, much as SOA has reshaped software, argues Hu Yoshida, vice-president and chief technology officer of Hitachi Data Systems. As well as allowing storage to be pooled and shared, he says, virtualisation could de-duplicate not just your data but your storage services too.

"Today you see a bunch of niche products - virtual tape libraries (VTLs), NAS, archiving and so on, all with their own islands of storage, and each with the same basic functions to perform," he says.

"We believe that virtualisation is the way to provide those storage services, such as mirroring and replication. Virtualisation means the application doesn't have to be interrupted for big data movement."

He gives the example of NAS supplier BlueArc, which HDS invested in last year. "BlueArc has its own storage; we worked with them to separate their NAS head from their storage and put our storage behind it instead," he says, adding that HDS has now done the same with Archivas, a digital archiving supplier which it is buying, and Diligent, whose VTL and de-duplication technology it resells.

"My vision is of offering up services to applications," he says. "It's not only services such as volume pooling, but we can do back-end replication for an application that doesn't have replication itself."

Part of the argument is that offloading storage services onto high-powered hardware dedicated to the purpose - such as HDS's TagmaStore USP and NSC storage controllers - will make it possible for applications such as document management to scale past their current limitations, he says.

But it also means that storage services become re-usable - like modular software or Web 2.0 elements going into a mash-up - and can be leveraged across multiple applications or combined in new ways.
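To make that mash-up analogy concrete, here is a hedged sketch of what re-usable services could look like: if each service exposes the same simple write interface, services can be wrapped around one another and recombined without touching the application. The compression and mirroring services below are illustrative stand-ins, not a description of HDS's stack:

    import zlib

    class Volume:
        def __init__(self):
            self.blocks = {}

        def write(self, block, data):
            self.blocks[block] = data

    class Compress:
        # Wraps any volume-like object, compressing data on the way down.
        def __init__(self, below):
            self.below = below

        def write(self, block, data):
            self.below.write(block, zlib.compress(data))

    class Mirror:
        # Wraps any volume-like object, duplicating writes to a second one.
        def __init__(self, below, copy):
            self.below, self.copy = below, copy

        def write(self, block, data):
            self.below.write(block, data)
            self.copy.write(block, data)

    # The same two services combined in different ways, with no change to
    # the application code that calls write():
    stack_a = Mirror(Compress(Volume()), Compress(Volume()))   # compress each copy
    stack_b = Compress(Mirror(Volume(), Volume()))             # compress once, then mirror
    stack_b.write(0, b"the same application code drives either stack")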

"For example, Diligent can de-duplicate 250TB to 10TB, so our vaulting application just has to vault 10TB. The services could reside on systems in front of the storage controller, or on blades in our storage unit," Yoshida says.

That storage controller is central to the SOA storage concept, however. He claims that only the approach chosen by HDS, which puts the virtualisation engine inside the storage controller, will make it feasible, and that the alternatives, which virtualise either in the servers or via an appliance in the SAN, present too many limitations.

"When virtualisation started five years ago, they were trying to do volume management in the network. That can be a problem though, because it runs on PCs, and if a cluster node goes down you're exposed," he says.

"The problem with virtualisation in the network is there's a lot of data to move around and log. In the control unit, you can do it. Also, most appliances have limited cache, and there can be availability problems if you lose the tables. We have a lot of cache, and a single image of that cache - our controller is 128 processors sharing one global image."

"To do services, you have to have shared global cache, and lots of it. We have 256GB, most PCs only have 4GB. Going to 64-bit processors makes it faster, you still have the updating problem though."

So will the concept of SOA storage fly? Yoshida says the question has already been answered.

"The proof points are there already," he says. "We have done it with Diligent and Archivas, and now BlueArc too."