Virtualisation is widely recognised as a game-changing technology, helping organisations to reduce their capital and operational costs and to improve underlying business processes. But the impact that this technology is having on the storage layer at the bottom of the stack is often overlooked.

This is something of a bugbear for Michael Heffernan, chief technology officer for virtualisation at Hitachi Data Systems. Heffernan explained that hypervisors have allowed organisations to pile more and more virtual machines (VMs) onto their servers, but as the load on these servers grows, storage is coming under increasing pressure.

“You only have two ways to scale: you scale deep, by buying a more powerful storage array, or you scale out, by adding more and more blocks and connecting them with a network,” Heffernan told Techworld. “Virtualisation technologies came from mainframe, and the thing about mainframe is you've got to have really big enterprise storage at the back end that scales deep.”

In spite of this, many companies are scaling out, because storage vendors are still predominantly producing low-end products that are designed for the SMB market. However, while adding blocks in a modular fashion might make sense for small businesses, it is not sustainable for massive enterprises with 20,000 VMs and 250 storage arrays, according to Heffernan.

With scale-out storage, VMs need to be copied between the different arrays, but some VMs can be as large as 2TB, so transferring them across a network can be very slow. When you multiply this thousands of times, the system often becomes unworkable.
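
A back-of-the-envelope calculation illustrates the scale of the problem. The sketch below is not from the article; it assumes the copy runs at the link's theoretical line rate with no protocol or storage overhead, which real migrations never achieve.

```python
# Rough, idealised arithmetic for copying a single 2TB VM image across
# a network link, assuming full line rate and zero overhead (both of
# which are optimistic assumptions, used here for illustration only).
VM_SIZE_BYTES = 2 * 10**12   # a 2TB virtual machine image

def transfer_hours(link_gbps: float) -> float:
    """Ideal transfer time in hours at a given link speed."""
    bytes_per_second = link_gbps * 10**9 / 8
    return VM_SIZE_BYTES / bytes_per_second / 3600

for gbps in (1, 10, 40):
    print(f"{gbps:>2} Gbps link: ~{transfer_hours(gbps):.1f} hours per 2TB VM")
```

Even under those ideal conditions, a single 2TB VM occupies a 1Gbps link for more than four hours and a 10Gbps link for the best part of half an hour; repeated across thousands of VMs and hundreds of arrays, the traffic quickly overwhelms the network.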

Big enterprises need to scale deep

By scaling deep rather than scaling out – adding storage underneath a single controller and managing it all from one place – a lot of the cost and complexity can be removed, according to Heffernan. However, this approach requires educating the market, something that virtualisation vendors are reluctant to do.

“Companies like VMware and Red Hat can’t educate too much, because the more they educate, the more they’ll hurt their ecosystem,” he said. “Companies like NetApp will struggle because they don’t have the technology to support scale deep storage. So the virtualisation vendors are in a balancing act.”

The upshot of this is that networks are becoming overloaded, increasing the risk of downtime. In a virtualised or cloud environment, downtime can be catastrophic, because when a server crashes it can affect hundreds of applications.

“The biggest challenge with network is you’ve got bandwidth, and you can keep adding bandwidth but there’s a limit, and then it can break,” said Heffernan. “Telecommunications companies are struggling around the world because no one has got any money to upgrade the networks. They are creating a bottleneck.”

Heffernan said that one company that is struggling with the scale-out approach to enterprise storage is Hewlett-Packard. Despite recently buying storage vendor 3PAR, HP still partners with Hitachi for large-scale enterprise storage, and its P9500 storage array is simply a rebranded version of Hitachi's VSP enterprise storage array.

“HP have a bit of a challenge because, in all of the stuff that they've bought, from a storage perspective they still don't have the enterprise footprint. All of their products are small and medium business. So in the enterprise area they had to partner with Hitachi to get that part of the portfolio,” he said.

Having traditionally catered to a very niche market of military, finance and government organisations, Hitachi Data Systems is now seeing its customer base expand, due to the growth of virtualisation and Big Data. While Hitachi's storage products are expensive compared to scale-out storage offerings, Heffernan claims they are “bullet-proof”.

Back to basics

It is essential to match the technology with the hardware it runs on, he said. So if an IT manager wants to run hypervisor technology, the organisation needs to invest in hardware that can support it.

“When you're dealing with hypervisor technology – VMware, Red Hat, XenServer – they're all Linux-based hypervisors with clustered shared file systems. Those volumes need random I/O, and the only hardware that supports this is SSD, or what they call wide striping, where you wide stripe with thin provisioning. That's the only way to get random workloads.”
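
The point about random I/O comes down to the access pattern itself. The sketch below is a simplified illustration rather than anything Heffernan or Hitachi describe: it issues small reads at scattered offsets in a file, the pattern a shared datastore carrying many VMs tends to produce. The path, file size and block size are assumptions, and for a real measurement you would use a purpose-built benchmark with direct I/O.

```python
import os
import random
import time

# Illustrative sketch of the random-read pattern many co-located VMs
# generate on a shared datastore: lots of small reads at scattered
# offsets instead of one long sequential stream. The path, sizes and
# counts below are assumptions, and the printed figure is only
# indicative (the page cache and a sparse test file distort it); use a
# dedicated benchmarking tool with direct I/O for real measurements.
PATH = "/tmp/io_pattern_demo.bin"   # hypothetical scratch file
FILE_SIZE = 1 * 1024**3             # 1 GiB
BLOCK = 4096                        # 4 KiB, a typical guest I/O size
READS = 10_000

# Create the scratch file if it is not already there.
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    with open(PATH, "wb") as f:
        f.truncate(FILE_SIZE)

with open(PATH, "rb") as f:
    start = time.time()
    for _ in range(READS):
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
    elapsed = time.time() - start

print(f"{READS / elapsed:,.0f} random {BLOCK // 1024}K reads per second")
```

Spinning disk handles that pattern poorly because every scattered read costs a seek, which is why Heffernan points to SSD, or wide striping with thin provisioning across many spindles, as the hardware that can absorb it.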

However, many organisations are still not doing this, according to Heffernan. Instead, they are running their hypervisor technology on any old storage and layering software on top to hold it all together. While software can do a lot of the work, there is a limit, said Heffernan, and as the load grows, "something's got to give".

“NetApp are digging themselves into a big hole at the moment because they're trying to sell a lot more software,” he said. “Technically speaking, I always try to go back to basics, just go back to being very simple, very economic, very green.”

The key challenge for chief technology officers today, according to Heffernan, is to simplify their data centre infrastructures. With vendors all trying to create their own markets based on their own technologies, it is up to these CTOs to push back and demand simpler solutions that do the job that needs to be done.

“When you're an infrastructure guy, which is pretty much most customers, you have to deal with a lot of different vendors,” said Heffernan. “You've got your mainframe sitting there working, you've got your Unix systems, over there you've got some Windows, and over here a little bit of VMware – it's just a mess.”

Some CTOs end up buying a converged solution and getting locked into a vendor contract, because everyone is telling them different things and they do not want to deal with the problem. But Heffernan says these integrated infrastructures can be more trouble than they're worth, because they mask incompetencies rather than fixing them.

Consolidation is key

Some of Hitachi's customers are now looking at consolidating their infrastructure onto just two or three platforms. Heffernan says that the hypervisor market is now mature enough to justify organisations simply choosing one and making it a standard platform.

“A very large company in Germany has already made the decision to have mainframe and VMware as their two platforms. They have even got rid of all their NAS. In other words they worked out that 95 percent of their apps could run on either one of those platforms,” he said.

“That's common sense. Most people have some web servers, some Citrix, some Windows, Microsoft Exchange – all of these miscellaneous things. If you are really practical and you've got some common sense, you can put all that stuff on VMware or Red Hat Enterprise Virtualisation. It might not be all the bells and whistles, but underneath it's robust, and it will stay up and running.”

Above all, organisations need to be wary of the hype around cloud computing, said Heffernan. Hitachi is currently working with Cisco to create a blueprint for a data centre infrastructure that is open and, unlike an integrated stack, does not lock customers in.

“I don’t like the cloud, I think it’s hot air. Because at the end of the day it’s a marketing term,” he said. “If you pull it to pieces what is it? It’s physical network, it’s physical storage and physical servers with some software. It’s just a big data centre, and the real question is, how do you get access to it, and how’s it copied from one place to another?”

Rather than simply buying equipment because vendors are telling them that they need it, CTOs need to start looking at their purchasing decisions within the wider context of their organisation, and considering how these technologies will affect their bottom line in the long run.