Virtualisation is widely recognised as a game-changing technology, helping organisations reduce their capital and operational costs and improve underlying business processes. But the impact this technology is having on the storage layer at the bottom of the stack is often overlooked.

This is something of a bugbear for Michael Heffernan, chief technology officer for virtualisation at Hitachi Data Systems. Heffernan explained that hypervisors have allowed organisations to pile more and more virtual machines (VMs) onto their servers, but as the load on these servers grows, storage is coming under increasing pressure.

“You only have two ways to scale: you scale deep, by buying a more powerful storage array, or you scale out, by adding more and more blocks and connecting them with a network,” Heffernan told Techworld. “Virtualisation technologies came from mainframe, and the thing about mainframe is you've got to have really big enterprise storage at the back end that scales deep.”

In spite of this, many companies are scaling out, because storage vendors are still predominantly producing low-end products that are designed for the SMB market. However, while adding blocks in a modular fashion might make sense for small businesses, it is not sustainable for massive enterprises with 20,000 VMs and 250 storage arrays, according to Heffernan.

With scale-out storage, VMs need to be copied between the different arrays, but individual VMs can be as large as 2TB, so transferring them across a network can be very slow. Multiplied across thousands of VMs, the approach often becomes unworkable.
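To put that 2TB figure in perspective, a rough back-of-envelope sketch shows why copying VMs across a network scales so badly. The link speeds and efficiency factor below are illustrative assumptions, not numbers quoted by Heffernan:

```python
# Back-of-envelope estimate of how long it takes to copy a 2TB VM
# between storage arrays over a network link. All figures below are
# illustrative assumptions, not numbers from the article's source.

VM_SIZE_TB = 2                          # VM size mentioned in the article
VM_SIZE_BITS = VM_SIZE_TB * 1e12 * 8    # terabytes -> bits

# Assumed raw link speeds (bits per second) and a conservative 70%
# effective throughput to account for protocol and storage overhead.
LINKS = {"1 GbE": 1e9, "10 GbE": 10e9, "40 GbE": 40e9}
EFFICIENCY = 0.7

for name, raw_bps in LINKS.items():
    seconds = VM_SIZE_BITS / (raw_bps * EFFICIENCY)
    print(f"{name}: ~{seconds / 3600:.1f} hours per 2TB VM")
```

Even on a 10Gbit/s link, a single 2TB VM takes well over half an hour to move under these assumptions; repeated across thousands of VMs, the network quickly becomes the bottleneck Heffernan describes.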

Big enterprises need to scale deep

Scaling deep rather than out, by adding storage capacity underneath a single controller and managing it all from one point, removes much of the cost and complexity, according to Heffernan. However, this approach requires educating the market – something that virtualisation vendors are reluctant to do.

“Companies like VMware and Red Hat can’t educate too much, because the more they educate, the more they’ll hurt their ecosystem,” he said. “Companies like NetApp will struggle because they don’t have the technology to support scale deep storage. So the virtualisation vendors are in a balancing act.”

The upshot is that networks are becoming overloaded, increasing the risk of downtime. In a virtualised or cloud environment, downtime can be catastrophic, because a single server crash can affect hundreds of applications.

“The biggest challenge with network is you’ve got bandwidth, and you can keep adding bandwidth but there’s a limit, and then it can break,” said Heffernan. “Telecommunications companies are struggling around the world because no one has got any money to upgrade the networks. They are creating a bottleneck.”

Heffernan said that one company struggling with the scale-out approach to enterprise storage is Hewlett-Packard. Despite recently buying storage vendor 3PAR, HP still partners with Hitachi for large-scale enterprise storage, and its P9500 storage array is simply a rebranded version of Hitachi's VSP enterprise storage array.

“HP have a bit of a challenge because, in all of the stuff that they've bought, from a storage perspective they still don't have the enterprise footprint. All of their products are small and medium business. So in the enterprise area they had to partner with Hitachi to get that part of the portfolio,” he said.

Having traditionally catered to a very niche market of military, finance and government organisations, Hitachi Data Systems is now seeing its customer base expand, due to the growth of virtualisation and Big Data. While Hitachi's storage products are expensive compared to scale-out storage offerings, Heffernan claims they are “bullet-proof”.