Scale-up storage remains predominant in the enterprise only because most business applications were built to run on scale-up architecture. But as demand for big data applications grows, enterprises will increasingly look to adopt a scale-out model.

So says Jason Collier, CTO of Scale Computing, who was recently in London for the UK launch of the company's HC3 “datacentre-in-a-box” solution. According to Collier, applications written for the enterprise are not designed to take advantage of big scale-out infrastructure.

“A lot of enterprises are still running Microsoft apps, the SAPs, the Oracles, that are engineered to run on what they're running on today, but that will change,” he said.

“Taking unstructured data in an organisation and running analytics on it – that cannot be done on a traditional scale-up model. The way the data structure's laid out, there's not enough compute horsepower that you can hook up through a single array to pull off that kind of analytics.

“When those applications do get built for doing analysis of unstructured data, that's when you're going to see that application shift in the enterprise adoption of scale-out.”

Collier said that organisations working in the field of high-performance computing (HPC) have relied on scale-out storage for years, because they require a certain level of flexibility and cannot risk having a single point of failure.

Web 2.0 companies like Facebook and Google, as well as cloud providers like Amazon and Rackspace, all run scale-out architectures on commodity hardware. Because the data is distributed, it is both very safe and quick to access.

Enterprises, meanwhile, are becoming technology laggards: they continue to use traditional scale-up systems from the likes of EMC and Hitachi Data Systems (HDS) even as their data volumes grow massively.

“It is going to be very cost inefficient when you compare that to a stack of commodity hardware running the cheapest drives you can sling behind them, but still having the resiliency where you can lose racks and lose data centres and still have availability of that data,” said Collier.

Some companies are looking to the cloud as a way to handle their big data needs without having to invest in massive storage arrays, but Collier warned that the feasibility of cloud computing is entirely dependent upon the application.

“The access to the data, the security around the data, those are things that are still concerns for enterprises,” he said.

“The large cloud providers are not set up to provide large instances to run applications in. They're set up basically for test and development. So most of the things that go on inside Amazon, Rackspace and the other cloud providers are basically spinning up small instances to run test applications on and spinning them down.”

Collier said that large enterprises will continue to invest in their own storage arrays because most of them are not yet ready to fully virtualise their applications.

“It's one of the reasons why VMware and some of the other virtualisation companies aren't growing as fast as they have been. It's because they've saturated the test and dev in these enterprises but they haven't been able to move them to production.”

In the mid-market, however, organisations will buy virtualisation not for test and development but for high availability of their production apps.

Virtualisation, in combination with scale-out storage, is the easiest way to ensure high availability, but a lot of mid-market businesses do not have the in-house expertise to do the work themselves.

“In SMB and mid-market, less than 10 percent have done full-on production-level virtualisation, and almost none of them have actually done any virtualisation whatsoever, and the primary driver is the complexity,” said Collier.

“They have the same needs as the enterprise, in that they need high availability for their applications, they need to be able to dynamically grow their business when they need to, but they don't have the virtualisation experts, they don't have SAN administrators, they don't have networking guys with that level of expertise.”

With HC3, Scale Computing hopes to solve this problem by providing a “hyperconverged” solution that combines servers, storage, and virtualisation in a fully integrated appliance.

“Scale-out for us is not a selling feature. What comes with the scale-out architecture is an inherent level of high availability,” said Collier.

“The reality of it is it also gives you very granular scalability. If you wanted to scale an HDS system, or EMC for that matter, in a scale-up architecture, you have to forklift that out, bring in a bigger one and migrate all your data.”

Because HC3 is a scale-out solution, additional compute or storage resources can be added to a cluster within minutes, with applications and data failing over between nodes in the event of equipment failure.

“With standard converged architectures you're still managing each individual component. The complexity doesn't go away,” said Collier. “With our hyperconverged solution there is no separation of the servers and the storage and the virtualisation stack, it's all one piece.”

Scale Computing claims that HC3 allows organisations with 500 or fewer employees, and IT departments of five or fewer staff, to configure and deploy VMs without the planning, troubleshooting and reconfiguration exercises that can consume as much as 25 to 50 percent of IT administrator time in the mid-market.

“There are several ways you can build a car. You can go out and get tires, a transmission, buy an engine, get a book and learn how to put it all together – that's basically how infrastructure is done today. Or you can go out and buy a Mini Cooper.

"We're selling Mini Coopers," concluded Collier.