Hadoop is all the rage, it seems. With more than 150 enterprises of various sizes using it, including major companies such as JPMorgan Chase, eBay and Yahoo, it may seem inevitable that the open source Big Data framework will land in your shop, too.

But before rushing in, make sure you know what you're signing up for. Using Hadoop requires training and a level of analytics expertise that not all companies have yet, customers and industry analysts say. And it's still a very young market: a number of Hadoop vendors are duking it out with competing distributions, including cloud-based offerings.

Most importantly perhaps, don't buy into the hype. Forrester Research analyst James Kobielus points out that only 1% of US enterprises are using Hadoop in production environments. "That will double or triple in the coming year," he expects, but caution is still called for, as with any up-and-coming technology.

To be sure, Hadoop has advantages over traditional database management systems, most notably its ability to handle structured data of the kind found in relational databases alongside unstructured information such as video, and to handle enormous volumes of it. The system can also scale out across commodity servers with a minimum of fuss and bother.
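To make the programming model concrete, here is a minimal sketch of the classic word-count job written against the standard org.apache.hadoop.mapreduce API. The input and output paths are placeholders, and the class names are illustrative; a real cluster would need the usual Hadoop client libraries on the classpath. Hadoop splits the input files into blocks spread across the cluster, runs the mapper on each block in parallel, then shuffles and sums the per-word counts in the reducer.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: runs in parallel on each block of the input files spread across the cluster.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);   // emit (word, 1) for every token
            }
        }
    }

    // Reducer: Hadoop groups all counts for the same word and sums them.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // optional local aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // e.g. an HDFS directory of log files
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // results directory, must not yet exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The same pattern, map over distributed blocks and reduce the results, is what lets a job grow simply by pointing it at more machines.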

eBay, the online global marketplace, stores 9 petabytes of data: structured data on clusters from Teradata and unstructured data on Hadoop-based clusters running on "thousands" of nodes, according to Hugh Williams, the company's vice president of experience, search and platforms.

"Hadoop has really changed the landscape for us," he says. "You can run lots of different jobs of different types on the same hardware. The world pre-Hadoop was fairly inflexible that way."

"You can make full use of a cluster in a way that's different from the way the last user used it," Williams explains. "It allows you to create innovation with very little barrier to entry. That's pretty powerful."

Scaling up, and up

One early Hadoop adopter, Concurrent, sells video streaming systems and also stores and analyses huge quantities of video data for its customers. To better cope with the ever-rising volume of data it processes, Concurrent started using Cloudera's CDH distribution of Hadoop two years ago.

"Hadoop is the iron hammer we use for taking down big data problems," says William Lazzaro, Concurrent's director of engineering. "It allows us to take in and process large amounts of data in a short amount of time."

One Concurrent division collects and stores consumer statistics about video. That's where Hadoop comes to the rescue, Lazzaro says. "We have one customer now that is generating and storing three billion records a month. We expect at full rollout in the next three months that it will be 10 billion records a month."

Two key limitations held Concurrent back in the past: traditional relational databases can't handle unstructured data such as video, and the amount of data to be processed and stored was growing exponentially. "My customers want to keep their data for four to five years," Lazzaro explains. "And when they're generating one petabyte a day, that can be a big data problem."

With Hadoop, Concurrent engineers found that they could handle the growing needs of their clients, he says. "During testing they tried processing two billion records a day for the customer, and by adding another server to the cluster we found we could complete what they needed and that it scaled immediately," Lazzaro says.

For comparison, the company ran the same tests using traditional databases. One of Hadoop's key benefits, Lazzaro says, was that additional hardware could be added easily and quickly as needed, and because the software is open source, doing so incurred no extra licensing fees. "That became a differentiator," Lazzaro says.

Another Hadoop user, life sciences and genomics company NextBio, works on projects involving huge data sets for human gene sequencing and related scientific research.

"We bring in all kinds of genomics data, then curate it, enrich it and compare it with other data sets" using Hadoop, says Satnam Alag, vice president of engineering for NextBio. "It allows mass analytics on huge amounts of public data" for their customers, which range from pharmaceutical companies to academic researchers. NextBio uses a Hadoop distribution from MapR.

A typical full genome sequence can contain 120GB to 150GB of compressed data and requires about half a terabyte of storage for processing, he says. In the past it would take three days to analyse; with 30 to 40 machines running Hadoop, NextBio's staff can now do it in three to four hours. "For any application that has to make use of this data, this makes a big difference," Alag says.

Another big advantage is that he can keep scaling the system as needed simply by adding more nodes. "Without Hadoop, scaling would be challenging and costly," he says. This so-called horizontal scaling, adding more nodes of commodity hardware to the Hadoop cluster, is a "very cost effective way of scaling our system," Alag explains. The Hadoop framework also handles node failures in the cluster automatically.
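As a rough illustration of why a failed node is not a crisis, here is a minimal, hypothetical sketch of a client writing a file into HDFS through the stock org.apache.hadoop.fs API; the file path and the replication factor are illustrative assumptions, not NextBio's actual configuration. Because HDFS keeps several copies of every block on different machines, losing one commodity node costs no data, and the framework reruns any tasks that were on it elsewhere.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask HDFS to keep three copies of every block (an assumed, typical setting).
        // With replicas spread across the cluster, one node can die without losing data.
        conf.set("dfs.replication", "3");

        FileSystem fs = FileSystem.get(conf);   // connects to the cluster named in core-site.xml
        Path out = new Path("/data/genomes/sample-001.fastq.gz");   // hypothetical path
        try (FSDataOutputStream stream = fs.create(out)) {
            stream.writeBytes("...");   // application data would be streamed here
        }
    }
}

Growing the cluster works the same way in reverse: new commodity machines register with the master services and immediately start receiving blocks and tasks, which is the horizontal scaling Alag describes.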

That's dramatically changed the way the company can expand its computing power to meet its needs, he says. "We don't want to spend millions of dollars on infrastructure. We don't have that kind of money available."