Dan Pollack of AOL had a big data problem: 8 petabytes of unstructured data, to be exact, that his clients had to access and manage, and that he had to keep available at all times.
Pollack, operations architect for the Internet service provider in Dulles, Va., manages a traditional storage-area network (SAN), which has about 3,000 connected servers and uses 10,000 Fibre Channel storage ports, to store the transactional data generated by line-of-business applications. But it was the 8 petabytes of unstructured data stored on 1,000 file servers across his enterprise that was causing him fits.
Client applications need to access this data continually, whether for archival or sharing purposes. Pollack needed a way to manage the data and keep it accessible during upgrades, patches and maintenance. He also wanted to keep it on lower-cost storage than his SAN, because while the data is important, it doesn't have the business-criticality of the data stored on the SAN.
“While we found that [a SAN] is good for a lot of critical applications, there's another class of applications that require very high capacity but very low-cost storage in order to not go broke trying to provide services to your customers,” Pollack says.
Pollack started to investigate storage systems that would give him high-capacity, less-expensive storage and the uptime he required. He found a number of cluster-based network-attached storage (NAS) systems from the likes of Panasas, Isilon, PolyServe (now part of HP) and IBRIX. After careful consideration, Pollack chose IBRIX's Fusion.
“One of the things we liked about IBRIX over those other options is that they've re-implemented the interface as a file system as opposed to leveraging a traditional NAS protocol like NFS [the Network File System], CIFS [the Common Internet File System] or iSCSI,” Pollack says. “What that does is make it extremely understandable for our clients and easy to access.”
He also chose IBRIX over traditional NAS systems because he considered it to offer better availability.
“The large-scale NAS deployments are not as available as they ought to be,” Pollack says. “We have a long, checkered experience with large NAS deployments and very large systems that support customer-facing applications.”
“In order to avoid this availability problem, we wanted to go with something that was cluster-based and non-coupled,” he says.
IBRIX's Fusion is a software-only product that installs on each of the 1,000 file servers in Pollack's cluster. With Fusion, Pollack can use any storage media he wants, unlike systems such as Panasas's, which require the vendor's own storage hardware.
“With IBRIX we can select our own hardware at the lowest cost possible,” says Pollack, who is using direct-attached storage consisting of 12 700GB ATA drives per server.
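Those per-server figures make the cluster's raw capacity easy to sanity-check. A quick back-of-the-envelope calculation, using the numbers from the article and assuming decimal (marketing) units, shows how 1,000 such servers reach the 8-petabyte mark:

```python
# Figures from the article: 1,000 file servers, each with 12 x 700GB ATA drives.
DRIVES_PER_SERVER = 12
DRIVE_GB = 700
SERVERS = 1_000

raw_gb = DRIVES_PER_SERVER * DRIVE_GB * SERVERS  # total raw gigabytes
raw_pb = raw_gb / 1_000_000                      # decimal petabytes

print(f"{raw_pb:.1f} PB raw")  # 8.4 PB raw
```

That 8.4PB of raw disk lines up with the roughly 8 petabytes of unstructured data cited, before any overhead for redundancy or file-system metadata.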
As for uptime, when Pollack wants to do upgrades or maintenance, he can take down a single server in the cluster, perform the upgrade and then bring the node back up without disrupting the operations of the rest of the cluster.
“There may be a minimal data unavailability on whatever is offline but every other server is available,” Pollack says. “Any unavailability is a small part of the total complex and it doesn't affect the actual behavior of the rest of the complex.”
When Pollack’s cluster grows, he can add new nodes in without taking down the cluster. Data is balanced across the cluster as it is added.
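The maintenance pattern Pollack describes, taking one node offline at a time while the rest of the cluster keeps serving, can be sketched generically. This is not IBRIX's actual tooling or API; the `Node` class and `rolling_upgrade` helper are hypothetical stand-ins used only to illustrate the one-node-at-a-time idea:

```python
class Node:
    """Toy stand-in for one file server in the cluster (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.online = True
        self.version = 1

    def leave_cluster(self):
        self.online = False  # only this node's data is briefly unavailable

    def join_cluster(self):
        self.online = True   # node rejoins before the next one goes down


def rolling_upgrade(nodes, upgrade):
    """Upgrade each node in turn, so at most one node is ever offline."""
    for node in nodes:
        node.leave_cluster()
        upgrade(node)        # patch, firmware update, maintenance, etc.
        node.join_cluster()


cluster = [Node(f"fs{i}") for i in range(4)]
rolling_upgrade(cluster, lambda n: setattr(n, "version", 2))
print(all(n.online and n.version == 2 for n in cluster))  # True
```

The point of the pattern is the invariant in the loop: the window of unavailability is always limited to a single node, which matches Pollack's observation that an offline server is "a small part of the total complex."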
However, one thing is still missing from Pollack's wish list: the ability of the cluster to automatically redistribute data to the surviving nodes when a node fails.
“It's a requirement we gave to the IBRIX guys, and they should deliver it in the next couple of months,” Pollack says.