The 1,300 "citizens" of Colonial Williamsburg who work at the pre-Revolutionary War history museum might be living in the past. But the roughly 2TB of data and multimedia files they created last year to help serve tourists and history buffs is a modern-day challenge for the foundation's IT department.

"We could never get users to own up to doing any kind of life-cycle maintenance on their files… even though half of the data in their directions hadn't been used in more than 180 days," says Sean Maisey, manager of IT operations and engineering. “If we just allowed users to save files and not manage it in any way, we were on the road to needing another [2TB Network Appliance] FAS250 appliance every year.”

Maisey last year began addressing the problem by installing a file-area network (FAN), a newfangled counterpart to a storage-area network (SAN). Whereas SANs handle block-level data, such as from database management systems, FANs deal with unstructured data, such as Word files, spreadsheets, digital images and PDF files.

Market watcher Enterprise Strategy Group says as much as 80 percent of the data on corporate networks is unstructured, and that traditional means of managing it, such as adding individually managed network-attached storage (NAS) devices and file servers, have become inefficient and costly.

Brad O'Neill, senior analyst for the Taneja Group, says he coined the term FAN last year after finding "there was an increasing amount of confusion about which technology was available to solve the file management problem." EMC, Network Appliance and Brocade are among vendors that have embraced the term.

Customers seem to be embracing it, too. Taneja Group found recently that more than 60 percent of organisations surveyed said managing files was a top priority, and that more than half are either evaluating or deploying a FAN.

What makes a FAN?

File-area networks can consist of a host of technologies, processes and products, including:

- NAS/SAN gateways, NAS devices and file servers.
- A method of presenting files using the Unix/Linux Network File System (NFS) or Microsoft Common Internet File System (CIFS) protocols.
- The ability to aggregate file systems and organise them as a single namespace or virtualised pool of information.
- Management software for classification, virtualisation and migration.

Some FANs also involve new devices, such as file virtualisation appliances. Colonial Williamsburg installed an Acopia ARX box in front of its Network Appliance and Windows file servers, enabling centralised management of files and pooling of storage capacity. The setup allows for migrating less frequently accessed files to less-expensive storage systems.
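The tiering policy described above, moving files that have gone cold onto cheaper storage, can be sketched in a few lines. This is an illustrative assumption about how such a policy works, not the Acopia ARX's actual mechanism; the appliance additionally keeps the client-visible path stable while the bytes move, which this sketch does not.

```python
import os
import shutil
import time

def migrate_cold_files(fast_tier, cheap_tier, days=180):
    """Move files not accessed in `days` days from fast_tier to cheap_tier,
    preserving their relative directory layout. Returns the relative paths moved."""
    cutoff = time.time() - days * 86400
    moved = []
    for dirpath, _dirnames, filenames in os.walk(fast_tier):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime >= cutoff:
                continue  # recently accessed; leave it on fast storage
            rel = os.path.relpath(src, fast_tier)
            dst = os.path.join(cheap_tier, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
            moved.append(rel)
    return moved
```

The point of putting a virtualisation appliance in front of the filers is precisely that clients never have to know a migration like this happened.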

“Maintenance was difficult with [separately managed filers],” Maisey says. “For example, with the Network Appliance filer, if we wanted to upgrade the operating system, we needed to rebuild all our volumes. We needed to use tape to move everything off and then on again — it was an onerous process that we avoided doing whenever possible. Acopia lets us move data around dynamically without huge outage windows.”

Maisey is using three Network Appliance FAS250 file servers with 2TB of capacity each to store home directories and shared departmental folders. Each server is mirrored to a Network Appliance NearStore R200 device located across Colonial Williamsburg’s campus.

Early adopters

Hubbard Benedict, storage services team lead and systems engineer at digital media company Corbis in Seattle, has more than 100TB of images located on an array of storage devices. He deployed a FAN based on Brocade's Tapestry StorageX software, Windows and Unix file servers and EMC's Celerra NAS array, which together let him replicate and migrate data from one device to another.

“These images needed to be kept on appropriate storage platforms — what’s appropriate for one type of rendering isn’t necessarily appropriate for another,” he says. “So pretty soon we had multiple vendors and multiple storage platforms serving up different types of renderings of the same images, and managing that and moving it around has been tedious.”

Corbis has two different levels of service and two distributed file systems for each, Benedict says. “Some files that directly support the Web site or customer fulfillment have a higher level of availability. The raw scans, while being extremely important, are stored on a different type of storage.”

Jim Poehlman, chief information technologist for wireless network processor and software company Ubicom in Sunnyvale, California, says moving to a FAN has been driven largely by the need to keep storage systems up and running. He used to have to take parts of the network down for as many as six hours to migrate data from one array to another.

“It was critical that I extend the disk space without impacting any of the users or the jobs they were working on,” he says. “Any downtime would have cost us anywhere from $500,000 to $1 million and a potential revenue loss if we could not deliver products on time.”

Many of the products Poehlman looked at required that client software be placed on each server or NAS appliance, but he didn’t want his staff spending time on that.

Ubicom opted for a NeoPath Networks appliance to virtualise disk space across arrays.

“It took me a half hour to install and migrate data from one filer to another," Poehlman says. "Users were online at the time writing and reading files — they didn't see any hit in performance or any problems at all.”