Michael Glenn was wasting storage, and he knew it.

A document scanning project had created a single 1.4TB LUN of old court records. Glenn, senior IT manager for a US court, knew that only 6% of the files had been accessed in the past year and that the rest shouldn't be sitting on expensive Fibre Channel disk.

His challenge was determining which 94% he could move at any given time to slower, less expensive Serial ATA drives. It turned out he already had the software he needed: Dynamic Storage Technology (DST), part of his Novell Open Enterprise Server 2 implementation, which creates and automatically executes file-movement policies based on when files were last accessed.

After spending a week tweaking the configurations last spring, Glenn says, "I just let it alone. It's been working great," freeing up at least a dozen Fibre Channel drives. By reducing the number of active files, he also cut his daily backup time from 14 hours to 47 minutes.

Installation was simple, and configuration required just four steps: migrating the old LUN to the SATA drives, renaming it, creating a smaller LUN on Fibre Channel to replace it, and designating the new LUN as the primary volume and the old one as its shadow. "Then I started setting up the migration rules," Glenn says. There was no extra cost for DST, he adds, and he estimates he saved about $140,000 through reduced demand for disk drives and power.

Glenn is one of the early beneficiaries of a new technology called automated data tiering, which automates not just the movement of data, but also the task of monitoring how data is being used and determining which data should be on which type of storage. Such automated tiering isn't yet in the mainstream because few vendors offer the technology and it hasn't been proved to work in very high-end, transaction-intensive environments. Also, it's typically used only within a single vendor's arrays or file system or supports only a limited number of storage protocols or topologies. But for organisations with simpler needs, the automated tiering tools available today are more than good enough.

How Tiering Became Automated

"Tiering" means moving data among various types of storage media as demand for it rises or falls. Moving older or less frequently accessed data to slower, less expensive storage such as SATA drives or even tape can reduce hardware costs, while putting the most frequently accessed or most important data on faster, more expensive Fibre Channel drives or even solid-state drives (SSD) boosts performance. Finally, automating the entire process prevents it from getting bogged down in the data classification and policy-setting that hampered earlier "tiering" efforts such as information lifecycle management (ILM).
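Products such as DST implement this policy inside the file system, but the underlying logic is easy to picture. The sketch below is a generic, hypothetical illustration (not Novell's implementation): it walks a fast "primary" volume and demotes any file whose last-access time is older than a cutoff to a "shadow" path on cheaper disk, preserving the directory layout. The function and path names are invented for the example.

```python
import os
import shutil
import time

SECONDS_PER_DAY = 86400

def is_cold(path, threshold_days, now=None):
    """A file is 'cold' if it hasn't been accessed within the threshold."""
    now = now if now is not None else time.time()
    return now - os.stat(path).st_atime > threshold_days * SECONDS_PER_DAY

def demote_cold_files(primary, shadow, threshold_days=365):
    """Move cold files from the primary (fast, expensive) volume to the
    shadow (slow, cheap) volume, keeping the relative directory layout
    so the data remains findable. Returns the relocated relative paths."""
    moved = []
    for root, _dirs, files in os.walk(primary):
        for name in files:
            src = os.path.join(root, name)
            if is_cold(src, threshold_days):
                rel = os.path.relpath(src, primary)
                dst = os.path.join(shadow, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                moved.append(rel)
    return moved
```

A real tiering product adds the pieces this sketch omits: transparent access to demoted files at their original paths, promotion of data back to fast disk when demand returns, and scheduling so scans don't compete with production I/O.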

Ready, Set, Implement?

Think your organisation is ready to tap into the benefits of automated data-tiering technologies? Consider these issues first:

  • Does it provide the mix of file- and block-level tiering you require?
  • Can you override the automatic tiering for performance or data-retrieval reasons?
  • Does it support features such as thin provisioning or deduplication if you're using them?
  • Does it, or will it, support sub-LUN tiering?
  • Does the vendor provide a growth path for further automation?