This is the second part of a two-part article. The first part was published yesterday, and you can find it here.

If Joe did this, nice things would happen. By simply grooming the T&D copies out of the production systems, Joe frees up a lot of space, improves performance and lowers all the other associated costs. Joe just took a universal pain in the rump away from the production IT staff and made it disappear. The T&D folks have a consistent schedule to get fresh whole data sets to play with. The security people are happy that there are fewer copies of stuff everywhere. The backup guy is happy that the extra stuff isn't killing his systems. The finance guy is happy because the cost of the T&D infrastructure is an order of magnitude lower than the production stuff. The licensing costs alone for running the applications and database on bigger machines are huge, and now they will be pushed lower or even go away.

If Joe did this, he might also figure out that not only should XYZ keep T&D data off of production, but that even the production data itself is "groomable." Ninety per cent of the data that makes up production is fixed content, or data that isn't going to change. It is no longer "dynamic," and as such, the same questions can be asked of it: If it isn't going to change, should we still back it up the same way? Should we still have four primary copies of it? Shouldn't we put it on infrastructure that has attributes that are more aligned with the current state of that data?

If the data isn't changing but the attributes are, then continuing to do things the same way is illogical. If, after some period of time, access to a certain piece of formerly transactional data went from frequent to never, why wouldn't we put it through the same exercise that we did with the T&D data? If we moved that data out of "production" physically but maintained its place logically (i.e., we could still access it in the same way if required, but it might take longer to show up), we could then do some really interesting things. By delineating our production data into "dynamic" and "static" (fixed) buckets under a single consistent, logical view, we would change things forever. Our static data would have a different life cycle -- with different attribute requirements as time (or whatever metric) moved on. We would move it out of our "Tier 1," outrageously expensive gear and onto much lower-cost gear, which would mean that our Tier 1 storage would perform at optimal levels all of the time.
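The dynamic/static split described above is simple enough to sketch in a few lines. This is purely an illustration: the 90-day threshold, the dataset names and the tier labels are all hypothetical placeholders, not anything a real environment would ship with.

```python
from datetime import datetime, timedelta

# Hypothetical policy threshold -- a real value would come from your
# own access patterns, not from this sketch.
STATIC_AFTER_DAYS = 90  # no access in 90 days => treat as static

def classify(last_access: datetime, now: datetime) -> str:
    """Bucket a data set as 'dynamic' or 'static' by access recency."""
    if now - last_access > timedelta(days=STATIC_AFTER_DAYS):
        return "static"
    return "dynamic"

def target_tier(bucket: str) -> str:
    """Map a bucket to a (hypothetical) storage tier name."""
    return {"dynamic": "tier1", "static": "lowcost"}[bucket]

now = datetime(2007, 6, 1)
datasets = {
    "orders_2007q2": datetime(2007, 5, 30),  # touched two days ago
    "orders_2005":   datetime(2006, 1, 15),  # untouched for over a year
}
# Build a migration plan: recently touched data stays on Tier 1,
# long-idle data is a candidate for the low-cost tier.
plan = {name: target_tier(classify(ts, now)) for name, ts in datasets.items()}
# plan == {"orders_2007q2": "tier1", "orders_2005": "lowcost"}
```

The point of the single logical view is that nothing in this plan changes how applications address the data -- only where it physically lives.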

In theory, if our dynamic-to-static flow was predictable, we might never again have to buy another piece of hardware or upgrade another license for our production systems. We would not only move data off of our most expensive, mission-critical stuff; we would change the attribute requirements of that data when we did.

For example, we might keep four primary mirrors of that data while it is dynamic, and back those up every hour to a disk target, and replicate those targets every eight hours off-site, etc. But we'd stop doing that once the data became static and we moved to the "static production" state. Perhaps we would make sure we had one mirror locally and one online backup locally, and another copy at the DR site, with three "oh no" copies on tape. Then we'd never back it up again, because what would be the point?
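The before-and-after protection scheme in the example above amounts to a small policy table keyed on lifecycle state. A minimal sketch, with the field names invented for illustration and the numbers taken straight from the example:

```python
# Hypothetical protection policies for the two lifecycle states the
# article describes. Field names are illustrative, not a real product's.
POLICY = {
    "dynamic": {
        "primary_mirrors": 4,            # four primary mirrors
        "backup_interval_hours": 1,      # back up to disk every hour
        "offsite_replication_hours": 8,  # replicate targets off-site
        "recurring_backups": True,
    },
    "static": {
        "primary_mirrors": 1,            # one mirror locally
        "online_backups_local": 1,       # one online backup locally
        "dr_copies": 1,                  # one copy at the DR site
        "tape_copies": 3,                # three "oh no" copies on tape
        "recurring_backups": False,      # then never back it up again
    },
}

def protection_for(state: str) -> dict:
    """Look up the protection policy for a lifecycle state."""
    return POLICY[state]
```

Once data crosses into the static bucket, the expensive recurring work simply stops -- that is where the power, cooling and licensing savings come from.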

If you did that, you'd save a lot more than 25 per cent of your power and cooling budget. You would save so much money, you would probably affect the earnings per share of your stock. You might want to negotiate a bonus upfront!

Vendors, who are naturally opposed to such intelligence unless they're the ones getting all of your money, are starting to wake up. The fact is, the very nature of data is changing, so while just jamming more of the same down your throat might help vendors in the short term, eventually it will cause them more problems because you won't be able to ingest any more stuff from them. They are already seeing it. Those that learn to cannibalise themselves are the ones that survive long term. The principle of adaptability in Darwinism works.

IBM bought Softek to do things like this -- or at least some of them (see "IBM, Hitachi buyouts could destroy good products"). IBM sits at the very core of this problem -- it is the mainframe. By providing customers with a way to groom production systems, it takes food out of its own mouth. There have been heated arguments inside the Big Blue machine for just this reason. The bet is that while it may push off mainframe revenue, the company will pick it up in other areas.

EMC owns tons of the storage market attached to those mainframes. Oracle gets a big chunk of that pie too. By helping customers groom their production worlds and create new efficiencies, IBM takes money away from EMC and Oracle -- and gives itself a shot to compete for that storage business, and the data management functions that Oracle provided. Network Appliance has been working with Solix to pull things out of those same environments and have them land on NetApp boxes. Not only is a NetApp box a heck of a lot less money than a Tier 1 mega-array, but once the data is there, people are astounded at how easy it becomes to do really useful things.

One bigwig at a Fortune 1,000 firm told me that his company can now create zero-footprint copies of its production data instantly using NetApp's writable Snapshot, which lets it do things that weren't even remotely possible before. "It was like we hired four more people with the time we saved, and now things that took days take seconds," he said. Most folks really can't afford to stop production to do backups or run reports, but until now they've had no choice. Now they do -- and they can do it securely.

EMC won't like losing the core disk business, but if it can take the server dollars and the application/database dollars, and get the customer to see the light, it has a whole new opportunity to sell the other thousand things it has in its bag. If it's fixed content, why not let EMC-owned Documentum manage it? Who needs Oracle?

It's sort of like being a stock trader: You don't care whether it's up or down, only that a transaction is occurring. If you are an "emerging" player, such as Isilon or Pillar, this is an opportunity to shine. If you are a little guy trying to displace Symantec's NetBackup, this is your chance. Real change opens doors.

Process changes of this magnitude are truly game-changing -- for IT and vendors alike. It tends to take a massive event to get either side to move out of its comfort zone, however, and not being able to buy any more power might just be that event. Once something like that occurs and people are forced to look at new ways of doing things, then Pandora's box might fly open. With the lights turned on, people can see. Sometimes things don't look as good with the lights on -- just ask my wife.

Send me your questions -- about anything, really -- to [email protected].

Steve Duplessie founded Enterprise Strategy Group in 1999 and has become one of the most recognised voices in the IT world. He is a regularly featured speaker at shows such as Storage Networking World, where he takes on what's good, bad -- and more importantly -- what's next. For more of Steve's insights, read his blogs.