Topio has a lot of very bright people working for it; IT rocket scientists, near enough. But it also has Chris Hyrne, and he's in marketing. He reckons that business has a traditional and long-lasting headache called data migration, for which Topio, along with other ISVs, has a new treatment. Let's call it a software aspirin.

The Topio software aspirin delivers any-to-any data migration over any distance. So there is no need for business to have a data migration headache at all.

But Chris Hyrne, the aforesaid Topio marketing VP, says that business still does suffer this needless headache.

TW: Set the scene for us Chris.
CH: In a recent data migration study I was mildly surprised to read that “85 percent of migration projects fall short of their goals because of problems with technical compatibility, unplanned downtime, and data loss or corruption.” This statistic is somewhat depressing because migration has been around as long as the IT industry has been with us.

TW: But why were you surprised?
CH: One would think that by now customers would have demanded, or vendors delivered, solutions alleviating the pain. I say only mildly surprised, though.

CH: In the open systems quest for price performance, distributed complexity in IT has grown substantially; now IT departments are looking to consolidate back to a reasonable balance. Storage vendors, looking to protect their installed base, have by-and-large offered proprietary migration solutions.

TW: So?
CH: This has left a void in the market that is being filled by ISVs with more flexible server-based offerings. These software-only approaches can reduce the project times and compatibility hassles of migration across heterogeneous environments. When considering these alternatives, however, three criteria should be kept in mind: platform coverage, network utilisation, and recovery performance.

TW: Topio is one of these data migration ISVs and, it goes without saying really, it reckons it knows what it's talking about on these three topics. What Chris Hyrne wanted to do was tell us what businesses should look for in each of these areas. Starting with platform coverage?
CH: In general, the broader the platform coverage the better, but make sure you inspect a vendor's prerequisites. Some require specific application software, file systems or volume managers; make sure the solution supports different storage architectures as well. Also, define storage source and target mappings based on all the data required for application recovery. If you don't, you could leave necessary data behind.

TW: Could you give us an example please?
CH: Exchange application data can be SAN-resident, but system disk data may be on the server's internal drive. Both are required for full, and quick, recovery.
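Hyrne's "left behind" warning can be made mechanical. Here is a minimal Python sketch, not anything Topio ships, of checking that a migration plan covers every volume an application needs for recovery; the function name and volume labels are hypothetical:

```python
def find_unmapped(required, plan):
    """Return required (app, volume) pairs missing from the migration plan."""
    mapped = {(app, vol) for app, vol, _target in plan}
    return [(app, vol) for app, vols in required.items()
            for vol in vols if (app, vol) not in mapped]

# Exchange needs both its SAN-resident database and the server's
# internal system disk, as in the example above.
required = {"exchange": ["san_db_lun", "internal_system_disk"]}
plan = [("exchange", "san_db_lun", "target_lun_7")]  # system disk forgotten
print(find_unmapped(required, plan))
# -> [('exchange', 'internal_system_disk')]
```

A check like this run before migration, rather than a surprise at recovery time, is the point of defining mappings from the application's full data set outwards.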

TW: Okay. What about network utilisation?
CH: The time needed to synchronise source data with target data over the network can be prohibitive. I'd recommend that you plan to use data compression/reduction as a means of easing this burden. Such features should be a standard component of a data migration product; they can reduce the network load by a factor of three to ten. But beware of vendor claims: performance varies widely depending on disk write activity. Also, consider offline initial synchronisation (OLS).
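To see why such ratios need testing against your own data, here is a small Python sketch using the standard `zlib` library; the ratio it reports is specific to the artificial, highly repetitive input and says nothing about real workloads:

```python
import zlib

def compress_block(block: bytes, level: int = 6) -> bytes:
    """Compress one replication block before it crosses the WAN."""
    return zlib.compress(block, level)

# Reduction depends entirely on the data: repetitive database pages
# shrink dramatically, already-compressed media barely at all.
repetitive = b"customer_record;" * 4096
ratio = len(repetitive) / len(compress_block(repetitive))
print(f"{ratio:.0f}x reduction on repetitive data")
```

Already-compressed or encrypted volumes will show almost no reduction, which is one reason vendor figures vary so widely with disk write activity.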

CH: This nifty feature is supported by a few enterprise-class offerings. With OLS you create a baseline point-in-time copy and ship it to the target site, where it is merged with subsequently replicated I/Os, creating a current, consistent image. This bypasses network synchronisation, saving you much time and expense.
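Conceptually, the OLS merge step is a replay of the replicated write log on top of the shipped baseline. A toy Python sketch of the general idea, not Topio's implementation, with a byte array standing in for a volume:

```python
def merge_baseline(baseline: bytearray, io_log):
    """Apply replicated writes (offset, data) on top of a shipped
    point-in-time baseline, yielding a current, consistent image."""
    for offset, data in io_log:  # apply in capture order
        baseline[offset:offset + len(data)] = data
    return baseline

# Baseline copy shipped offline; two writes replicated over the wire since.
image = bytearray(b"AAAAAAAA")
log = [(0, b"BB"), (4, b"CC")]
print(bytes(merge_baseline(image, log)))  # b'BBAACCAA'
```

Only the write log crosses the network, which is why OLS sidesteps the cost of synchronising the full data set over the wire.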

TW: Your third point was recovery performance.
CH: No matter what your strategy for coverage and network utilisation, if you can’t quickly recover data and seamlessly cut-over to the new target system, interest in your project will increase exponentially from all quarters. You may well face a lot of pressure to get things moving faster.

TW: We think you are being ironic here. Panic-stricken threats around job retention are more likely. What can you do to preserve your career?
CH: First, to help avoid this problem, make sure your migration solution can recover from outages during replication, while still maintaining target data consistency. You don’t want a nasty surprise at the finish line, and you certainly don’t want to start over every time there’s a transient system or network outage during migration.

Secondly, do test before cut-over. Some solutions support testing in parallel with migration without production downtime. This step can save countless headaches at cut-over. It is to be warmly recommended.

Lastly, make sure you can recover applications with dependent data on multiple servers. For example, many companies run Exchange on one server and the related Active Directory on another. These, and other distributed applications such as SAP, demand recovery of multiple environments to the same point-in-time. For these requirements, data migration products should be evaluated with great care.
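The multi-server requirement amounts to stamping one shared recovery marker while every server is briefly quiesced. A toy Python sketch of that coordination, with two dicts and illustrative write counters standing in for real servers:

```python
def consistent_marker(servers):
    """Briefly quiesce writes on every server, stamp one shared recovery
    marker, then resume, so all replicas recover to the same point-in-time."""
    for s in servers:                 # phase 1: pause writes everywhere
        s["quiesced"] = True
    marker = max(s["last_write"] for s in servers)
    for s in servers:                 # phase 2: stamp and resume
        s["marker"] = marker
        s["quiesced"] = False
    return marker

# Exchange and its Active Directory live on different servers but must
# share one recovery point.
exchange = {"last_write": 1041, "quiesced": False}
directory = {"last_write": 1037, "quiesced": False}
marker = consistent_marker([exchange, directory])
print(marker)  # both servers now carry the same marker, 1041
```

Recovering both environments to that single marker is what keeps Exchange and its directory in step; recover each to its own latest point and the application may not start.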

TW: We think Topio is saying that data migrations need careful planning and execution. You need to understand the interdependencies between parts of your data, which may be located in different places, and plan a migration that includes all the data you need. Clearly, data migration can be complex, time-consuming, and costly. However, Topio's Hyrne says, alternative software technologies are emerging to challenge the incumbents. With careful upfront evaluation and planning, these solutions can cut substantial cost, time, and risk from data migration projects.