For our final interview with a data protection management (DPM) vendor we talked to Lindsay Morris, the founder of Servergraph.
For comparison purposes, may I suggest that you print these interviews, as they are quite long.
Here are the Servergraph responses to the same questions which we asked the others:-
TW: First, could you provide an overview of what Servergraph is, what it produces and its position in the market.
LM: Servergraph is an agentless, web-based backup reporting and automation tool that finds (and can even prevent) missed backups, finds wasted space, boosts the performance and reliability of backup servers, and creates chargeback and compliance reports. In general, we save storage administrators time, and enable them to do higher-level jobs.
Servergraph has been available for Tivoli Storage Manager since 1999; more recently we have added support for Veritas NetBackup and Legato NetWorker. The company has about 200 customers, mostly Fortune 500 companies, and was purchased by Rocket Software in 2005. We work closely with IBM / Tivoli, and with user groups to get feedback on new features they want, new problems they're addressing, etc.
We don't do a lot of advertising, but have had tremendous success in the marketplace. Our product tends to be adopted by the people who are doing the day-to-day work, because they see how it solves their problems.
TW: What is your view of the data protection reporting/management needs of your customers now and going forward?
LM: Our customers need three main things:
- help with the drudgework of finding and fixing missed backups. Highly skilled Storage Administrators should not be looking for mis-configured network cards and training end-users in how to use their backup clients. They should be doing restore testing, researching cost-savings technologies, and tuning the configuration of their backup systems to make them even more reliable and efficient - again, saving money for the organization.
- help making their backup systems more reliable. Many CIOs pay a premium for the "hands-off, lights-out, policy-driven" idea. And they should. But actually getting that to work, in a large, real-world organization, is a big problem.
- capacity planning and performance improvement. Backup systems that were put in place 5 or 10 years ago had no idea how explosive storage growth would be. Now they are maxing out storage capacity, network capacity, CPU capacity, and so on. In many cases, there are wasteful policies and procedures that can easily be changed to save huge amounts of money - these performance improvements are just hard to find. And even if you do find them and take advantage of them, you may still have genuine needs for greater capacity.
TW: Should a DPM reporter cover all data protection operations? E.g.: Snapshots, Mirrors, Replication (async and sync), Fixed content reference storage (like EMC Centera, HP RISS), backup to virtual tape libraries, backup - all backup software products, backup to tape autoloaders, backup to tape libraries, off-site electronic vaulting, archive to optical media and devices - e.g. UDO file archiving, e-mail archiving, database archiving, encrypting data, fixed content data to a content-addressed store or similar.
LM: Ideally, yes. But we shouldn't let far-reaching goals prevent us from solving immediate real-world problems.
There is just one essential problem we're all trying to deal with: many backups fail quietly. This is a huge problem! In 2002, the Gartner Group said that 40 to 50% of all backups are not recoverable - and nobody realizes it! So management goes along happily until restore time, then heads start to roll.
You have to understand that the term "data protection" is not really about data protection. It's just the latest phrase coined (along with "secondary storage management", "disaster recovery protection", etc.) to describe software that deals with this essential problem of ensuring that the backups are usable. So, while technically "data protection" might well mean all of the above, people looking for solutions should never lose sight of the one essential problem.
If a product solves some other problems, too, well, that's great. But pushing for a product that does everything leads to bad decisions, like a beautiful car with voice navigation and a first-rate sound system - that doesn't go anywhere.
And, from a different point of view: many of the data protection operations you mention are really in the domain of the backup software. For example, if the DPM software simply monitors the success or failure of the agents that deal with email archiving, then they are in fact covering the email archiving problem.
TW: Does your software product do this?
LM: Our product covers:-
- backup to virtual tape libraries
- backup - Tivoli Storage Manager, Veritas NetBackup, Legato NetWorker
- backup to tape autoloaders
- backup to tape libraries
- off-site electronic vaulting
- archive to optical media and devices - e.g. UDO
- file archiving
- e-mail archiving
- database archiving
And does not directly deal with:-
- Replication (async and sync)
- Fixed content reference storage (like EMC Centera, HP RISS)
- encrypting data
- fixed content data to a content-addressed store or similar.
However, indirectly, it does detect some problems in these areas. For example, one of our customers used it to detect when a mirror had not been created. The mount point for the mirror was always backed up successfully, even when the mirror creation had failed - but in that case it was a tiny backup instead of the 200GB it should have been. Our rules engine let them alert on this condition. So though we did not monitor the mirror directly, we could report on problems with it.
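The kind of size-anomaly rule described above can be sketched in a few lines. This is purely illustrative, assuming hypothetical names (`check_backup_size`, `expected_bytes`), not Servergraph's actual rules-engine API: a backup of a mirror's mount point can report "success" even when the mirror was never created, so the rule compares the backed-up size against what the backup should have been.

```python
def check_backup_size(node, backed_up_bytes, expected_bytes, tolerance=0.5):
    """Flag a 'successful' backup that is suspiciously small.

    A mount point whose 200GB mirror failed to be created still backs
    up without error -- just as a tiny backup. Success/failure status
    alone never catches this; comparing sizes does.
    """
    if backed_up_bytes < expected_bytes * tolerance:
        return (f"ALERT: {node} backed up {backed_up_bytes} bytes, "
                f"expected ~{expected_bytes}")
    return None  # size looks plausible

# A 4KB backup of a mount point that should hold ~200GB triggers the alert:
print(check_backup_size("mirror01", 4096, 200 * 2**30))
```

The tolerance threshold is a judgment call; a real rules engine would let the administrator tune it per node.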
TW: What is the product development strategy for your DPM product?
LM: We try to have a new release every quarter, and some interim releases as needed.
We try to balance our efforts to add to the product in three areas:
- Technical depth - and here we're so much deeper than anyone else, it perhaps should not be a primary focus. But we have a lively and involved user community, we listen to what they say, and when really good ideas come up, we bring them to life if we can.
- Cross-platform support - we already support the big three: Tivoli Storage Manager, Veritas NetBackup, and Legato NetWorker. For organizations with broader needs than that, a sister product, Servergraph Enterprise, offers greater breadth - Backup Exec, ARCserve, CommVault, etc. - without so much technical depth.
- And management appeal - we continually work to provide enterprise-wide reports that are genuinely actionable: for example, reports showing the top 10 nodes with the most wasted space.
TW: Does DPM have to evolve into uDPM (universal DPM) so as to cover all the electronic data protection activities of an enterprise?
LM: If you phrase it that way, then yes.
TW: Does it need an enterprise-wide data protection policy?
LM: Certainly, every organization should have clear policies regarding data protection - and, in my opinion, regularly scheduled restore tests.
But no sensible DPM solution can require perfection on the customer's part first. Our product often has to go into environments that are half-built, or have been inherited from the person who left last year, or that have been allowed to grow unmanaged. There are often no policies at all. And this is one of the best and highest uses of a DPM product: to point out the problem, and galvanize management into creating such policies before the audits occur.
TW: Does it need a standard interface (API) to all data protection hardware and software products rather than product-specific interfaces?
LM: That would make life easy, if there were such an interface. But I would expect the interface to be a least-common-denominator thing at first, and unable to deal with the complexities of individual backup products.
We leverage the information collected by the backup server. Products like Tivoli Storage Manager and Veritas NetBackup already have interfaces to all the hardware and software products they use, and can be made to collect detail data. For example, Tivoli collects data not only on the expected things - tape drives, backup sessions, and such - but it also knows the size of the local filesystems on each of the thousands of clients it backs up.
Using this, we can find the clients that are using 10 times or 100 times more space on backups than they use locally - a clear indication that something is terribly wrong. And this information is outside the bounds of what SMI-S is meant to do.
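The check described above can be sketched simply. This is an illustration of the idea, not Servergraph code; the function name and the tuple layout of the client data are assumptions:

```python
def find_bloated_clients(clients, ratio_threshold=10):
    """Flag clients whose backup storage dwarfs their local filesystems.

    clients: list of (name, local_bytes, backup_bytes) tuples, e.g. as
    gathered from the backup server's own records of client filesystem
    sizes and stored backup data. A client keeping 10x or 100x its
    local data in backups usually signals a broken retention policy.
    """
    flagged = []
    for name, local_bytes, backup_bytes in clients:
        if local_bytes > 0 and backup_bytes >= ratio_threshold * local_bytes:
            flagged.append((name, backup_bytes / local_bytes))
    # Worst offenders first
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

A report built on this is exactly the sort of "top 10 nodes with the most wasted space" list mentioned earlier: actionable, and derived entirely from data the backup server already collects.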
A standard SMI-S type interface would make everybody's life easier, but it isn't there yet, and we don't depend on it.
TW: If it does should this interface be the SNIA's SMI-S?
LM: SMI-S does look promising, and SNIA is the right group to push such a standard. I just hope that vendor wars don't cause competing "standards" to muddy the waters.