Network Appliance is a phenomenon. It emerged from the early years of network-attached storage (NAS) technology development as the pre-eminent filer company while lesser companies went to the wall. Then it focused on the enterprise and avoided NAS commodity hell, adding data protection features such as snapshots to its core operating system, Data ONTAP 7G, with its WAFL (Write Anywhere File Layout) technology. The O/S provides virtualised storage and has had iSCSI SAN capability added, and NetApp has expanded into the Fibre Channel SAN space.

It bought Decru to add a data encryption facility to its offering, though this is not integrated into Data ONTAP but provided as an in-line box, the DataFort, on a customer's network. It bought Spinnaker and has added clustering of its products via a separate version of Data ONTAP, termed GX. Then it bought Topio for that company's well-respected any-to-any replication technology. It has also added a de-duplication capability to Data ONTAP.

On the partnership front, NetApp works with Kazeon for content classification and indexing. There is also an OEM relationship whereby IBM resells NetApp systems as its N-Series products.

Until very recently, NetApp maintained a remarkably consistent revenue growth rate of above 20 percent. However, the most recent quarter saw that growth rate fall to 11 percent, and the results statement was preceded by two announcements as the company reset Wall Street analysts' expectations downwards. It was a shock, and it helped focus attention on the fact that NetApp has a host of competitors, many of whom are selling storage technology at lower prices than NetApp.

NetApp competition

EMC is NetApp's prime competitor and, it seems fair to judge, the two are competing in a stable way, with neither company capable of seriously impacting the other in the pure disk-based storage market of Fibre Channel and iSCSI storage area networks (SAN) and NAS. Outside that confine, EMC is undoubtedly the stronger overall, with its backup products from Legato and Dantz, content-related Documentum, security products from RSA, the outstandingly successful VMware, the content-addressable Centera storage product and multiple other products.

It is reasonable to form the impression that EMC is a far more successful buyer of other companies and technologies than NetApp - contrast VMware and Spinnaker, for example - and that EMC has little chance of being knocked off its top storage dog perch by what seems like the storage world's permanent number two, NetApp.

Across the disk-based storage universe, a host of smaller companies are jostling around various parts of NetApp's product range. For example:

- Clustered filers - Isilon (now post-IPO)
- Unified SAN/NAS storage - 3PAR, Pillar Data
- Very high-performance NAS - BlueArc, ExaNet
- iSCSI SANs (with added NAS) - EqualLogic, LeftHand Networks, Compellent
- File storage virtualisation - Acopia (recently bought), PolyServe, Ibrix
- De-duplication - Data Domain, Diligent, Avamar, FalconStor, Sepaton, Quantum and more
- Virtual Tape Libraries - Sepaton, FalconStor (EMC, Sun, IBM)
- Thin provisioning - 3PAR, DataCore (plus established players such as HDS and HP with adaptive provisioning)

Some of these competitors have completed or are seeking IPOs as their results and a buoyant storage market persuade them and their backers that they can now successfully grow as public corporations.

NetApp's way of adding technology

NetApp doesn't use commodity Unix or Windows as its core O/S. Unlike EMC, which does not have a core O/S but maintains several independent code bases for its quite wide collection of products, NetApp believes that having a single code base makes it easier for its customers to own and manage its storage products, add new ones or migrate between them.

This means that, as a new storage technology comes along, NetApp has quite an absorption job to do, because it has to understand how to implement the technology without upsetting the functions and performance of its existing products. It's no use, for example, adding A-SIS de-duplication if it stops FlexClone technology working.

Consequently, when a new storage technology comes along, NetApp has a harder job introducing it, in its integrated way, than a supplier who adds a point product, either by reselling it - HDS with Diligent de-duplication - or by buying it as a continuing company - EMC with Avamar.

Techworld has discussed how NetApp responds to new technologies with John Rollason, NetApp's EMEA solutions marketing manager.

TW: Is NetApp's use of a single operating environment, Data ONTAP, slowing down and restricting its use of new technologies - global namespace, clustering, de-duplication, thin provisioning - which other suppliers bring to market faster than NetApp?

John Rollason: "NetApp’s strategy is to build innovative solutions that are deployable in enterprise environments and improve customers’ business agility, lower risk and TCO (Total Cost of Ownership). Unified storage architectures are key to this strategy and simplify an otherwise complicated environment."

"The benefit of a unified architecture, such as Data ONTAP, is that new technologies (such as clustering, de-duplication and thin provisioning) need to be designed, engineered and tested only once and can then be applied across many different storage applications and requirements (such as primary, secondary, archive and compliance). While our main competitors have many different and isolated operating systems – some 8 or more- we’ve found that unified storage solutions reduce the amount of wasted Research & Development and integration effort required."

"Finally, in many cases it is the combination of these technologies (e.g. unified storage, virtualisation, thin provisioning, smart copies AND de-duplication) that provides the greatest customer benefit."

TW: Will Data ONTAP 7G and GX be combined? Wouldn't it be better to have separate operating environments for different products? How can a low-end NAS usefully use the same O/S as a high-end Fibre Channel storage array?

John Rollason: "The intellectual property NetApp has within Data ONTAP means that we are able to tune the cost/performance of an array to match application requirements. Where possible we prefer not to maintain unnecessary operating environments (for example the previous consolidation of Nearstore R200 and FAS systems with our Nearstore on FAS licence strategy) for the reasons above. Another example of the flexibility of Data ONTAP is that it has also allowed us penetrate the SMB market where a variant is deployed with the StoreVault platform."

"NetApp’s strategy is to continue to combine the scale-up functionality of 7G and scale-out functionality of GX over time to maintain the TCO and investment protection benefits of a unified approach."

TW: Will encryption facilities be introduced directly into NetApp storage products or will encryption remain the preserve of a network-attached box like the Decru one? Why should this be so?

John Rollason: "Providing you have robust technology (such as that proven within Decru), building encryption into products (network-attached or otherwise) is not a major issue. The real issue is Key Management. NetApp has years of experience in this field with the Decru Lifetime Key Management (LKM) solution and will continue to invest in this area."

This is an intriguing answer, as John doesn't say whether encryption technology will remain a network-attached box preserve or not. NetApp is well known for not publicly announcing its product technology directions before it is ready to do so. The A-SIS de-duplication technology was with customers for two years of testing before its formal announcement.

But, perhaps, managing customer expectations in this respect is worth a little more attention from NetApp.

TW: Will NetApp's insistence on integrating new technologies into its core enterprise product set, at the pace it can accomplish, render it more and more susceptible to competition from nimbler small vendors who restrict its sales in individual storage areas as enterprises buy their kit because it is cheaper/faster/better? I'm thinking of VTL, de-duplication, thin provisioning, filer clustering, file virtualisation, high-end filer performance and so forth.

John said his answers above were relevant to this before adding:

John Rollason: "There will always be opportunities in any market for smaller players with niche solutions, but we haven’t seen a major impact. NetApp has VTL, de-duplication, thin-provisioning, clustering, file virtualisation and the highest file performance today. Where it makes sense, NetApp has always offered specific solutions to specific storage issues – for example VTL, but again, we’ve found that in many cases it is the combination of these technologies (e.g. unified storage, virtualisation, thin provisioning, smart copies AND de-duplication) that is best."

TW: What is NetApp's view of Fibre Channel fabric-based SAN storage management applications as opposed to storage array controller-based storage management applications?

John Rollason: "Both are possible. The most important aspect to consider is the ability of storage solutions to integrate with applications. Storage controller-based systems are the best way to achieve this today."

"The NetApp approach to this is our Manageability family. This is an integrated data management approach covering the Storage, Data, Server and Application layers. Products like SnapManager for Microsoft Exchange within this family are a good example of NetApp innovation and a major reason for NetApp’s success today."

TW: How does NetApp's de-duplication compare and contrast to hash-based de-dupe algorithms, and also to data ingest and post-ingest de-dupe processing?

John Rollason: "Firstly, it should be noted that our A-SIS (Advanced-Single Instance Storage) de-dupe functionality is only part of how NetApp helps optimise storage for our customers Snapshots, SnapVault, FlexVols and FlexClones are all well established features that are widely used on NetApp solutions and already provides huge storage savings over traditional mechanisms."

"Relying purely on hash-based algorithms to identify identical data is not a method that NetApp is keen to follow. Using hashes to identify likely candidates for duplication is a better use of hashing technology - find the likely data rapidly and ensuring it is absolutely identical through byte-by-byte checking. In this way customers will not be opening themselves up to the risk (however slight) of silent corruption."

"Whether this is performed inline or post-ingest is irrelevant for some people: the key is that it can be performed in a rapid enough manner. For some, doing this inline is acceptable, while for others this will restrict their throughput capability during peak load times and it would be better to perform the de-duplication after the information is stored."

"The NetApp A-SIS de-dupe is designed to perform the processing after the data is stored. There might be a small overhead in capacity terms, but this is typically low in relation to the overall storage used and being able to chose when to run the de-duplication is an added benefit. For datasets with low change rates, this might be run only once per month or less."

TW: Is de-dupe for backup data, any nearline data, or primary data? Will this change over time?

John Rollason: "NetApp virtual storage solutions with A-SIS support flexible volumes that have data written to them using CIFS or NFS or as LUNs accessed using FCP/iSCSI. Basically it doesn’t matter how the data got on the NetApp storage system; A-SIS will deduplicate it."

"A-SIS is initially targeted to data retention/archival environments in its first release, focusing on archives of file data, for example: home directories, engineering development, Microsoft Office, e-mail archive, SharePoint, technical and general publications, and so on. It is reasonable to assume that this is part of on-going development in the evolution of NetApp storage solutions and that use cases will expand. The goal is to provide customers the ability to save space and reduce storage costs regardless of how or where the data comes from."

TW: Why is NetApp not using SAS drives?

John Rollason: "NetApp has plans to introduce SAS into our product family in the near term in a controlled manor as the technology reaches maturity and enterprise market acceptance increases. NetApp has a proven history of introducing innovative enterprise disk solutions, with the most recent success being SATA combined with RAID-DP (introduced 2002)."

TW: What does NetApp think of the idea of using flash memory-based 'hard drives' as a top, performance-led tier of online storage?

John Rollason: "Flash memory is continuing to develop, but the current cost-performance considerations makes it more likely to be of use in mobile devices in the foreseeable future. NetApp is a storage solution provider and as such will always take advantage of newer technology as it becomes both performant and cost-effective to use on a wider scale. With FAS devices supporting over half a petabyte today, NetApp has come a long way from the first 7GB SCSI-based devices shipped 15 years ago and it is fair to assume that over the next 15 years we might expect more leaps in technology to help our customers store their data even more efficiently."

"NetApp’s software-led approach with Data ONTAP will allow us to integrate innovative technologies such as this as the technology maturity develops."

TW: Does NetApp think the e-discovery/compliance/indexing/classification area of the storage market is becoming a big sector?

John Rollason: "Customers are facing a number of information management challenges. The data continues to grow at an alarming rate and rapid access to information is becoming a higher priority with the regulatory climate only getting tougher. This area of storage management is likely to increase to help meet litigation support requirements as well as pure business needs."

"Today, NetApp has seen rapid growth in interest around products and services for e-discovery, indexing and classification."

"Compliant storage is also a rapidly evolving market with many players. In this case, NetApp’s unified approach really makes sense to protect customers’ investment. Software based solutions such as SnapLock and LockVault are unique in this respect."

TW: Does NetApp want to be considered a technology leader? If not, why not? If so then how is it going to gain that perception if it is slow to adopt new technology (meaning other suppliers deliver product first)?

John Rollason: "NetApp continues to be a technology leader. We continue to deliver new technologies as part of our unified solution approach where it makes sense (e.g. de-duplication). We continue to believe this long term strategy far outweighs the benefits of bringing ‘point products’ to market."

NetApp doesn't think that technology leadership merely means being first to bring a technology to market. It reckons that its customers believe technology leadership is best demonstrated by NetApp bringing new technology to them in a usable, integrated and performant manner that fits in with their existing NetApp environment.

NetApp doesn't think that the quarter's off financial result is anything other than a blip, as enterprise customers delay spending on discretionary storage projects such as data archiving. An underlying message from this interview is that NetApp's market fundamentals are as strong as they have ever been and the company's progress should continue.