The traditional clarity of NAS over Ethernet and SAN over switched Fibre Channel is being challenged by suppliers who see speed and scalability bottlenecks in these approaches. By using specially designed hardware and software architectures they aim to overcome those limitations.

On the NAS side there are BlueArc and Exanet. The equivalent supplier pair on the SAN side is 3Par and XIOtech.

The claims made generally include the use of specially developed storage controller ASICs to speed data movement and provide resiliency, virtualisation and scalability across the storage arrays. One architecture involves meshed backplanes to interconnect controllers; another involves multiple dedicated buses.

The use of enhanced controllers, ones with lots of intelligence, makes these suppliers similar in this respect to Brocade, IBM and Cisco, which supply intelligent SAN fabric controllers. However, those are based on, or are adjuncts to, Fibre Channel switches. The enhanced SAN suppliers mentioned above see limitations in such switched architectures.

Super NAS
BlueArc has an architecture, with its Titan Silicon Server, which splits controller functions into three different hardware units: a network interface module, file system modules and a storage interface module. The network interface module is the doorway to the outside world. It carries out the TCP/IP and UDP processing needed, acting in effect as a TCP/IP Offload Engine (TOE). It sends and receives both standard and jumbo Ethernet frames, has support for multiple Gigabit Ethernet interfaces, and supports link aggregation. Management functions are carried out through dedicated Ethernet and RS-232 interfaces.
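To make the division of labour concrete, here is a minimal software model of a request passing through the three stages. It is purely illustrative: the class and method names are invented, and in the Titan itself each stage is implemented in dedicated hardware rather than in code like this.

```python
# Illustrative sketch only: a software model of Titan's three-module split.
# Class and method names are invented; the real product does this in hardware.

class NetworkInterfaceModule:
    """Terminates TCP/IP and UDP; handles standard and jumbo frames."""
    def receive(self, frame: bytes) -> dict:
        # Pretend to strip the network headers and decode a file request.
        return {"op": "read", "path": "/vol0/file.dat", "offset": 0, "length": 4096}

class FileSystemModule:
    """Maps protocol-level requests (CIFS, NFS, ...) onto on-disk blocks."""
    def lookup(self, request: dict) -> list[int]:
        return [request["offset"] // 4096]          # block numbers to fetch

class StorageInterfaceModule:
    """Applies RAID/striping and talks to the disks."""
    def read_blocks(self, blocks: list[int]) -> bytes:
        return b"\0" * 4096 * len(blocks)           # placeholder for striped data

def handle_frame(frame: bytes) -> bytes:
    nim, fsm, sim = NetworkInterfaceModule(), FileSystemModule(), StorageInterfaceModule()
    request = nim.receive(frame)     # network processing stage
    blocks = fsm.lookup(request)     # file system stage
    return sim.read_blocks(blocks)   # storage stage
```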

The file system for this super-NAS device needs two modules: FSA and FSB. Protocols such as CIFS, NFS, iSCSI and NDMP are handled in hardware; in fact, all aspects of file system operation are done in hardware. BlueArc says this is the dominant reason why its Silicon Server systems perform so well: they have low latency, meaning they react faster, and high throughput.

The storage interface module provides RAID and striping functions. Files are broken into pieces and written across different disks. The claim is that this speeds both reads and writes because the tasks are split into pieces and performed in parallel. 3Par makes a similar point about its own performance. BlueArc says performance increases as more storage is added to the system.
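The principle of striping can be sketched in a few lines. This is only an illustration of the idea: the stripe size, the round-robin placement and the helper names are assumptions, and BlueArc performs this work in hardware rather than in a thread pool.

```python
# Minimal sketch of striping: split data into fixed-size chunks and write
# them to several disks in parallel. Illustrative only, not BlueArc's design.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

STRIPE_SIZE = 64 * 1024  # 64 KB stripe units (an assumed value)

def stripe_write(data: bytes, disk_dirs: list[Path], name: str) -> None:
    chunks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]

    def write_chunk(indexed_chunk):
        index, chunk = indexed_chunk
        disk = disk_dirs[index % len(disk_dirs)]              # round-robin across disks
        (disk / f"{name}.stripe{index}").write_bytes(chunk)

    with ThreadPoolExecutor(max_workers=len(disk_dirs)) as pool:
        list(pool.map(write_chunk, enumerate(chunks)))        # writes proceed in parallel

# Usage (paths are hypothetical mount points):
# stripe_write(b"x" * 1_000_000, [Path("/mnt/d0"), Path("/mnt/d1"), Path("/mnt/d2")], "file.dat")
```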

In a further similarity with 3Par, BlueArc has a large cache (it calls it a 'massive cache') to speed data access even more. Titans can be clustered together, with the cluster operating as a single file system. Total storage can scale to 256TB, with data throughput growing to a peak of 20 Gbit/s.

There is failover between the Titan nodes. There is also hardware-accelerated data copying for local or remote replication, with very fast restore operations providing 'instant data availability in times of crisis'. The system has virtual volumes that can dynamically expand and contract, improving storage utilisation. It is tempting to see BlueArc as the approximate NAS mirror image of 3Par.
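The dynamically expanding and contracting virtual volumes mentioned above are essentially thin provisioning; a toy model of the idea follows. The block size, class and method names are assumptions for illustration, not BlueArc's implementation.

```python
# Sketch of a thin-provisioned virtual volume: capacity is presented to the
# host up front, but backing blocks are allocated only when written and are
# released when discarded. Illustrative only.

BLOCK_SIZE = 4096

class VirtualVolume:
    def __init__(self, virtual_size: int):
        self.virtual_size = virtual_size   # what the host sees
        self.blocks = {}                   # sparse map: block number -> data

    def write(self, block_no: int, data: bytes) -> None:
        self.blocks[block_no] = data       # physical space allocated on demand

    def read(self, block_no: int) -> bytes:
        return self.blocks.get(block_no, b"\0" * BLOCK_SIZE)  # unwritten blocks read as zeros

    def discard(self, block_no: int) -> None:
        self.blocks.pop(block_no, None)    # the volume contracts: space returns to the pool

    def allocated_bytes(self) -> int:
        return len(self.blocks) * BLOCK_SIZE

vol = VirtualVolume(virtual_size=1 << 40)  # a 1TB volume that initially consumes no space
vol.write(0, b"a" * BLOCK_SIZE)
print(vol.allocated_bytes())               # 4096: only written blocks consume capacity
```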

Super SAN
XIOtech's Magnitude also uses a clustered design in which controllers are separated from the storage. There are three main components:

- Dimensional Controller Nodes (DCNs): DCNs provide the intelligence within the cluster. Each DCN has access to all storage capacity across the cluster and provides the intelligence to present a single, shared, and virtualised storage pool to servers.
- Drive bays: Each drive bay is a unit of up to 14 drives that are connected to all of the DCNs in the cluster by dual Fibre Channel connections. Drives of diverse capacities and speeds can be intermixed, yet fully utilised, within each drive bay.
- Intelligent CONtrol (ICON) management platform: A secure platform that manages all of the accessible clusters using an out-of-band management path and a highly intuitive browser interface.

The architecture is described in more detail in a white paper. The DCNs have their own independent storage and that is reminiscent of Exanet nodes.
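A rough model of that layout is sketched below: every DCN can reach every drive bay, so any node can present the whole virtualised pool. The capacities, class names and node counts are invented for illustration and are not taken from XIOtech's software.

```python
# Rough model of the Magnitude layout described above. Illustrative only.

class DriveBay:
    def __init__(self, drive_capacities_gb: list[int]):
        # Drives of mixed capacity and speed can be intermixed in a bay.
        self.capacity_gb = sum(drive_capacities_gb)

class DCN:
    """A Dimensional Controller Node: sees all drive bays in the cluster."""
    def __init__(self, bays: list[DriveBay]):
        self.bays = bays

    def pool_capacity_gb(self) -> int:
        # Any DCN can present the full, shared, virtualised pool.
        return sum(bay.capacity_gb for bay in self.bays)

# Two bays of 14 mixed drives each, shared by three controller nodes (assumed sizes).
bays = [DriveBay([146] * 10 + [300] * 4), DriveBay([300] * 14)]
dcns = [DCN(bays) for _ in range(3)]
assert all(d.pool_capacity_gb() == dcns[0].pool_capacity_gb() for d in dcns)
print(dcns[0].pool_capacity_gb(), "GB in the shared pool")
```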

Commodity or special design?
Back in the days before servers were either Intel-, SPARC- or PowerPC-based, there were a number of speciality designs built to be faster than so-called standard servers. Tandem built its NonStop systems for telecommunications carriers. Sequent built parallel-processing servers. There were others too. The point is that they have gone. None of them built up enough customers to be able to afford the constant design refreshes needed to withstand the rise of the x86 hordes.

Tandem went into Compaq, which also gobbled up Digital and which in turn went into HP. Sequent went into IBM. Now HP's own RISC architecture is probably destined for replacement by 64-bit Intel chips. How much longer can Sun's SPARC and IBM's PowerPC designs survive against Intel?

The parallel in storage is the rise of commodity disk drives, which the four suppliers mentioned here support, and the rise of commoditised components for storage networking and controllers: Ethernet and Fibre Channel networks, again supported by our four suppliers; Fibre Channel and Ethernet switch/routing devices, which they support outside their own configurations; and x86-based controllers, which they generally do not use.

Their intellectual property lies partly in their ASIC-assisted hardware and special in-unit connectivity, and partly in their software. In theory that software could run on Intel CPUs, leaving the ASICs, the in-unit connectivity and the software itself as their real added value. Clearly that is worth a lot in terms of the performance, scalability and reliability they provide.

But is it enough to withstand the rising tide of commoditisation that swamped their server forebears, Tandem and Sequent? We wait and see.