A SAN, provided it works correctly, performs no better than direct-attached storage (DAS) for a server. From the point of view of application performance, having a SAN is no better than having DAS: the server still has to wait for disk I/O to start and complete. For systems whose performance is limited even with fast processors and fast storage, the options are limited and dispiriting.
You could buy still faster servers, or you could use more servers and partition the application. Faster servers are expensive: high-end servers cost a great deal of money, and each increase in performance costs much more than the previous one. Theoretically you could use a supercomputer, but the millions of pounds involved and the application redesign required would render it uneconomic.
If the view that server performance is limited by disk I/O waits is right, then the answer, for back-of-the-envelope engineers, is to increase disk performance. That is not quite right, though: it assumes disks, and as the old sales-training rubric goes, never assume; it makes an ass out of u and me. The answer is to improve storage I/O.
Several businesses have found that replacing hard disk drives (HDDs) with flash-memory solid state disks (SSDs) substantially increases application performance.
For example, Chicago-based Eurex US, a financial exchange, has an electronic trading platform providing access to 1,800 traders in 700 locations worldwide. In April, 86 million contracts were traded, corresponding to a daily average turnover of 4.3 million contracts. The entire spectrum of activity, from trading to final settlement, is covered by a system with backend storage based on SSDs.
The data access time, at 20 microseconds, is said to be up to 250 times faster than hard drive access time. Uptime is at the five-nines level: 99.999 percent. So both performance and high reliability are achieved.
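The 250-times figure is easy to sanity-check. The sketch below takes the 20 microsecond figure from the article and assumes a typical hard drive access time of around 5 milliseconds, a value not given in the original:

```python
# Back-of-the-envelope check of the claimed speed-up. The 20 microsecond
# SSD figure is from the article; the ~5 ms hard drive access time is a
# typical assumed value, used here purely for illustration.
ssd_access_s = 20e-6   # SSD access time: 20 microseconds
hdd_access_s = 5e-3    # assumed typical HDD access time: 5 milliseconds

speedup = hdd_access_s / ssd_access_s
print(f"SSD is about {speedup:.0f}x faster")   # about 250x, matching the claim
```

The arithmetic holds because a hard drive's access time is dominated by mechanical seek and rotational delay, measured in milliseconds, while flash access involves no moving parts at all.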
Michael Gruth, systems and network support manager at Eurex US owner Deutsche Börse Systems, said that without the SSDs, "we experienced disk queuing and under-utilisation of processors." Only SSDs can provide the response time needed.
Chris Johnson, VP storage systems for supplier DSI, said of using SSD technology, "We find that the larger customers gravitate towards the concept because they have done everything possible in conventional terms, i.e. larger servers and faster RAID. The need for faster response times drives them to think outside the industry-created box (of hard drive storage)."
SSDs can substitute for adding servers: make the servers you have go faster by accelerating storage response time with flash memory. The Eurex DSI boxes can each hold up to 64GB and can be multiplied to provide terabytes of overall capacity.
HDD technology, based on moving heads, will take years to deliver a 250-fold improvement in access time. For performance-limited systems where application response time is vital and millions of pounds are at stake, buying SSDs rather than servers could redefine what your systems are capable of.
Scalability and performance
Storage scalability is a problem, as everyone says. Disk virtualisation can improve storage utilisation, but SANs using it are complex and the concept has been relatively slow to take off. The split between SAN and NAS is also limiting. Alternative approaches involve, in effect, turning disk controllers into storage engines and grouping the resulting storage-plus-storage-engine units into a grid of inter-connected boxes.
Growth in capacity is achieved by adding more boxes. Since each box has its own I/O capability, overall storage bandwidth increases as well. The components work together under some kind of unified file system scheme and can serve either blocks or files to connected hosts.
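The scaling argument can be sketched in a few lines. Each box brings its own disks and its own I/O path, so capacity and aggregate bandwidth grow together as boxes are added. The per-box figures here are hypothetical, chosen purely for illustration:

```python
# Toy model of grid storage scaling: capacity and aggregate bandwidth
# both grow linearly with box count, because every added box contributes
# its own disks and its own I/O capability.
def grid_totals(boxes, tb_per_box=1.0, mbs_per_box=200):
    """Return (total capacity in TB, aggregate bandwidth in MB/s)."""
    return boxes * tb_per_box, boxes * mbs_per_box

capacity, bandwidth = grid_totals(16)
print(f"16 boxes: {capacity} TB capacity, {bandwidth} MB/s aggregate bandwidth")
```

This is the contrast with a monolithic array, where adding shelves of disks grows capacity behind a fixed number of controllers, leaving bandwidth per terabyte to fall as the system grows.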
It is possible with this approach to use cheap components and still achieve great levels of capacity. HP uses it with the smart cells in its RISS product. These, based on technology HP acquired when it bought Persist Technologies at the end of last year, are archive engines. The overall architecture divides storage, indexing, search and retrieval tasks across a distinct set of these cells. Each is self-contained, with a dedicated x86 processor, search engine, database, index and management layer, plus on-board IDE storage.
In effect the smart cells are storage and processing blades. They can be federated to create redundant, modular, grid-like computing and storage capacity. Additional cells can be added non-disruptively, making the system readily scalable. Added cells are automatically discovered, configured and contribute to the overall pool of storage.
The Persist technology included self-healing cells with internal mirroring and mirroring across cells, plus local and remote replication. This means component failure can be automatically recovered from.
Archivas has its ArC content-addressed storage in which there can be hundreds of nodes.
ExaGrid has a storage grid NAS concept. Its grid of Linux-based storage servers acts as a single repository.
ClearCube has a product in which users' PCs are replaced by a screen, keyboard and mouse on the desk, with a cable link to a PC blade in a rack in the data centre. Each PC blade has its own on-board EIDE hard drive. These disks can be aggregated so that a virtual NAS exists with redundant components. It is another grid idea, though the grid-like storage is only a small part of it.
Network Appliance is also developing storage grids. Ron Bianchini, VP and general manager of Spin Operating Systems for Network Appliance, talked of NetApp wanting one storage grid that would "be able to support different performance levels, different storage capacities, different quality of service, different protocols, all from one cloud."
Google uses a storage and processor scheme that resembles a grid for its searches.
IBM's Storage Tank idea also has grid elements to it.
If storage engineers design storage engine bricks, as IBM might term them, combining an embedded PC with disk, then the resulting storage can be far more intelligent and far more scalable. It can serve both blocks and files, use more capable addressing schemes such as content-addressed storage, be accessed via a variety of protocols, offer high reliability, and grow in size quickly and easily.
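Content addressing, one of the schemes mentioned, stores and retrieves data by a hash of its content rather than by its location, which is what lets any brick holding a copy serve a request. A minimal sketch of the idea, hypothetical code rather than any vendor's actual API, might look like this:

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: a block's address is the SHA-256
    hash of its bytes, so identical content collapses to one copy and
    any node holding that hash can serve the block."""
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blocks[address] = data   # idempotent: duplicates are free
        return address

    def get(self, address: str) -> bytes:
        return self._blocks[address]

store = ContentStore()
addr = store.put(b"archived trade record")
assert store.get(addr) == b"archived trade record"
```

A side effect worth noting for archiving: because the address is derived from the content, a retrieved block can always be verified against its own address, which suits compliance-driven archive products of the RISS kind.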
Jason Phippen, director of EMEA product and solutions marketing at Veritas, thinks that, "storage grid is an evolution of SAN.... I think the value proposition to the end-user is primarily quality of service orientation. ... Reduce capital expenditure whilst maintaining and increasing service levels."
So far, businesses adopting such grid storage schemes have been coping with specific problems that they had to solve; storage grids remain a niche technology. However, with six suppliers mentioned here, two of them IBM and Network Appliance, the idea of storage grids deserves to be watched.
Just as cheap hard drives, in Serial ATA form, are displacing tape as the first line of backup storage, so too, perhaps, will SSD technology displace hard drives where servers can't afford to wait on disk I/O. Big wins are already appearing with SSDs. The first big wins with storage grids have yet to happen, but this too could be another wave of storage development, displacing SANs where scalability and flexibility are constrained by SAN limitations.
Put the two together, storage grids using SSD technology, and we might imagine the fastest and most scalable storage ever. SSD technology needs to come down very substantially in price for that distant prospect to become reality.