Serial-attached SCSI (SAS) is a very powerful technology. Not only does it radically ease the route to mixed-mode disk arrays, but it also offers a different way to link an array of storage to a group of servers. This is effectively a SAN, minus the area network part of the equation.
Linus Wong, Adaptec's director of marketing, thinks that: "SAS could even replace Fibre Channel. Absolutely." He is very keen on SAS. It is a storage interface rather than a transport protocol like Fibre Channel, although, "SAS is very similar to Fibre Channel at a technical level." One controller can link to both SAS and SATA drives, making it much simpler to build mixed-mode arrays: SAS for the fast storage, SATA for the slower, lower-duty-cycle nearline storage.
You can front-end the array controller with either Fibre Channel or iSCSI interface cards, resulting in a cost-effective component. One platform can provide SATA or SAS storage and FC or iSCSI interfaces. But you can go further. Why bother to link the array to servers with Fibre Channel at all?
Let's get an obvious caveat out of the way. Fibre Channel provides distance, high speed and a powerful infrastructure that enables you to provide sophisticated storage services to large numbers of servers. You can also start to think about running storage applications inside the Fibre Channel fabric. This appeals to enterprises - organisations with lots and lots of servers and petabytes of storage already present or visible on the horizon.
For organisations with up to, say, 500 employees, and ten to fifty servers, a SAN fabric can well be overkill. Even if fabric component pricing is falling - and Brocade's Ian Whiting reckons: "Price decline in Fibre Channel is 20-25% year on year" - the fabric management burden is still present.
It's not a SAN as we know it, Jim
Wong says: "SAS can replace the fibre fabric." But it's not a SAN: "SAS isn't a SAN interconnect. An SAS cluster has no transport protocol." On the other hand: "SAS can support multi-node clusters. It's a switched serial interface." So we have a bunch of SAS drives in an array apportioned to individual servers using SAS - that's an SAS cluster. Or we have a bunch of SAS drives in an array apportioned to servers via an intervening Fibre Channel fabric, and that's a SAN. The functionality for the servers is roughly equivalent whether a fabric connects servers and storage or a set of SAS cables does.
In a SAN each server sees its own storage. Its blocks 'belong' to it. Its LUN 'belongs' to it. It thinks it's talking to its own direct-attached storage when, in fact, there is a Fibre Channel fabric between it and its storage.
In a multi-node cluster each server is connected via an SAS cable to its storage. Just as in a SAN, there is no actual sharing of storage. Each server 'thinks' it's connected to its direct-attached storage. And it is; only the storage can be up to 8 metres away and reside in an array with dozens of other SAS drives and shelves connected to other servers.
An SAS expander switch can sit between the servers and the arrays of SAS drives. And not just SAS drives. Wong says: "There can be SATA drives as well. From the get go. Absolutely."
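The apportioning idea described above - an expander mapping each drive, SAS or SATA, to exactly one server that then sees it as direct-attached - can be sketched in a few lines of Python. This is a hypothetical illustration of the topology, not a real SAS management API; the class and method names are invented for the example.

```python
# Hypothetical model of SAS expander apportioning (not a real API).
# Each drive belongs to exactly one server; the server sees it as
# direct-attached storage, with no transport protocol in between.

class Drive:
    def __init__(self, drive_id, kind):
        assert kind in ("SAS", "SATA")  # an expander can mix both drive types
        self.drive_id = drive_id
        self.kind = kind

class Expander:
    """Switched serial interface: maps each drive to a single server."""
    def __init__(self):
        self.zoning = {}  # drive_id -> server name

    def assign(self, drive, server):
        if drive.drive_id in self.zoning:
            raise ValueError("drive already apportioned to a server")
        self.zoning[drive.drive_id] = server

    def drives_for(self, server):
        return [d for d, s in self.zoning.items() if s == server]

exp = Expander()
exp.assign(Drive("sas-0", "SAS"), "blade-1")    # fast tier for blade-1
exp.assign(Drive("sata-0", "SATA"), "blade-1")  # nearline tier for blade-1
exp.assign(Drive("sas-1", "SAS"), "blade-2")

print(exp.drives_for("blade-1"))  # ['sas-0', 'sata-0']
```

The point the sketch makes is that there is no fabric object anywhere in the model: the mapping from drives to servers is the whole story.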
So we might imagine a set of blade servers in a rack. They are connected by SAS to an SAS expander, equivalent in a sense to a Fibre Channel switch, and through it to their drives in a SAS/SATA array. Wong says they can boot off their SAS drive. Technically you can boot off drives at the end of a Fibre Channel fabric, but, Wong says, generally: "When you use Fibre Channel a server still boots off its local disk. It's complex to do it over the fabric." He says it's easy to do it across an SAS link.
All this is: "very simple for blade-supplying OEMs. There is no need for fabric components for them and no need for fabric management for customers." So it is also very simple for customers and translates into lower cost.
At that point customers could be presented with three choices for centralising storage and provisioning it to servers. First is a SAN with a Fibre Channel fabric. Second is a SAN with an iSCSI protocol using Ethernet instead of Fibre Channel. Third is an SAS cluster using no networking at all.
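The three options just listed differ mainly in what sits between server and storage. A quick way to see the contrast is a lookup table; the properties below are those implied in the article, not vendor specifications, and the function name is invented for the example.

```python
# Rough summary of the three centralised-storage options discussed.
# Properties reflect the article's argument, not vendor datasheets.
options = {
    "FC SAN":      {"interconnect": "Fibre Channel fabric",
                    "fabric_mgmt": True},
    "iSCSI SAN":   {"interconnect": "Ethernet (TCP/IP)",
                    "fabric_mgmt": True},
    "SAS cluster": {"interconnect": "switched SAS, no network",
                    "fabric_mgmt": False},
}

def needs_fabric_management(option):
    """Does this option carry a fabric/network management burden?"""
    return options[option]["fabric_mgmt"]

print(needs_fabric_management("SAS cluster"))  # False
```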
Wong says this possibility: "is real. They each have their advantages and disadvantages."
Let's throw a couple more points into this ingenious pot. SAS expander boxes exist already. PMC and Vitesse have product, as does LSI Logic. Think of a $2/port type cost; relative peanuts.
The second point? SAS array controllers could run storage and data protection applications: RAID, virtualisation, replication, snapshotting, etc.
With the very same arrays capable of being installed in either Fibre Channel or iSCSI SANs merely by bolting an iSCSI or FC interface onto the front, why shouldn't server and storage system suppliers be interested?
Wong says Adaptec is talking to all the server suppliers producing blades and expects products in, roughly, 18 months.
PS. SAS clusters don't need TOEs - there is no TCP/IP offloading to be done. Nor do they need Ethernet NICs for storage access. So they save server CPU cycles for applications compared to iSCSI SANs with software initiators, and they save cost compared to iSCSI SANs with hardware initiators (TOEs). Doesn't this all add up to a quite attractive proposition?