In moving from typical Direct Attached Storage (DAS) to a Storage Area Network (SAN)-based storage infrastructure, many of the key components appear, on the face of it, to be straightforward out-of-the-box solutions: just plug them in and the system is ready to go. Yet although a SAN is a total solution, the precise configuration of each component, and how it is affected by different environments, is critical.
The first step is to understand each of your applications fully, together with the software and operating systems they run on. Then determine what you are looking to achieve, setting performance parameters in general areas such as speed, scalability, manageability and data security, and in business-specific issues, such as the restrictions on your backup window.
Having established where you are, and where you want to be, then look for products that will work in your environment. This means going beyond spec sheets and performance claims and looking closely at proof of concept and case studies of similar installations.
You need to look at four main components for a SAN, starting with the Host Bus Adapter (HBA). Depending on the operating system used in your server environment, the HBA will be recognised at a driver level, the BIOS of the card, or both.
If you are working in a mixed environment, implementation will differ for each operating system, though it is usually a straightforward, well-documented process of bringing up a menu or text file and making the required changes. For example, with Linux it will be necessary to modify configuration files in order for the HBA to read non-consecutive LUNs, whereas with Windows this can be achieved straight out of the box.
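The non-consecutive LUN issue can be illustrated with a short, purely hypothetical Python sketch (the function name and LUN numbers are invented for illustration): a driver without sparse-LUN support stops scanning at the first missing LUN ID, so anything presented after a gap goes undetected.

```python
def find_lun_gaps(luns):
    """Return the LUN IDs a consecutive-only scan would miss.

    Drivers without sparse-LUN support stop scanning at the first
    missing LUN ID, so every LUN after a gap goes undetected.
    """
    visible, stopped = [], False
    for lun in sorted(luns):
        expected = len(visible)  # next consecutive ID a naive scan expects
        if lun != expected:
            stopped = True       # naive scan halts at the first gap
        if not stopped:
            visible.append(lun)
    return [lun for lun in sorted(luns) if lun not in visible]

# Array presents LUNs 0, 1 and 5: a consecutive-only scan misses LUN 5
print(find_lun_gaps([0, 1, 5]))  # → [5]
```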
Where two HBAs are required in each server, with dual pathing needed in case one fails, implementation will again differ depending on the operating system. QLogic's HBAs, for example, work differently in standalone Windows and Windows cluster environments, and installation under Novell will be different again.
So, here as elsewhere, the choice of HBA will depend on an understanding of your environment and the best way to achieve your objectives.
Switches and zoning
The second step is to install the required switches, and here zoning can become important, especially with port isolation. The standard configuration within a switch issues a Registered State Change Notification (RSCN) message each time a new device is plugged in or a server is powered off.
This essential notification to all other ports does not affect most applications, but the millisecond interruption can cause glitches in video streaming and stop a backup device working. Here, the switch management software will need to be configured to apply RSCN suppression to the relevant port to prevent notification.
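As a rough illustration of what suppression does, RSCN delivery can be modelled as notifying every port except the source and any port the administrator has marked as suppressed. The function and port numbers below are hypothetical, not any vendor's actual interface:

```python
def rscn_recipients(ports, suppressed, source):
    """Ports that receive an RSCN when `source` changes state.

    Ports driving latency-sensitive devices (video streams, tape)
    are placed in the suppressed set and are not notified.
    """
    return sorted(p for p in ports if p != source and p not in suppressed)

ports = {1, 2, 3, 4}
# Port 4 drives a tape device, so RSCN delivery is suppressed there
print(rscn_recipients(ports, suppressed={4}, source=1))  # → [2, 3]
```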
Zoning options include virtual fabric, worldwide name or port zoning: the choice here is key and will in major part be dependent on the needs of your disaster recovery plan and the level of data security required.
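The basic effect of zoning, whichever variant you choose, is that two devices can only see each other if they share a zone. A minimal sketch of worldwide-name zoning, with made-up WWNs and zone names, assuming a simple zone-membership model:

```python
def can_communicate(zones, a, b):
    """True if WWNs a and b share at least one zone (WWN zoning)."""
    return any(a in members and b in members for members in zones.values())

# Hypothetical zone set: the backup host and tape are isolated
# from the production storage
zones = {
    "prod_zone":   {"10:00:00:00:c9:aa:00:01", "50:06:0e:80:00:00:00:01"},
    "backup_zone": {"10:00:00:00:c9:aa:00:02", "50:06:0e:80:00:00:00:02"},
}
host  = "10:00:00:00:c9:aa:00:01"
array = "50:06:0e:80:00:00:00:01"
tape  = "50:06:0e:80:00:00:00:02"

print(can_communicate(zones, host, array))  # → True
print(can_communicate(zones, host, tape))   # → False
```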
In meshing multiple switches in more complex environments, it is also essential to ensure the right multiple connections, to avoid latency as a result of bottlenecks and guarantee required performance figures.
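One way to sanity-check a meshed design is to work out the worst-case oversubscription of the inter-switch links (ISLs). The figures below are illustrative only, using 2 Gb/s ports:

```python
def isl_oversubscription(host_ports, port_gbps, isl_count, isl_gbps):
    """Worst-case ratio of host bandwidth to inter-switch bandwidth.

    A ratio well above 1.0 means hosts on one switch could saturate
    the ISLs when talking to storage on the other.
    """
    return (host_ports * port_gbps) / (isl_count * isl_gbps)

# 12 host ports at 2 Gb/s crossing 2 ISLs at 2 Gb/s: a 6:1 ratio,
# a likely bottleneck for bandwidth-heavy workloads
print(isl_oversubscription(12, 2, 2, 2))  # → 6.0
```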
The third key component is the RAID array. This can be set up offline, but you must ensure all the switch zoning is in place before you plug in the device. In moving from DAS to a centralised SAN infrastructure, it is again important to select a specific RAID level for each server in order to achieve a given level of performance.
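The capacity side of the RAID-level trade-off can be sketched as below; the disk counts and sizes are illustrative, and real arrays also differ in write performance and rebuild behaviour, which this simple model ignores:

```python
def usable_capacity(level, disks, disk_gb):
    """Usable capacity in GB for common RAID levels (simplified model)."""
    if level == 0:
        return disks * disk_gb            # striping, no redundancy
    if level == 1:
        return disks * disk_gb // 2       # mirrored pairs
    if level == 5:
        return (disks - 1) * disk_gb      # one disk's worth of parity
    if level == 10:
        return disks * disk_gb // 2       # mirrored, then striped
    raise ValueError(f"unsupported RAID level: {level}")

# Six hypothetical 146 GB disks
print(usable_capacity(5, 6, 146))   # → 730
print(usable_capacity(10, 6, 146))  # → 438
```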
The more devices are shared between multiple servers, the more aware you need to be of how you segregate data within the RAID device. There are several ways of configuring your RAID: in creating a pool of disks within an array, which is then broken into LUNs, you can either give each LUN its own dedicated disks or slice the RAID array into separate LUNs, with multiple LUNs sharing the same disks.
Choice will, in part, depend on your need for outright performance and how you plan to grow data volumes within the RAID. It is possible to adopt both strategies by mixing and matching, so careful planning is required to select the right configuration to achieve what you need.
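The two layouts described above can be contrasted with a small sketch; LUN names and disk counts are invented for illustration. With dedicated disks, each LUN gets isolated spindles (predictable performance); with slicing, every LUN shares all spindles (flexible growth, but workloads can interfere):

```python
def lun_disk_map(total_disks, luns, sliced):
    """Map LUN names to disk indices under the two layouts.

    sliced=False: disks divided into dedicated groups, one per LUN.
    sliced=True:  every LUN striped across all disks (shared spindles).
    """
    disks = list(range(total_disks))
    if sliced:
        return {lun: disks for lun in luns}
    per = total_disks // len(luns)
    return {lun: disks[i * per:(i + 1) * per] for i, lun in enumerate(luns)}

# Eight disks, two LUNs
print(lun_disk_map(8, ["db", "mail"], sliced=False))
# → {'db': [0, 1, 2, 3], 'mail': [4, 5, 6, 7]}
print(lun_disk_map(8, ["db", "mail"], sliced=True))
# → {'db': [0, 1, 2, 3, 4, 5, 6, 7], 'mail': [0, 1, 2, 3, 4, 5, 6, 7]}
```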
At this stage, any operating system changes, such as clustering with path failover, need to be addressed. Setting up the software incorrectly may not be immediately apparent, but it will become obvious at the point when an individual device fails.
The fourth, and final, key component is tape drive selection. This will involve a number of considerations, including how you plan to achieve the performance required, scalability and the backup strategy to be adopted. As the market moves towards virtual tape and Disk-To-Disk-To-Tape (D2D2T), it is important not to oversize your tape solution and to ensure that the backup software, tape device and any near-line disk backup are fully compatible and can scale as you grow.
As the infrastructure becomes more complex, with multiple operating systems, servers and backup software, security becomes increasingly important. Similarly, library partitioning is likely to be required to ensure backup is completed within a restricted time window, as is an additional tape drive dedicated to each operating system (N+1) to provide cover in the event of a tape drive failing.
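Whether a configuration actually fits a restricted backup window comes down to arithmetic. A minimal check, using invented figures; in practice you would use your drives' real sustained (not peak) throughput, including compression effects:

```python
def backup_fits_window(data_gb, drives, drive_mb_s, window_hours):
    """True if a full backup completes within the window.

    Simplified model: throughput scales linearly with drive count
    and ignores network, software and media-change overheads.
    """
    total_mb = data_gb * 1024
    seconds = total_mb / (drives * drive_mb_s)
    return seconds <= window_hours * 3600

# 2 TB across 3 drives at a sustained 30 MB/s, 8-hour window
print(backup_fits_window(2048, 3, 30, 8))  # → True (about 6.5 hours)
# The same data on a single drive overruns the window
print(backup_fits_window(2048, 1, 30, 8))  # → False (about 19.4 hours)
```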
Until recently, SANs have typically been installed with Fibre Channel, but there is now a cheaper alternative in iSCSI. Whether this is suitable will depend on your application and needs, for the devices themselves behave very similarly in both cases. It is easier to implement a disaster recovery site using iSCSI, for example, but the data is less secure, so you need to decide what is important. As with so many decisions in implementing your SAN, it is a balancing act.
Paul Hickingbotham, 33, senior storage consultant, provides pre-sales support for the solutions team, creating leads and designing customer solutions. He joined Hammer in 2000 to provide general external pre-sales support, before moving to concentrate on fibre channel. Before joining Hammer, as technical manager for ADC Technology, he specialised in RAID solutions and his earlier career included software experience at both end-user and reseller level. He has completed training with the Fibre Channel Association, is a Ziatec CXE certified engineer and has Sony DTF accreditation.