One of the major selling points highlighted by manufacturers is the size to which environments can be pushed. Both EMC and Hitachi Data Systems claim capacities of over 100TB for their largest enterprise-class disk subsystems. But is this a good thing?

There is no doubt that storage demands have been increasing exponentially over recent years. Disk and SAN vendors have met this challenge and deliver an incredible amount of storage in a very small footprint. There comes a time, however, when you can have too much of a good thing and shouldn't scale any further.

There are very good reasons for making such a bold statement. At first it may seem odd to think this way, but consider what happens when things go wrong - because they inevitably do. With a small environment, you have a statistically lower chance of encountering hardware and software bugs. As you add equipment to your configuration, you are likely to hit problems more frequently, especially as you increase the diversity of platforms and storage supported. And when those problems do arise, you will feel their impact all the more acutely.

Logistical headache
Here's an example: we currently have a SAN environment built on McData 64-port director-class switches, implemented as a core-edge configuration supporting 180 hosts per site. We have recently encountered a problem on one of our core switches. Because of the concentration of hosts that pass through this single director, and despite our dual-fabric infrastructure providing redundancy, taking the core switch offline for any kind of maintenance or problem resolution is almost impossible. In fact, with 64 fully populated ports, our current discussions about de-installing the director and replacing it with new hardware pose severe logistical headaches, simply because of the physical de-cabling and re-cabling involved.
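To give a sense of the scale of that task, the sketch below turns a port-to-host inventory into a re-cabling plan for swapping out a fully populated director. The inventory format, host names and port mapping are hypothetical illustrations, not details of our actual environment.

```python
# Minimal sketch: generate a re-cabling plan for replacing a fully populated
# director. The inventory records below are illustrative, not real data.
import csv
import io

# Hypothetical inventory export: director port, attached host, fabric
inventory_csv = """port,host,fabric
0,dbserver01,A
1,appserver07,A
2,backupsrv02,A
"""

def build_recabling_plan(inventory, port_map=None):
    """Map each old director port to a port on the replacement switch.

    port_map allows ports to be re-homed (for example, to spread hosts
    across edge switches); by default ports carry over one-to-one.
    """
    plan = []
    for row in csv.DictReader(io.StringIO(inventory)):
        old_port = int(row["port"])
        new_port = port_map.get(old_port, old_port) if port_map else old_port
        plan.append((row["host"], row["fabric"], old_port, new_port))
    return plan

for host, fabric, old, new in build_recabling_plan(inventory_csv):
    print(f"fabric {fabric}: move {host} from old port {old} to new port {new}")
```

Even a simple plan like this has to be produced, checked and scheduled for every one of the 64 ports, which is exactly where the logistical pain lies.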

At some stage, you have to call a halt to creating a single monolithic infrastructure and decide that SANs and storage should be split into discrete subsets that serve specific purposes. This in itself poses two problems.

First, the concept of a highly scalable architecture is sold as a desirable feature by vendors and their sales forces looking to position their product as the best-value purchase for the customer. In fairness, this is not a bad thing. The people who hold the purse strings and pay for investment in this technology are keen to realise the savings that scalability can achieve. However, convincing the finance director that you need to create yet another separate SAN can be a tough prospect.

Second, creating separate, distinct infrastructures increases management overhead. The issues posed are obvious:

• Where should new hosts be accommodated and on which SAN?
• How will we manage business-as-usual (BAU) growth on the infrastructures that are not being expanded?
• How will we refresh technology in the future?
• What management tools will meet our needs?
• How can we maintain accurate configuration documentation?
• What impact will separate environments have on disaster recovery?

These are all valid points and deserve consideration when evaluating whether the time is right to create separate environments. So, how should you make that critical decision? Here are a few questions to ask:

• In a SAN environment, am I dedicating more ports to inter-switch links than necessary? (See the sketch after this list.)
• What would be the impact if the SAN or disk subsystem was lost?
• How easy or difficult would it be to perform an upgrade on the entire SAN fabric?
• How small an outage can I achieve when I need to perform maintenance?
• Can I split my users into logical business areas?
• Can my management tools cope with the size of my environment?

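On the first of those questions, a rough back-of-the-envelope calculation is often enough. The sketch below works out an ISL oversubscription ratio for a core-edge fabric; the port counts and the rule-of-thumb target are illustrative assumptions, not recommendations for any particular environment.

```python
# Rough sketch: estimate ISL oversubscription on an edge switch in a
# core-edge fabric. Port counts below are illustrative assumptions.

def isl_oversubscription(host_ports: int, isl_ports: int) -> float:
    """Ratio of edge host ports funnelled over the inter-switch links."""
    return host_ports / isl_ports

edge_switch_ports = 64   # total ports on an edge switch (assumed)
isls_to_core = 4         # ports reserved as inter-switch links (assumed)
host_ports = edge_switch_ports - isls_to_core

ratio = isl_oversubscription(host_ports, isls_to_core)
print(f"{host_ports} host ports over {isls_to_core} ISLs = {ratio:.0f}:1")

# Design guides often quote a figure around 7:1 as a rough rule of thumb
# for general workloads. A ratio far below that suggests ports are being
# wasted on ISLs; far above it, the ISLs risk becoming a bottleneck.
```

If that calculation, or the answers to the other questions, point to an environment you can no longer comfortably maintain, that is a strong signal.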
Only you can choose the options that are right for your environment, but the principle is clear: expand your environment only to the point where you believe you can safely manage it. After that, it is time to start again with a new infrastructure.