Axiom is a multi-device storage product that attempts to provide high-availability disk subsystems without the need for a rocket scientist as system admin. There are three rather interestingly named components to the system: the Pilot, the Slammer and the Brick.

We’ll start with the Brick – or the BRX500 as the front panel more calmly terms it. This is a 2U unit containing 13 S-ATA hard disks; twelve are split into two separate sets of six, with the thirteenth inserted from the back and acting as a hot spare for either array. Each six-disk set is preconfigured as a RAID5 volume – though in practice the user is completely oblivious to this fact – and Fibre Channel connectors link the unit to the rest of the system, most notably the Slammers. There are currently two flavours of Brick – one with 13 250GB disks and another with 13 500GB ones – and the company plans to introduce Fibre Channel disks later in the year.
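If you fancy checking the arithmetic, the sums are simple enough. Here’s a quick back-of-an-envelope sketch – the names and figures are my own illustration, not Pillar’s, and it assumes the usual RAID5 rule that each set gives up one disk’s worth of capacity to parity:

```python
# Rough capacity sums for a single Brick, assuming standard RAID5
# arithmetic (each six-disk set loses one disk's capacity to parity).
# Names and layout here are illustrative, not Pillar's own.

DISKS_PER_SET = 6    # each Brick holds two six-disk RAID5 sets
SETS_PER_BRICK = 2   # the 13th disk is a hot spare and adds no capacity

def brick_usable_gb(disk_gb: int) -> int:
    """Usable capacity of one Brick, before any volume-level mirroring."""
    per_set = (DISKS_PER_SET - 1) * disk_gb   # RAID5: n-1 data disks per set
    return SETS_PER_BRICK * per_set

print(brick_usable_gb(250))  # 2500 GB for the 250GB-disk Brick
print(brick_usable_gb(500))  # 5000 GB for the 500GB-disk Brick
```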

Next is the Slammer, or SLM500. This is a 4U unit that deals with the allocation of ones and zeroes to arrays within the Bricks. Each Slammer contains two Control Units (“CUs”), along with a pair of PSUs and a pair of fans, with the two halves kept separate – in fact all the CUs share is the box they reside in. Again, the Slammer has a bundle of FC connections for linking it to external devices – not just Bricks but also host computers (either directly or, more probably, via an FC switch infrastructure) or external tape drives if you want out-of-band backup facilities (the latter requires an add-on daughtercard for the CU to provide the necessary ports). Each Slammer can provide either NAS or SAN services – though you can’t do both at once on a given unit.

The Pilot is the control and administration centre of the system. It takes the form of a pair of Linux-based servers which host the server-end files for the browser-based configuration system. Although the Pilots handle the configuration and administration of the system, and are clustered for resilience, the day-to-day running of the system will carry on even if both Pilots die – a copy of the configuration is held on every device in the system, so all you lose in the event of a double-Pilot catastrophe is the ability to make changes.

The system is shipped such that a non-specialist can set it up. The cables and ports are both colour-coded and labelled with device and port numbers – which means that anyone with a brain can figure out that, say, the green cable for segment 1 on device 2 plugs into the green port for segment 1 on device 2. In a setup with up to 32 Bricks, a many-to-many connection approach provides resilience against device failure. Because the RAID setup of the Bricks is built in and preconfigured, there’s no drive formatting or RAID configuration to do – everything is set up via the GUI. When something goes wrong, the alerting system has the usual list of capabilities (on-screen alerts plus email, OpenView or SNMP prompting). As well as the GUI giving an excellent explanation of what’s broken, you can do handy stuff like having it flash the LEDs on the front and back panels of both the faulty device and the faulty component within that device. (Oh, and if the problem is power-related, you can tell it to flash everything that’s working properly – so the thing to swap out is the thing that’s not flashing!)

The Web GUI is clear and concise, and split into sensible chunks. To create a new NAS volume or SAN LUN, you just go to the appropriate screen and select “Create”. You’re asked for the obvious stuff like how big the volume should be and whether it’s to be allowed to grow automatically (and if so, in what increments). Because the system is designed to be approachable for non-specialists, quality-of-service parameters are split into four relative groups, from High at the top to Archive at the bottom; although “high” priority stuff will be organised onto preferable tracks on the disks, which will give a slight performance premium, the real difference comes in the queueing engine within the Slammer – each level of priority takes precedence over those below it. The NAS and SAN configuration screens allow you to define security parameters – so you can tell the SAN side which servers are allowed to see each LUN, and quite sensibly the NAS stuff can integrate with your AD or NIS directory service for access control.
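To make that precedence idea concrete, here’s a minimal sketch of strict-priority dispatch of the sort the Slammer’s queueing engine appears to implement. Note that the review only names the top and bottom tiers (High and Archive), so the two middle names below are my guesses, and the code is illustrative rather than anything Pillar has published:

```python
import heapq

# Strict-priority dispatch: a request at a higher tier is always served
# before anything at the tiers below it. "Medium" and "Low" are guessed
# tier names; only High and Archive are confirmed by the review.
TIERS = {"High": 0, "Medium": 1, "Low": 2, "Archive": 3}

class RequestQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker: keeps FIFO order within a tier

    def submit(self, tier: str, request: str) -> None:
        heapq.heappush(self._heap, (TIERS[tier], self._seq, request))
        self._seq += 1

    def dispatch(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = RequestQueue()
q.submit("Archive", "nightly backup read")
q.submit("High", "database write")
print(q.dispatch())  # "database write" wins, despite arriving second
```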

Incidentally, there’s a handy “simulation” feature that you can use to check what impact a new volume would have on the existing ones – you tell it what you’re intending to create and it’ll estimate the effect it’ll have on each of the entities that you’ve already defined.

In both the NAS and SAN setup screens, you can define the level of resilience you want for each volume. There are three options: Standard, Double and Triple. Although “Standard” sounds like it might mean “single”, and use just one disk set, this isn’t really the case – it actually uses four. So if you create a Standard volume, it’ll be held on four six-disk sets – two each on two Bricks – which will be plenty for many uses since in reality you’re mirroring data between multiple RAID5 arrays. The Double setting makes the system store everything on eight disk sets, while Triple uses twelve sets, over six Bricks. There’s only one tiny issue with resilience on volumes, by the way. Although a volume is presented by both CUs in a Slammer, so you can lose a CU without loss of service, in the (admittedly unlikely) case of both CUs in a Slammer dying, a quick manual intervention is required to tell another Slammer to serve that volume. This will be addressed in a future version, but it isn’t really a massive problem: there’s no data loss, and you’d be unlucky to lose both CUs in a Slammer.
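Putting those figures together, here’s a hedged sketch of what each redundancy setting costs you in hardware. The Brick spread for Double isn’t spelled out anywhere, so I’ve simply assumed the same two-sets-per-Brick pattern as the other levels:

```python
# Disk-set counts per redundancy level, taken from the review; the
# Brick count for "Double" is an assumption (two six-disk sets per Brick).
DISK_SETS = {"Standard": 4, "Double": 8, "Triple": 12}

def footprint(level: str, disk_gb: int = 250) -> dict:
    """Approximate hardware consumed by one volume at a given level."""
    sets = DISK_SETS[level]
    return {
        "disk_sets": sets,
        "bricks": sets // 2,           # assumed: two six-disk sets per Brick
        "spindles": sets * 6,
        "raw_gb": sets * 6 * disk_gb,  # raw capacity, before RAID5/mirroring
    }

print(footprint("Standard"))  # 4 sets, 2 Bricks, 24 spindles
print(footprint("Triple"))    # 12 sets, 6 Bricks, 72 spindles
```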

The licensing for the system is very simple, since you only pay once for the licence (effectively, the Pilot and its software). Adding Slammers or Bricks is just a case of buying the kit – there are no more user licences to buy until you hit the 64-Brick limit and have to start a new setup with a new Pilot pair. The only chargeable software option is the Volume Copy facility: the system includes the ability to make auto-snapshots and deal with disk-to-disk or disk-to-tape backups, but if you want to replicate entire volumes, you’ll need to spend a few quid more for the licence (unless you want to copy a volume to an “Archive” one – you get this thrown in for free).

Some storage specialists will be sceptical of this kind of kit because you can’t configure it to death. In my opinion, though, this is a massive benefit: the setup will be appropriate for the vast majority of storage installations, so it’s great to have the complicated, esoteric, rarely-fiddled-with stuff kept out of the way behind the scenes. As well as prioritising volumes relative to each other, you can configure a few other basic parameters, such as the approximate read/write ratio, the file access pattern (sequential, random or a mix of the two) and the typical read/write size – and this level of configuration ought to suffice in most cases.
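For illustration, here’s roughly what that handful of per-volume tunables amounts to, expressed as a sketch. The field names, defaults and value ranges are my own shorthand for the knobs the GUI exposes, not Pillar’s actual API:

```python
from dataclasses import dataclass

# A sketch of the per-volume tunables described above. Everything here
# (names, defaults, units) is illustrative shorthand, not Pillar's API.
@dataclass
class VolumeProfile:
    priority: str = "Medium"        # one of the four QoS tiers
    read_write_ratio: float = 0.7   # fraction of I/O expected to be reads
    access_pattern: str = "mixed"   # "sequential", "random" or "mixed"
    typical_io_kb: int = 64         # typical read/write size, in KB

# A transactional database volume might look like this:
db_volume = VolumeProfile(priority="High", read_write_ratio=0.9,
                          access_pattern="random", typical_io_kb=8)
```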

The Axiom suite is a high-availability storage system that any decent system manager should be able to comprehend with no need for long training courses or expensive vendor intervention, and is well worth a look for anyone charged with building a corporate storage subsystem.

OUR VERDICT

Pillar's marketing people claim that the ability to add extra Bricks and Slammers without paying for more user licences makes their pricing comparable with other vendors for the initial purchase, but means it ramps up less when the time comes to expand.