Not all storage area networks (SANs) are created equal. A German academic customer of Dell is using its AX100-based SAN as a kind of temporary holding store rather than as a high-speed consolidated disk store for servers. These servers thunder through their application loads using their own direct-attached storage and it's the results that are consolidated onto the SAN.

The background to this is numerical simulation at the University of Bonn. Instead of running expensive, complex and time-consuming experiments to investigate what turbulence occurs near sluice gates in a watercourse, or how droplets form in a rocket fuel tank, the situation can be simulated and the numbers processed to work out what happens and when.

The university's mathematicians also analyse option price information in the financial arena and run data mining applications.

These numerical simulations and allied applications are highly processor-intensive. Traditionally a parallel processing approach has been used, with clusters of Linux/X86 commodity servers working on the task. That was what happened at Bonn: in 1999 it implemented a clustered set of 72 X86 desktops running Linux.

That has now been changed. Why? The University of Bonn had to update this original cluster because the cooling system couldn't cope with the load and failed repeatedly. Also the cluster server processors' performance couldn't keep up with the growing processing burden of the complex algorithms being developed.

So now it has implemented a clustered set of 128 Dell PowerEdge 1850 servers, each with dual processors, running Linux. All the servers are integrated into a single network accessed by a dual-processor Dell 2800 server. So what's new, when all it seems to have done is to have updated its cluster processors in both power and numbers?

From the storage angle, interestingly, each server has its own direct-attached storage, yet there is also a 3TB SAN using a Dell/EMC AX100 array with serial ATA (SATA) drives - hardly what we think of for a SAN, which typically uses faster drives and a much larger capacity configuration. Twelve SATA drives in a box is not a typical SAN.

What is going on? Each of the cluster servers runs its part of a numerical simulation and stores its results on its own DAS. Each cluster server's set of interim results is then transferred to the SAN to hold the complete results.

The SAN is not being used to hold data for processing by the cluster processing servers and feed them the disk blocks of data they need at high speed, nor to store their interim results. Instead it is being used as a kind of staging area to collect the individual servers' results together.
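The staging pattern described above can be sketched in a few lines. This is a minimal illustration, not Bonn's actual tooling: the paths, the `.dat` extension and the `stage_results` function are all hypothetical, standing in for whatever mechanism copies each node's finished results from its local disks to a per-node folder on the shared SAN volume.

```python
import shutil
from pathlib import Path

def stage_results(das_dir: Path, san_dir: Path, node_id: str) -> list[Path]:
    """Copy one node's finished result files from its direct-attached
    storage (das_dir) to a per-node folder on the shared SAN volume.

    Hypothetical sketch of the staging step: the cluster node computes
    against fast local disk, then consolidates results onto the SAN.
    """
    dest = san_dir / node_id
    dest.mkdir(parents=True, exist_ok=True)
    staged = []
    for f in sorted(das_dir.glob("*.dat")):
        # copy2 preserves timestamps, useful when collating runs later
        staged.append(Path(shutil.copy2(f, dest)))
    return staged
```

The point of the design is that the hot, random I/O happens on each node's own DAS; only a single sequential copy per node ever touches the SAN, which is why a modest twelve-drive SATA array is enough.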

In information lifecycle management (ILM) terms the cluster servers' DAS is the primary online data store with the SAN being a secondary tier. We might almost class it as nearline storage.

Despite this being a low-powered SAN setup, the new Bonn cluster is rated number 428 among the world's fastest supercomputers, delivering 1,269 gigaflops. (A gigaflop is one billion floating-point operations per second.) This is forty times better than the original cluster of 72 Linux desktops.
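A quick back-of-envelope check on those figures: dividing the new cluster's rated throughput by the quoted forty-fold improvement implies roughly what the 1999 cluster of 72 desktops could manage. The figures come from the article; the implied original throughput is an inference, not a published number.

```python
# Figures quoted in the article
new_cluster_gflops = 1269   # 128 dual-processor PowerEdge 1850 nodes
speedup = 40                # "forty times better" than the 1999 cluster

# Implied throughput of the original 72-desktop cluster
old_cluster_gflops = new_cluster_gflops / speedup
print(f"Implied 1999 cluster throughput: ~{old_cluster_gflops:.0f} gigaflops")
```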

So if you are an organisation that doesn't have a huge amount of budget, yet you need to run heavily processor-dependent applications needing fast data access, plus some storage platform to consolidate the results on, think inexpensive Dell SAN. Like the straightforward Ronseal ad, a Dell AX100 SAN does what it says on the can.