When we need to book a flight these days, most of us go online and buy a ticket with a few clicks. The days when we had to drive to a travel agent, discuss our plans and pick up a physical ticket now seem like a hopelessly inefficient way to do business.

Yet this is how many companies still handle their data processing. They place their data far from the processor, behind several layers of barriers that make access to information slow and unpredictable.

Recently, enterprises have been integrating new technologies such as NAND flash to boost speed and cut latency, but many apply legacy methods to a technology that is far better suited to a fresh approach. This can leave an unsuspecting customer getting far less out of flash than they hoped.

Solid-state storage offerings that integrate NAND flash like traditional disk systems place data far away from the CPU, often behind an outdated storage controller. It is like having flight data online but still having to drive to the travel agent to book and buy the ticket. No matter how fast the NAND is, this setup creates latency, so the end application sees only small improvements in actual throughput.

Enterprises can position themselves for success by keeping a few pointers in mind.

The pain of the past: disk drive acrobatics

Most people are aware of the speed limitations of disk drives compared to CPUs. Less well known are the acrobatics administrators go through to configure disk drives for performance: buying expensive Fibre Channel disk drives and arranging them in complex schemes that use only a fraction of each drive platter to boost performance. That means adding stacks of disks with largely unused capacity that administrators must monitor for failure. And this doesn't even begin to address the costs of power, cooling and space for these systems.

But even with these acrobatics, disks can barely meet required performance levels because external disk storage systems sit so far from the CPU. While CPUs and memory operate in microseconds, access to external disk-based systems happens in milliseconds, a thousandfold difference. Even when a disk system can pull data quickly, getting the data to and from the CPU incurs a long latency, leaving CPUs to spend much of their time waiting for data. This hurts application and database performance.
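To make that waiting concrete, here is a back-of-the-envelope sketch in C. The figures are illustrative assumptions only: a 3 GHz core, roughly 5 ms for an external disk access and roughly 50 microseconds for a flash access, not measurements of any particular product.

    /* Hypothetical comparison: CPU cycles spent idle while waiting for
     * one storage access. All figures are assumed round numbers. */
    #include <stdio.h>

    int main(void)
    {
        const double cpu_hz        = 3.0e9;   /* assumed 3 GHz core      */
        const double disk_latency  = 5.0e-3;  /* ~5 ms external disk I/O */
        const double flash_latency = 50.0e-6; /* ~50 us flash access     */

        printf("Cycles idle per disk access:  %.0f\n", cpu_hz * disk_latency);
        printf("Cycles idle per flash access: %.0f\n", cpu_hz * flash_latency);
        return 0;
    }

At these assumed figures, a single disk access leaves the core idle for about 15 million cycles, versus roughly 150,000 cycles for a flash access.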

Flash in a disk-based architecture: still lots of latency

If you consider flash as a new medium, then implementing it the same way you implemented previous media technologies like tape and disk drives is only a small part of the way forward.

Flash removes the part of the latency bottleneck caused by slow spinning disk drives, but on its own it does nothing to fix the delay in getting process-critical data to and from the CPU.

Storing data in a flash array puts process-critical data on the wrong side of the storage channel, far away from the server CPU that processes application and database requests.

Not only are performance gains minimal, but organisations must also add more hardware and implement complex, costly storage infrastructure, including host bus adapters, switches and monolithic arrays.

But most importantly, these architectures retain traditional storage implementations, including RAID and SATA/SAS controllers, all of which are optimised for spinning drives, not NAND flash silicon.

Making progress: location, location, location - and the right controllers

Solid-state vendors are recognising that the key to realising improved performance is putting flash close to the CPU, so they are creating devices that use PCIe natively, without the inhibitors of outdated translation layers.

However, some of these devices limit performance by placing the flash under the control of legacy SATA or SAS controllers that were, you guessed it, originally designed for disks. They were never intended to operate with NAND flash, so it is no surprise that they fail to exploit its capabilities. It's like putting a Mercedes engine into a 25-year-old clunker.

RAID controllers present the same pitfalls. Originally designed to aggregate the performance of multiple disks and protect against individual disk failures, conventional RAID mechanisms work well for spinning media, but they add too much latency to suit NAND flash.

The ideal way to integrate flash in a server is referred to as native PCIe access. It sets legacy storage technologies aside; a cut-through architecture provides the most direct, most accessible and lowest-latency path between the NAND flash and host memory.

CPUs never read information directly from storage; everything must pass through system memory first. To assist in the process, native PCIe NAND flash devices present storage to the application or database like a disk drive, but they deliver the data to system memory via Direct Memory Access (DMA). This ensures the lowest-latency transactions between data storage and CPU processing.
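As an illustration only, the C sketch below shows how an application on a Linux host might read a block from a PCIe flash device that the driver presents as an ordinary block device. The device path /dev/fioa and the block size are hypothetical placeholders. Opening with O_DIRECT asks the kernel to transfer the data straight into the application's aligned buffer rather than staging it in the page cache.

    /* Illustrative sketch: read one block from a PCIe flash device exposed
     * as a block device. O_DIRECT requests that the data be delivered
     * directly into the aligned user buffer, bypassing the page cache.
     * Device path and sizes are hypothetical placeholders. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        void *buf = NULL;

        /* O_DIRECT requires a buffer aligned to the device's sector size. */
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) {
            perror("posix_memalign");
            return 1;
        }

        int fd = open("/dev/fioa", O_RDONLY | O_DIRECT); /* hypothetical device */
        if (fd < 0) {
            perror("open");
            free(buf);
            return 1;
        }

        ssize_t n = read(fd, buf, BLOCK_SIZE); /* data lands in buf directly */
        if (n < 0)
            perror("read");
        else
            printf("Read %zd bytes directly into application memory\n", n);

        close(fd);
        free(buf);
        return 0;
    }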

By offering server CPUs unrestricted access to flash, native PCIe implementations can increase application and database performance by up to 10X. It's important to note that the difference between this cut-through architecture and other solid-state approaches lies in improved application throughput, not just raw media performance. Placing data in the server without legacy storage protocols lets applications fully utilise server CPUs, because they no longer wait on slow access to data.

Advanced architecture for advanced performance

Flash technology offers incredible potential for speeding up enterprise applications and databases. But when flash is treated as just a new kind of disk drive, businesses miss the mark in delivering on its full capabilities.

To deliver on flash’s promise for enterprise, native PCIe approaches leave the legacy disk protocols behind and place process-critical data near the CPU to minimise latency.


By Gary Orenstein, vice president of products at Fusion-io. Follow Gary on Twitter @garyorenstein and Fusion-io via @fusionioUK.