Sun Microsystems is preparing to launch its StorageTek 5800 content-aware storage system. Its NAB (National Association of Broadcasters) 2006 show card lists the StorageTek 5800, describing its attributes as:
- horizontally scalable
- low-cost clustered storage
- extensive metadata and fast search
- self-healing, self-balancing
- dynamically-reduced management cost.
More correctly, it should probably be called the Sun StorEdge 5800.
What appears to have happened is that Sun has pulled the product from the show and will announce it next week at its next scheduled quarterly product announcement event. It will come hot on the heels of McNealy's stepping down as CEO and Jonathan Schwartz' installation to that position.
This will be Sun's first major 'home-grown' storage product announcement since acquiring StorageTek last year. It will demonstrate that Sun is an equal partner, in product terms, to the StorageTek team now ensconced in Sun's Data Management Group.
Honeycomb has had a quite long and public gestation with Sun mentioning it in 2005. Dick Sillman's team, which developed it, started work in 2003, eventually receiving a chairman's award for their project two and a half years later. It was handed over from the development labs to Sun's network storage business in December, 2004. It became the responsibility of Fidelma Russo, now VP of network storage product development.
Originally it was forecast to be revenue-earning by the end of 2005, but delays have occurred.
Sillman described Honeycomb as complex software and commodity hardware. He said: "What we do is we take an object and slice it into fragments. Then we have a very clever, patented placement algorithm that distributes the fragments throughout the array in such a way that data can always be regenerated, even if multiple components fail."
This implies that Honeycomb can cope with multiple drive failures, a RAID 6 type situation.
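Sun has not published the placement algorithm Sillman mentions, but the general technique he describes — slice an object into k data fragments plus m parity fragments, spread the n = k + m pieces across different disks, and rebuild from any k survivors — is standard erasure coding. A toy Reed-Solomon-style sketch over the prime field GF(257), for illustration only:

```python
# Toy erasure code: k data fragments plus m parity fragments; any k of
# the n = k + m fragments suffice to regenerate the data. This is a
# textbook sketch, not Sun's patented placement algorithm.
P = 257  # small prime > 255, so every byte value fits in the field

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x (mod P)."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(data, m):
    """Return n = len(data) + m fragments; the first len(data) carry the data."""
    pts = list(enumerate(data))
    k = len(data)
    return pts + [(x, lagrange_eval(pts, x)) for x in range(k, k + m)]

def decode(survivors, k):
    """Rebuild the k original data bytes from any k surviving fragments."""
    pts = survivors[:k]
    return [lagrange_eval(pts, i) for i in range(k)]

data = [72, 111, 110, 101, 121]   # the bytes of "Honey", k = 5
frags = encode(data, 2)           # n = 7 fragments, tolerates 2 failures
frags.pop(1); frags.pop(3)        # lose two fragments ("drive failures")
print(decode(frags, 5))           # -> [72, 111, 110, 101, 121]
```

With m = 2 parity fragments the scheme survives any two simultaneous losses, which matches the RAID 6-like behaviour implied above; production systems work over GF(2^8) for byte alignment rather than a prime field.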
The slicing into fragments is reminiscent of EMC's Centera and other fixed-content type stores, also of HP's RISS. There is no mention of de-duping or single instance storage but it would be surprising if that facility was not present.
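The source does not confirm that Honeycomb does single-instance storage; for reference, the generic technique Centera popularised is content addressing, where an object's hash is its storage address, so storing identical content twice consumes space only once. A minimal sketch:

```python
# Sketch of single-instance (content-addressed) storage: the object's
# hash is its address, so duplicate puts are no-ops. Illustrative only;
# not a description of Honeycomb's actual internals.
import hashlib

class SingleInstanceStore:
    def __init__(self):
        self._objects = {}  # content hash -> bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(key, data)  # duplicate puts store nothing new
        return key                           # the "content address"

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = SingleInstanceStore()
a = store.put(b"broadcast master tape")
b = store.put(b"broadcast master tape")  # same content, same address
print(a == b, len(store._objects))       # -> True 1
```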
Honeycomb is resilient and scalable. Sillman said: "Honeycomb software always knows which processors and disks are healthy and which are no longer responding. Then, as new tasks come in, it assigns those tasks in parallel - which makes searching and retrieving files from giant data stores incredibly quick." He also said: "The software says, 'Hey, look, more capacity. Cool. I know what to do with this.'"
"With Honeycomb, you take it out of the box, plug it in, and it's ready to store stuff. There's a lot less installing of this and configuring of that. And when you run out of space, you order up some more stuff from Sun, then you turn that stuff on and they all find each other."
The task assignment and general operation is described thus: "It's all about clustered replication, which means piles and piles of computers that know how to share work."
Processing power has been built in alongside the drives, so Honeycomb can be viewed as a simple storage appliance built from three components: power supply, disk drives and multiple Opteron processors. There is no single point of failure. Sillman compares Honeycomb's resilience to Christmas tree decorations: "If you lose a couple of bulbs over the course of the season, it still looks like a Christmas tree."
The idea is that Honeycomb does information searching on the appliance, using metadata generated from the stored objects. Sillman said the approach was to ask: "'What if we embedded a bunch of horsepower right alongside the drives?' So, instead of trying to move large data sets to supercomputers, you can actually crunch the data from within the storage array."
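The shape of such on-appliance searching is a metadata index queried where the data lives, rather than objects shipped out to a separate compute tier. A sketch with invented field names (the source does not specify Honeycomb's metadata schema or query interface):

```python
# Sketch of metadata-driven search: each stored object carries
# name/value metadata, and queries run against that index rather than
# by scanning the objects themselves. Field names are invented.
objects = {
    "oid-1": {"codec": "mpeg2", "show": "news",  "duration_s": 1800},
    "oid-2": {"codec": "dv",    "show": "sport", "duration_s": 5400},
    "oid-3": {"codec": "mpeg2", "show": "sport", "duration_s": 2700},
}

def query(predicate):
    """Return the IDs of objects whose metadata satisfies the predicate."""
    return [oid for oid, meta in objects.items() if predicate(meta)]

print(query(lambda m: m["codec"] == "mpeg2" and m["duration_s"] > 2000))
# -> ['oid-3']
```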
A side benefit of all the processing power is much reduced rebuild time if components fail: "As disk drives get bigger and bigger, now you're losing 500 gigabytes with one drive failure. So when there has been a failure and now it's time to take the parity fragments and regenerate data on the fly, we're able to show a massive improvement. By using multiple Opteron processors to work this problem in parallel, we can rebuild in an hour what used to take eight hours."
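The eight-to-one claim is simple parallel arithmetic, sketched below with an assumed per-processor regeneration rate chosen to make one processor take roughly eight hours (an idealisation — real rebuilds are also bottlenecked by disk and network bandwidth):

```python
# Back-of-envelope arithmetic for the rebuild claim above. The
# per-processor rate is an assumption for illustration, and work is
# assumed to divide cleanly across processors.
def rebuild_hours(drive_gb, rate_mb_s, n_procs):
    """Hours to regenerate a failed drive's data at rate_mb_s per processor."""
    return drive_gb * 1024 / (rate_mb_s * n_procs) / 3600

print(round(rebuild_hours(500, 17.8, 1), 1))  # one processor -> 8.0 hours
print(round(rebuild_hours(500, 17.8, 8), 1))  # eight in parallel -> 1.0 hour
```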
Sun will position Honeycomb for the compliance and legal discovery markets. In many ways the rise of compliance is a godsend for the company. The product will be compared with, and have to stand up against, EMC's Centera, HP's RISS and NetApp's Kazeon-influenced product strategy.
The StorageTek 5800 is a kind of super-NAS. We might expect it to have CIFS and NFS protocol support and also an API so that Sun development partners can build applications around it.
It is unclear whether Sun will position it as a fixed-content or reference-type information store or build it into a more general-purpose file store. The bigger theoretical market is the general-purpose one with a single logical archive of information to be searched.
For Schwartz the trick will be to get money from honey and use it to help rebuild Sun's finances. Expect lots of buzzing noises as Sun's sales reps fly around trying to pollinate customers with the StorageTek - or StorEdge - 5800 story.