The Honeycomb Storage System is a radical new approach to managing large collections of reference data. Designed to be ultra-low cost and highly available, and to be simple to install, manage, and grow over time, Honeycomb uses advanced clustering techniques to build large systems from inexpensive components. Further cost savings come from highly automated installation, capacity upgrades, and built-in resource balancing. A "fail-in-place" component strategy allows the system to degrade gracefully until the next periodic refresh occurs.
As storage capacity is added, more Opteron processors join the cluster to work in parallel on search, retrieval, scrubbing, and data rebuild tasks. Parallel processing greatly reduces the "window of vulnerability" and removes the urgency around servicing failed disks. Honeycomb also eliminates the need for costly object database licenses and dedicated "namespace front-ends" by including an embedded, extensible metadata system that provides flexible, ad hoc search capabilities, using RAM-based indexing for high-performance retrieval of stored objects.
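To make the embedded metadata idea concrete, here is a minimal sketch of a RAM-based inverted index supporting ad hoc attribute queries. This is purely illustrative: the class and method names (`MetadataIndex`, `put`, `query`) and the query semantics are assumptions, not Honeycomb's actual API.

```python
from collections import defaultdict

class MetadataIndex:
    """Illustrative in-memory inverted index over object metadata."""

    def __init__(self):
        # (attribute, value) -> set of object IDs holding that pair
        self._index = defaultdict(set)
        self._objects = {}

    def put(self, oid, metadata):
        """Store an object's metadata and index every attribute/value pair."""
        self._objects[oid] = dict(metadata)
        for attr, value in metadata.items():
            self._index[(attr, value)].add(oid)

    def query(self, **criteria):
        """Ad hoc search: return IDs of objects matching all criteria."""
        if not criteria:
            return set(self._objects)
        posting_lists = [self._index[(a, v)] for a, v in criteria.items()]
        # Intersect the smallest posting lists first to keep the
        # intermediate result sets small.
        posting_lists.sort(key=len)
        result = set(posting_lists[0])
        for s in posting_lists[1:]:
            result &= s
        return result

# Example: medical-imaging style metadata, one of the workloads
# mentioned above.
idx = MetadataIndex()
idx.put("obj-1", {"patient": "A12", "modality": "MRI"})
idx.put("obj-2", {"patient": "A12", "modality": "CT"})
print(idx.query(patient="A12", modality="MRI"))  # {'obj-1'}
```

Because the whole index lives in RAM and queries reduce to set intersections, lookups avoid disk seeks entirely, which is the performance argument behind a memory-resident metadata layer.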
From the Beginning . . .
Project Honeycomb began 2.5 years ago with the observation that disk costs were plummeting while power consumption and IT management costs were trending ever higher, accompanied by mushrooming demand for cataloging write-once data from digital security cameras, financial record regulations, medical imaging, satellite photography, and health sciences research. After six months of early research and modeling, a rough proof-of-concept prototype at the CTO open house led to Sun Labs funding an advanced development project.
Dick Sillman and his staff quickly assembled a team of twenty researchers with clustering and storage backgrounds who shared a passion for quick decisions, simplicity in design, and delivering innovative products that delight customers. They conducted over sixty customer interviews, recorded hundreds of pages of customer input, and invented new approaches to data integrity, high availability, protocols that scale efficiently, load balancing, and self-healing systems. Over fifteen patents were filed on their object placement algorithms, autonomous cluster management, and low cost load balancing.
Once the detailed design specs were written, reviewed, and cross-checked by senior technical staffers in Sun's DE and Fellow community, rolling iterations of the system were built, unit tested, QA'd, and revised. During this 14-month period, two versions of the system were deployed in field trials to verify and correct initial design assumptions and learn exactly how real customers would use the features. Many conversations with field sales reps, support engineers, and storage practice experts were incorporated into the final feature mix.
What comes next . . .
Honeycomb's groundbreaking advances in low cost, high availability, scalable storage application servers would be most logically delivered to customers by Sun's Network Storage business unit led by Mark Canepa. In December 2004, fifteen of the original development team were transferred from Sun Labs to Network Storage under the direction of Fidelma Russo, VP of Engineering. At this time, the program is ready for Phase II exit in PLC terms, and is moving steadily towards a revenue release target at the end of this year. Honeycomb is a classic example of innovative thinking being applied to real customer problems, and it puts Sun in an excellent position to shake up the storage marketplace with this carefully crafted, superior solution.
There's more on Honeycomb here. Whether Honeycomb will shake up the storage market is an open question. There are a lot of reference data storage systems emerging. Sun is rapidly rebuilding its storage product line into a much stronger offering, and Honeycomb is part of this. It's going to be very interesting indeed to see what is announced towards the end of the year.