Researchers at Carnegie Mellon University and Intel Labs have built an experimental energy-efficient computing cluster that combines flash memory and the sort of processors used in netbooks. Their name for it? Fast Array of Wimpy Nodes (FAWN).
The project, which the researchers detail in a paper presented at the Association for Computing Machinery's Symposium on Operating Systems Principles, provides massively parallel access to data by balancing I/O and computational resources. The researchers say it can handle 10 to 100 times as many queries as a typical disk-based cluster while using the same amount of energy.
The FAWN network boasts 21 nodes, featuring processors such as Intel's Atom and 4GB CompactFlash cards, and at peak utilisation draws less power than a 100-watt light bulb. Each node can serve up to 1,300 256-byte queries per second, according to the paper.
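Those figures allow a rough back-of-the-envelope check of the cluster's efficiency in queries per joule, the metric the FAWN paper emphasises. The numbers below come straight from the article; treating 100 watts as an upper bound on power makes the result a conservative floor:

```python
# Back-of-the-envelope arithmetic using the article's figures (not measured
# data): 21 nodes, up to 1,300 queries/sec each, under 100 W at peak.
nodes = 21
queries_per_node = 1300     # 256-byte queries per second, per node
power_watts = 100           # upper bound on whole-cluster power draw

total_qps = nodes * queries_per_node
queries_per_joule = total_qps / power_watts

print(total_qps)            # 27300 queries/sec across the cluster
print(queries_per_joule)    # 273.0 queries per joule, at minimum
```

Since 100 W is an upper bound, the real cluster delivers at least 273 queries per joule, which is the kind of figure behind the "10 to 100 times" comparison with disk-based clusters.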
"FAWN systems can't replace all of the servers in a data centre, but they work really well for key value storage systems, which need to access relatively small bits of information quickly," said David Andersen, CMU assistant professor of computer science, in a statement. He leads the project along with Michael Kaminsky, senior research scientist at Intel Labs Pittsburgh.
Such energy-efficient data centre technology is important, given that organisations are squeezing more and more hardware into data centres while trying to keep energy usage and costs under control.
The National Science Foundation, along with Google, Intel and Network Appliance, is backing the project financially.
The FAWN paper, recognised as the best paper at the ACM event, is but one of many projects discussed at the symposium.
Other eye-catchers include:
• RouteBricks: Exploiting Parallelism to Scale Software Routers, largely an Intel Research project
• Automatically Patching Errors in Deployed Software, by researchers from VMware, MIT and other organisations
• Fabric: A Platform for Secure Distributed Computation and Storage, out of Cornell University