Following up on work commissioned by the U.S. Defense Advanced Research Projects Agency (DARPA), IBM has developed a programming paradigm, and associated simulator and basic software library, for its experimental SyNAPSE processor.
The work suggests the processors could be used for extremely low-power yet computationally powerful sensor systems.
"Our end goal is to create a brain in a box," said Dharmendra Modha, an IBM Research senior manager who is the principal investigator for the project. With this technology, systems could one day be built that would "mimic the brain's ability for perception, action and cognition," he said.
The work is a continuation of a DARPA project to design a system that replicates the way a human processes information.
DARPA's original goal for the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project was to design computational devices composed of billions of tiny processor cores packed into the volume of a two-liter bottle, using less energy than a light bulb.
At the International Joint Conference on Neural Networks this week in Dallas, IBM is demonstrating the third phase of the project, which thus far DARPA has funded with approximately US$53 million. IBM is working with Cornell University and iniLabs, and has collaborated with six other universities and a number of government supercomputing facilities as well.
The chips represent a radical break from today's von Neumann computing architecture, in which computations are made quickly but in a serial fashion. This model instead uses many low-power processor cores working in parallel.
This chip architecture replicates how the human brain works, in that each "neurosynaptic core" has its own memory ("synapses"), a processor ("neuron"), and communication conduit ("axons"), which all operate together in an event-driven fashion, according to IBM. By working together, these cores could provide nuanced pattern recognition and other sensing capabilities, in much the same way a brain does.
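The event-driven operation IBM describes can be pictured with a small sketch. The class below is purely illustrative (the names, weights, and threshold are assumptions, not IBM's actual design or API): a "neuron" integrates weighted spikes arriving on its "axons" through its "synapses," and computes only when an event arrives rather than on a fixed clock.

```python
# Illustrative sketch of one event-driven "neurosynaptic core".
# All names and values are hypothetical, not IBM's implementation.

class NeurosynapticCore:
    def __init__(self, weights, threshold=1.0):
        self.weights = weights      # per-axon synaptic weights ("synapses" = memory)
        self.threshold = threshold  # firing threshold of the "neuron" (processor)
        self.potential = 0.0        # accumulated membrane potential

    def on_spike(self, axon):
        """Event-driven update: runs only when a spike arrives on an axon."""
        self.potential += self.weights[axon]
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # emit an output spike
        return False

core = NeurosynapticCore(weights=[0.6, 0.5])
print(core.on_spike(0))  # False: 0.6 < 1.0, no spike emitted
print(core.on_spike(1))  # True: 0.6 + 0.5 >= 1.0, the core fires
```

Because nothing happens between events, a core like this sits idle, and consumes little power, until input arrives, which is the property that makes the architecture attractive for sensing.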
IBM is unveiling a software ecosystem at the conference that can be used with these processors.
In particular, IBM is unveiling a simulator that can run a virtual network of neurosynaptic cores for testing and research purposes. IBM is also introducing a neuron model to represent how the processor core operates, that is, how it senses, remembers and acts upon a variety of input.
The company is also showing off a programming model based on reusable and stackable building blocks, called corelets. The corelet acts as the atomic unit of this neural computing model, in which inner workings of a corelet are hidden and the programmer knows only of its inputs and outputs. "The programmer only sees wires going in and wires coming out," Modha said.
Each corelet is in fact a tiny neural network itself and can be combined with other corelets to build functionality. "One can compose complex algorithms and applications by combining boxes hierarchically," Modha said.
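The black-box composition Modha describes can be sketched in a few lines. The code below is a loose analogy, not IBM's corelet language: each "corelet" hides its internals behind a callable interface, so a programmer sees only wires in and wires out, and corelets combine hierarchically into larger ones. The names and toy functions are assumptions for illustration.

```python
# Hypothetical sketch of the "corelet" abstraction: hidden internals,
# visible input/output wires, hierarchical composition.

class Corelet:
    def __init__(self, name, fn):
        self.name = name
        self._fn = fn               # internals are hidden from the programmer

    def __call__(self, inputs):
        return self._fn(inputs)     # only wires in, wires out are visible

def compose(name, first, second):
    """Build a larger corelet by wiring one corelet's outputs to another's inputs."""
    return Corelet(name, lambda inputs: second(first(inputs)))

# Two toy corelets: one thresholds incoming signals, one counts active wires.
edge = Corelet("edge", lambda xs: [1 if x > 0.5 else 0 for x in xs])
count = Corelet("count", lambda xs: sum(xs))

detector = compose("detector", edge, count)
print(detector([0.9, 0.2, 0.7]))  # 2: two of the three input wires were active
```

A library of such building blocks, like the 150 corelets IBM reports, would let developers assemble applications without ever touching the neural networks inside each box.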
IBM researchers have already composed 150 corelets, which have been captured in a program library. The company has also developed a teaching curriculum, application library and prototype designs for the new architecture.
Modha said that this style of computing is not designed to replace today's computers. "Today's computers are great for analytical processing, symbolic processing and number crunching," he said.
Rather, SyNAPSE chips could be used to one day build complementary devices that would excel at low-power sensing at the edges of a network. Unlike today's sensors, SyNAPSE systems could do a lot of the initial computations needed to recognize patterns.
"We are populating the Earth and space with sensors, cameras and microphones, and then moving [the data they create] to the data center. Data is going to computation. But with the low-power, brain-like capabilities of our chips, and the ability to do pattern recognition, we can move intelligent computation back to the edge," Modha said. "The sensor becomes the computer."
For instance, these smart sensors could be used to build glasses for the visually impaired, which would collect sensory information and translate it into a form that the user could understand. Or jellyfish-like sensory buoys could be built that would float on the ocean and collect a wide range of information, such as temperature, air pressure, humidity, and also conduct duties such as tsunami monitoring and shipping lane enforcement, Modha said.