Scientists have tapped into neuromorphic computing to help robots learn about new objects after they are deployed.
For the uninitiated, neuromorphic computing replicates the neural structure of the human brain to create algorithms that can deal with the uncertainties of the natural world.
Intel Labs has developed one of the most notable architectures in the field: the Loihi neuromorphic chip.
Loihi is made up of about 130,000 artificial neurons, which send information to each other via a “spiking” neural network (SNN). The chips have already powered a range of systems, from a smart artificial skin to an electronic “nose” that recognizes odors emitted by explosives.
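As a rough intuition for how such a network communicates, the sketch below simulates a single leaky integrate-and-fire neuron, the kind of unit spiking networks are typically built from. The parameters and function name are illustrative only and do not reflect Loihi’s actual hardware model or API.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the basic unit a
# spiking neural network is built from. Parameters are illustrative,
# not Loihi's hardware model.
def simulate_lif(input_current, threshold=1.0, decay=0.9, dt=1.0):
    """Return the spike train produced by a single LIF neuron."""
    voltage = 0.0
    spikes = []
    for current in input_current:
        voltage = decay * voltage + current * dt  # leak, then integrate the input
        if voltage >= threshold:                  # threshold crossed: emit a spike
            spikes.append(1)
            voltage = 0.0                         # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input drives the neuron to spike periodically.
print(simulate_lif([0.3] * 20))
```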
Intel Labs unveiled a new application this week. The research unit collaborated with the Italian Institute of Technology and the Technical University of Munich to deploy Loihi in a new approach to continuous learning for robotics.
Interactive learning
The method targets systems that interact with limitless environments, such as future robotic assistants for healthcare and manufacturing.
Existing deep neural networks may have difficulty learning objects in these scenarios, as they require extensive, well-prepared training data — and careful retraining on new objects they encounter. The new neuromorphic approach aims to overcome these limitations.
The researchers first implemented an SNN on Loihi. This architecture localizes learning to a single layer of plastic synapses. It also accounts for different views of objects by adding new neurons on demand.
As a result, the learning process unfolds autonomously in interaction with the user.
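To make the idea concrete, here is a minimal, non-spiking sketch of what a single plastic layer that adds neurons on demand can look like: each stored weight vector stands in for a neuron, a close enough input nudges the best-matching neuron (plasticity), and a poor match recruits a new one. The class, thresholds, and update rule are hypothetical stand-ins, not the paper’s SNN implementation.

```python
import numpy as np

class OnDemandPrototypeLayer:
    """Illustrative single plastic layer that recruits a new neuron when an
    input matches no stored representation well enough. This mirrors the idea
    of on-demand neuron allocation, not the study's actual SNN."""

    def __init__(self, similarity_threshold=0.85, learning_rate=0.1):
        self.weights = []  # one weight vector ("neuron") per learned view
        self.labels = []   # object label associated with each neuron
        self.similarity_threshold = similarity_threshold
        self.learning_rate = learning_rate

    def _similarity(self, x, w):
        # Cosine similarity between an input pattern and a neuron's weights.
        return float(np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w) + 1e-9))

    def observe(self, x, label):
        """Learn a pattern: update the best-matching neuron or add a new one."""
        if self.weights:
            sims = [self._similarity(x, w) for w in self.weights]
            best = int(np.argmax(sims))
            if sims[best] >= self.similarity_threshold:
                # Plastic synapses: nudge the winning neuron toward the input.
                self.weights[best] += self.learning_rate * (x - self.weights[best])
                return best
        # No neuron matches well enough: recruit a new one for this view.
        self.weights.append(np.asarray(x, dtype=float).copy())
        self.labels.append(label)
        return len(self.weights) - 1

    def recognize(self, x):
        """Return the label of the best-matching neuron, or None if unknown."""
        if not self.weights:
            return None
        sims = [self._similarity(x, w) for w in self.weights]
        best = int(np.argmax(sims))
        return self.labels[best] if sims[best] >= self.similarity_threshold else None
```

Because new representations are only created when nothing stored fits, the layer can keep learning after deployment without retraining everything it already knows.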
Neuromorphic simulations
The team tested their approach in a simulated 3D environment. In this setup, the robot actively senses objects by moving an event-based camera that acts as its eyes.
The camera’s sensor “sees” objects in a way that is inspired by tiny fixating eye movements called “microsaccades.” If the object it is viewing is new, the SNN representation is learned or updated. If the object is known, the network recognizes it and gives the user feedback.
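The event-based sensing step can be pictured with a toy model: an event camera reports only brightness changes, so a perfectly still view of a static object yields nothing, while a small microsaccade-like shift of the sensor re-exposes the object’s edges and produces events. The code below is a simplified illustration under that assumption, not the camera model used in the study.

```python
import numpy as np

def events_from_shift(image, dx, dy, threshold=0.1):
    """Return (row, col, polarity) events caused by shifting the view."""
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    diff = shifted - image
    rows, cols = np.where(np.abs(diff) > threshold)
    return [(r, c, int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

# A simple square "object" on a dark background.
scene = np.zeros((32, 32))
scene[12:20, 12:20] = 1.0

# No motion -> no events; a one-pixel microsaccade -> events along the edges.
print(len(events_from_shift(scene, 0, 0)))  # 0
print(len(events_from_shift(scene, 1, 0)))  # > 0, edge pixels only
```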
The team says their method consumed up to 175 times less energy than conventional methods running on a CPU, while delivering comparable or better speed and accuracy.
The researchers now need to test their algorithm in the real world, on real robots.
“Our goal is to apply similar capabilities to future robots working in interactive environments so that they can adapt to the unforeseen and work more naturally alongside humans,” Yulia Sandamirskaya, senior author of the study, said in a statement.
Their study, which was awarded “Best Paper” at this year’s International Conference on Neuromorphic Systems (ICONS), can be read here.