Neuromorphic computing, as the name implies, aims to emulate the neural structure of the human brain for computation. It is a comparatively young field and one of the more radical departures from contemporary computer architectures. Work on it has been gaining traction, and promising results keep emerging; as recently as June this year, a neuromorphic device was used to recreate a grayscale image of Captain America’s shield.
Alongside other notable announcements at Intel Labs Day 2020, the company also gave an update on the progress of its Intel Neuromorphic Research Community (INRC). The aim of the INRC is to expand the applications of neuromorphic computing to business use cases. The consortium, which originally came together in 2018 and counts Fortune 500 and government members, has now grown to more than 100 companies and academic groups, with new additions including Lenovo, Logitech, Mercedes-Benz, and Prophesee. At the virtual conference, Intel also highlighted research results coming out of the INRC, computed on the company’s neuromorphic research test chip, Loihi.
Researchers showcased two state-of-the-art neuromorphic robotics demonstrations. In the first, by Intel and ETH Zurich, Loihi adaptively controlled a horizon-tracking drone platform, achieving closed-loop speeds of up to 20 kHz with 200 µs of visual processing latency, a 1,000x gain in combined efficiency and speed over traditional solutions. In the second, the Italian Institute of Technology (IIT) and Intel showed multiple cognitive functions, such as object recognition, spatial awareness, and real-time decision-making, all running together on Loihi in IIT’s iCub robot platform.
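Intel has not published the control networks behind these demos, but the basic computational primitive on chips like Loihi is the spiking neuron, which only produces output events when its input drives it over a threshold. The following is a minimal leaky integrate-and-fire (LIF) sketch in Python/NumPy meant purely to illustrate that event-driven style; every parameter and the input signal are invented for the example and do not reflect Loihi's actual programming model.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One update of a leaky integrate-and-fire neuron population.

    v    : membrane potentials (1-D array)
    i_in : input currents for this timestep (1-D array)
    Returns updated potentials and a boolean spike vector.
    All constants are illustrative, not Loihi parameters.
    """
    # Leak toward rest and integrate the input current.
    v = v + dt * (-v / tau + i_in)
    # Neurons whose potential crosses the threshold emit a spike...
    spikes = v >= v_th
    # ...and are reset, so downstream computation is driven only by events.
    v = np.where(spikes, v_reset, v)
    return v, spikes

# Tiny demo: 4 neurons driven by random input for 100 timesteps.
rng = np.random.default_rng(0)
v = np.zeros(4)
for t in range(100):
    v, spikes = lif_step(v, rng.uniform(0.0, 0.12, size=4))
    if spikes.any():
        print(f"t={t}: spikes at neurons {np.flatnonzero(spikes)}")
```

Because neurons stay silent unless something changes at their inputs, power is spent only when there is information to process, which is the property the drone-control and iCub demos lean on.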
Other updates highlighted in the conference include:
- Voice Command Recognition: Accenture tested the ability to recognize voice commands on Intel’s Loihi chip versus a standard graphics processing unit (GPU) and found that Loihi not only achieved similar accuracy but was also up to 1,000 times more energy efficient and responded up to 200 milliseconds faster. Through the INRC, Mercedes-Benz is exploring how these results could apply to real-world use cases, such as adding new voice interaction commands to vehicles.
- Gesture Recognition: Traditional AI works well for crunching big data and recognizing patterns across thousands of examples, but it has a hard time learning the subtle differences that vary from person to person, such as the gestures we use to communicate. Accenture and INRC partners are demonstrating tangible progress in using Loihi’s self-learning capabilities to quickly learn and recognize individualized gestures. Processing input from a neuromorphic camera, Loihi can learn new gestures in just a few exposures. This could be applied to a variety of use cases, such as interacting with smart products in the home or touchless displays in public spaces.
- Image Retrieval: Researchers from the retail industry evaluated Loihi for image-based product search. They found Loihi could generate image feature vectors over three times more energy efficiently than conventional central processing unit (CPU) and GPU solutions while maintaining the same level of accuracy. This work complements similarity-search results from Intel’s Pohoiki Springs neuromorphic research system, published earlier this year, which showed Loihi’s ability to search feature vectors in million-image databases 24 times faster and with 30 times lower energy than a CPU (see the feature-vector search sketch after this list).
- Optimization and Search: Intel and its partners have found that Loihi can solve optimization and search problems over 1,000 times more efficiently and 100 times faster than traditional CPUs. Optimization problems such as constraint satisfaction offer potential value at the edge, for example enabling drones to plan and make complex navigation decisions in real time. The same problem class could also be scaled up for complex data center workloads, assisting with tasks like train scheduling and logistics optimization (a toy constraint-satisfaction example follows the list).
- Robotics: Rutgers and TU Delft researchers published new demonstrations of robotic navigation and micro-drone control applications running on Loihi. TU Delft’s drone performed optic-flow landings with an evolved, 35-neuron spiking network running at frequencies over 250 kilohertz. Rutgers found that its Loihi solutions required 75 times less power than conventional mobile GPU implementations, with no loss in performance. In work published at the 2020 Conference on Robot Learning in November, Rutgers researchers found that Loihi could learn numerous OpenAI Gym tasks with accuracy equivalent to a deep actor network while consuming 140 times less energy than a mobile GPU solution.
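The image-retrieval result above rests on a standard pattern: encode each catalogue image as a feature vector once, then answer queries by nearest-neighbour search over those vectors. The sketch below shows that pattern in plain Python/NumPy with cosine similarity; the feature extractor is stubbed out with random vectors, the function names are made up for the example, and nothing here reflects Loihi's or Pohoiki Springs' actual APIs.

```python
import numpy as np

def build_index(feature_vectors):
    """L2-normalize stored vectors so cosine similarity becomes a dot product."""
    norms = np.linalg.norm(feature_vectors, axis=1, keepdims=True)
    return feature_vectors / norms

def search(index, query_vector, k=5):
    """Return indices and similarity scores of the k nearest stored vectors."""
    q = query_vector / np.linalg.norm(query_vector)
    scores = index @ q                    # cosine similarity against every item
    order = np.argsort(-scores)[:k]       # best matches first
    return order, scores[order]

# Stand-in for a real feature extractor: 10,000 "images" as 128-D embeddings.
rng = np.random.default_rng(1)
catalogue = rng.normal(size=(10_000, 128)).astype(np.float32)
index = build_index(catalogue)

# Query with a slightly perturbed copy of item 42; it should rank first.
query = catalogue[42] + 0.05 * rng.normal(size=128)
top_ids, top_scores = search(index, query, k=3)
print(top_ids, top_scores)
```

The reported neuromorphic gains come from how this comparison step is executed in hardware, not from changing the overall retrieval recipe.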
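To make the constraint-satisfaction item above concrete, here is a toy scheduling problem: assign one of three time slots to each task so that no two conflicting tasks share a slot. The conflict graph, slot count, and the backtracking solver are all hypothetical illustrations of the problem class; Loihi maps such problems onto spiking dynamics rather than CPU-style backtracking.

```python
# Toy constraint-satisfaction problem (equivalent to graph colouring).
CONFLICTS = {            # hypothetical task-conflict graph
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}
SLOTS = [1, 2, 3]

def solve(assignment=None):
    """Backtracking search for a conflict-free slot assignment."""
    assignment = assignment or {}
    if len(assignment) == len(CONFLICTS):
        return assignment                       # every task scheduled
    task = next(t for t in CONFLICTS if t not in assignment)
    for slot in SLOTS:
        # A slot is feasible if no already-scheduled conflicting task uses it.
        if all(assignment.get(other) != slot for other in CONFLICTS[task]):
            result = solve({**assignment, task: slot})
            if result:
                return result
    return None                                 # dead end: backtrack

print(solve())   # e.g. {'A': 1, 'B': 2, 'C': 3, 'D': 1}
```

Scheduling, routing, and logistics problems of the kind Intel mentions are larger instances of this same structure: variables, a small set of allowed values, and constraints between pairs of variables.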
Moving forward, Intel will fold the lessons learned from these experiments over the last couple of years into the development of the second generation of its Loihi neuromorphic chip. While technical details of the next-generation chip remain scarce, Intel says it is on the horizon and "will be coming soon".