Brain-Inspired Chips Come to the Edge


1Q 2022 | IN-6403

As Artificial Intelligence continues to evolve, the brain is the next source of inspiration for more effective processing technology.


The Advent of Neuromorphic Computing

NEWS


As ABI Research has discussed recently (IN-6324), neuromorphic computing is making an entrance into commercial Artificial Intelligence (AI) applications, including edge devices. The potential of brain-inspired chips for edge computing is not in itself entirely surprising. Neuromorphic chips—System-on-Chips in which memory storage and processing units are not separate components of the architecture—are ideally suited for the edge, as this technology provides ultralow power consumption and low latency responses. These features are particularly useful for sensory applications such as computer vision, which are typically conducted at the edge, and a number of vendors are entering this very space. GrAI Matter Labs, with its GrAI VIP (Vision Inference Processor) System-on-Chip, offers a compact design for AI-enabled cameras as well as for sensor applications for robotics (for instance, for intelligent grasping devices).

The Advantages of Neuromorphic Processing at the Edge

IMPACT


The main aim of these developments is to place computing chips as close to sensors as possible. This is a natural development in itself, and given the ever-increasing data and processing requirements of Convolutional Neural Network (CNN) algorithms, the standard computing model for analyzing visual information, neuromorphic chips are likely to prove an attractive proposition. CNNs combine high computational complexity with massive parameter counts, which makes inference fairly slow and power hungry and therefore practical only on powerful processing units. The point of neuromorphic chips is to bring CNN-based computer vision to the edge, where it is usually most useful in industry.
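To make the scale concrete, the rough back-of-the-envelope sketch below estimates the parameter and multiply-accumulate counts for a single ResNet-style convolutional layer. The layer shape is illustrative and is not drawn from any specific GrAI Matter Labs benchmark.

```python
def conv2d_cost(in_ch, out_ch, kernel, out_h, out_w):
    """Parameters and multiply-accumulates (MACs) for one convolutional layer."""
    weights = in_ch * out_ch * kernel * kernel
    params = weights + out_ch                 # weights plus one bias per output channel
    macs = weights * out_h * out_w            # every weight is applied at every output pixel
    return params, macs

# A single mid-network ResNet-style layer: 256 -> 256 channels, 3x3 kernel, 56x56 output.
params, macs = conv2d_cost(256, 256, 3, 56, 56)
print(f"params: {params / 1e6:.2f} M, MACs: {macs / 1e9:.2f} G")
# Dozens of such layers add up to billions of MACs per frame, which is why
# dense CNN inference is slow and power hungry on conventional edge hardware.
```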

The GrAI VIP chip, while smaller than a quarter coin, combines very low energy consumption with high performance. Benchmarked on a ResNet-50 model, for instance, a CNN fifty layers deep, the GrAI chip delivers inferences with a latency under one millisecond, yet its power consumption does not rise above 0.5 W, compared with the 5 W to 10 W typical of other chips on the market (comparisons of the core and memory requirements of various CNNs, including how the GrAI VIP chip performs with them, are available). Based on NeuronFlow technology, the VIP chip is a dual data processing unit that contains system interfaces (for high-speed access to host servers) and camera interfaces (for high-speed access to cameras). It thus combines neuromorphic engineering with dataflow computation, a computational model in which processing is triggered by the arrival of data in the form of events. In practice, this means that while processing a stream of images, the VIP chip keeps the result of its initial processing in memory and from then on focuses only on those features of the image that change over time, thus accelerating processing and reducing energy consumption (an approach usually called sparsity computing).
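The sketch below illustrates the idea behind this kind of event-driven, sparsity-exploiting processing in plain Python: an expensive feature extractor runs only on image tiles that have changed since the previous frame, while cached results are reused elsewhere. The tile size, change threshold, and feature function are hypothetical stand-ins, not a description of GrAI's NeuronFlow implementation.

```python
import numpy as np

CHANGE_THRESHOLD = 12          # pixel-intensity delta treated as an "event"
TILE = 16                      # side length of each processed tile

def expensive_features(patch):
    # Stand-in for a heavy CNN-style operation on one image region.
    return float(patch.astype(np.float32).mean())

def process_stream(frames):
    prev, cache = None, {}
    for frame in frames:
        tiles = [(r, c) for r in range(0, frame.shape[0], TILE)
                         for c in range(0, frame.shape[1], TILE)]
        if prev is None:
            active = tiles                                 # first frame: process everything
        else:
            diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
            active = [(r, c) for r, c in tiles
                      if diff[r:r + TILE, c:c + TILE].max() > CHANGE_THRESHOLD]
        for r, c in active:                                # recompute only changed tiles
            cache[(r, c)] = expensive_features(frame[r:r + TILE, c:c + TILE])
        prev = frame
        yield dict(cache)                                  # full result, mostly reused
```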

Is Neuromorphic Processing the Next Disruptor in Artificial Intelligence?

RECOMMENDATIONS


Part of the allure of brain-inspired chips is the belief that they may herald a new age of AI systems, systems that are closer in nature to what the human brain does and thus implement a more life-like Artificial Intelligence. This belief can be realized in two different but related ways. On one hand, there are neuromorphic chips, the technology discussed here, which aim to mimic the architecture of the nervous system by blurring the distinction between memory and processing. This is achieved through artificial silicon neurons working in parallel, and in this sense, neuromorphic chips constitute a development at the level of the transistor. In this design, networks of neurons communicate with other networks of neurons, so information processing takes place across the overall collection of neurons. The technology is already earning its keep, as demonstrated by its superior response latency and energy consumption, and ABI Research expects neuromorphic chips to feature in all sorts of edge applications going forward.

On the other hand, there is the computational model actually executed on neuromorphic chips. Most of the brain-like chips now available can implement neural network algorithms, including CNNs, in line with the most established AI technique in industry, but the more congruous computational model for neuromorphic chips is what may be termed neuromorphic processing. In this model, based on Spiking Neural Networks rather than Deep Neural Networks, networks of neurons communicate with other networks through bursts of activity (spikes), and each neuron carries a small cache of information within it. Relatedly, a new partnership has been set up between SynSense and Prophesee to create a single-chip image processor that integrates Prophesee's sensor technology with SynSense's neuromorphic processors. The System-on-Chip that SynSense and Prophesee envision uses this very model, with which Prophesee has had significant success in the automotive and academic sectors. ABI Research believes that significant advances in AI will go down this route in the future, though whether this will result in a more human-like AI is by no means a given.
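As a rough illustration of the spiking model, the sketch below implements a minimal leaky integrate-and-fire neuron, a common building block of Spiking Neural Networks: it integrates weighted input spikes over time, fires only when its membrane potential crosses a threshold, and otherwise stays silent, triggering no downstream work. The constants and weights are illustrative and are not tied to SynSense or Prophesee hardware.

```python
import numpy as np

THRESHOLD = 1.0     # membrane potential at which the neuron fires
LEAK = 0.9          # fraction of potential retained at each time step

def lif_neuron(input_spikes, weights):
    """Leaky integrate-and-fire: accumulate weighted spikes, fire on threshold, reset."""
    potential = 0.0
    output = []
    for spikes in input_spikes:                 # one binary vector per time step
        potential = LEAK * potential + float(np.dot(weights, spikes))
        if potential >= THRESHOLD:
            output.append(1)                    # spike propagated downstream
            potential = 0.0                     # reset after firing
        else:
            output.append(0)                    # silent: no downstream computation triggered
    return output

# Three input channels, five time steps of sparse binary activity.
spike_train = [np.array(s) for s in ([1, 0, 0], [0, 0, 0], [1, 1, 0],
                                     [0, 0, 0], [0, 0, 1])]
print(lif_neuron(spike_train, weights=np.array([0.6, 0.5, 0.3])))   # -> [0, 0, 1, 0, 0]
```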

 
