Neuromorphic Computing and the Future of Artificial Intelligence

4Q 2021 | IN-6324

Neuromorphic chips are predicted to be the basis of a new generation of power-efficient supercomputers, replacing high-cost and environmentally damaging alternatives.


The Advent of Commercial Neuromorphic Chips

NEWS


A specter is haunting Artificial Intelligence (AI)—the specter of the environmental costs of Deep Learning (DL). As Neural Networks become ubiquitous in modern AI applications, the gains the industry has seen in applying Deep Neural Networks (DNNs) to solve ever more complex problems increasingly come at a high price. The quantities of computational power and data needed to train networks have increased exponentially. At the current pace of development, this translates into unsustainable amounts of power consumption and carbon emissions in the long run, and therefore alternative methods will be required before long. Recent advances in neuromorphic computing, especially the commercialization of neuromorphic chips by industrial giants such as Intel, with its recently released Loihi 2 boards and related software, hint at a future of ultra-low power but high-performance AI applications that would be ideally suited to cloud and edge technology.

Ameliorating the Environmental Costs of Deep Learning

IMPACT


Neuromorphic computing involves implementing neural-inspired computations in a circuit. Taking the human brain as a model to mimic, rather than merely as inspiration in the way that DNNs have typically been conceptualized, neuromorphic calculations are performed by large numbers of small units (“neurons”) that communicate with each other through bursts of activity called spikes. In the brain, these spikes are brief changes in voltage; in neuromorphic computing they are represented as discrete numerical events. The overall method involves neurons working in parallel to send spikes to networks of neurons, with information processing taking place throughout the entire network. The result is Spiking Neural Networks (SNNs), the AI analogue of the brain’s neurons and synapses.
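To make the spiking mechanism concrete, the sketch below simulates a single leaky integrate-and-fire neuron in plain Python with NumPy: the membrane potential integrates input, leaks over time, and emits a binary spike when it crosses a threshold. It is an illustrative toy model with arbitrary constants, not code for any particular neuromorphic chip.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates incoming current, decays ("leaks") each step, and emits a
# binary spike whenever it crosses the threshold, after which it resets.
def simulate_lif(input_current, leak=0.9, threshold=1.0, v_reset=0.0):
    v = v_reset                                  # membrane potential
    spikes = np.zeros_like(input_current, dtype=int)
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in                      # leaky integration
        if v >= threshold:                       # threshold crossing -> spike
            spikes[t] = 1
            v = v_reset                          # reset after spiking
    return spikes

# Random input current over 20 time steps; output is a sparse 0/1 spike train.
rng = np.random.default_rng(0)
print(simulate_lif(rng.uniform(0.0, 0.4, size=20)))
```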

This kind of computing can certainly be implemented in conventional computer architectures, as has been done extensively in research. The more recent novelty has been the possibility of manufacturing viable neuromorphic chips: microprocessors that use electronic circuits to mimic the brain’s architecture, providing a new computing architecture that can process vast data sets more efficiently than a conventional chip. Typically, such a System-on-Chip is designed with no strict separation between the processing unit and memory storage, effectively a Network-on-Chip design wherein each neuron carries a small cache of information. This technology represents a rethinking of computer architectures at the transistor level: neuromorphic chips use physical artificial neurons made from silicon within an integrated circuit that packs millions of transistors onto a single chip.
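In software terms, the co-location of memory and compute can be sketched as below: each hypothetical `NeuronCore` holds its own synaptic weights and membrane state locally, and the only traffic between cores is spike events. This is purely a conceptual illustration of the Network-on-Chip idea, not a model of any vendor's actual design.

```python
import numpy as np

class NeuronCore:
    """Conceptual neuromorphic core: synaptic weights and neuron state live
    inside the core (no separate shared memory); cores exchange only spikes."""

    def __init__(self, n_inputs, n_neurons, threshold=1.0, leak=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.uniform(0.0, 0.5, (n_neurons, n_inputs))  # local synapse memory
        self.potential = np.zeros(n_neurons)                         # local neuron state
        self.threshold = threshold
        self.leak = leak

    def step(self, input_spikes):
        """Consume a binary spike vector, return this core's output spikes."""
        self.potential = self.leak * self.potential + self.weights @ input_spikes
        out = (self.potential >= self.threshold).astype(int)
        self.potential[out == 1] = 0.0          # reset neurons that fired
        return out

# Two cores chained together: spike events, not raw data, move between them.
core_a = NeuronCore(n_inputs=4, n_neurons=8, seed=1)
core_b = NeuronCore(n_inputs=8, n_neurons=2, seed=2)
rng = np.random.default_rng(3)
for _ in range(10):
    print(core_b.step(core_a.step(rng.integers(0, 2, size=4))))
```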

The goal of manufacturing and commercializing such chips is to provide a computer architecture that is better suited to the kind of intelligent information processing that the brain supports so effortlessly. After all, the human brain processes, responds to, and learns from real-world data with energy consumption in microwatts and response times in milliseconds, a kind of performance that is beyond DNNs. Worse, the environmental costs of scaling up DNNs can be catastrophic. According to one estimate, improving some cutting-edge DNNs to reduce error rates, without expert input to guide corrections, could cost US$100 billion and produce as much in carbon emissions as New York City does in a month: an impossibly expensive and clearly unethical proposition.

Neuromorphic processing is meant to mitigate these environmental costs. One of the first neuromorphic chips, TrueNorth, was produced by IBM in 2014. A Network-on-Chip with 5.4 billion transistors that give rise to one million neurons and 256 million “synapses”, TrueNorth is an excellent example of the neuromorphic blueprint: both memory and computation are handled within each core, achieving high power efficiency. TrueNorth was especially apt for image and voice recognition, and thus for edge technology. Something along these lines is also true of Akida, the neuromorphic chip designed by BrainChip, which employs 1.2 million neurons and ten billion synapses to process sensory data. GrAI Matter Labs, for its part, has recently released a brain-inspired chip, the GrAI VIP, an AI edge processor with ultra-low latency and low-power processing. But the most impressive results have been obtained by Intel with its second-generation Loihi chip. With up to one million neurons and 120 million synapses per chip, Loihi 2 has been shown to outperform traditional processors by a factor of 100 on energy efficiency in some robotics workloads, an enormous difference.

Prospective Applications of Neuromorphic Chips

RECOMMENDATIONS


It is important to stress that neuromorphic chips are still being developed, and neuromorphic computing itself remains more a matter for research than a well-established commercial product. IBM’s TrueNorth was developed in large part thanks to a research grant from the US Defense Advanced Research Projects Agency. At the same time, the European Union is currently funding the Human Brain Project, to the tune of hundreds of millions of euros; the project aims to simulate an entire human brain and has greatly advanced research in SNNs. Intel itself has set up the Intel Neuromorphic Research Community, a research group on neuromorphic computing involving industry, academics, and government agencies that currently counts over 140 members.

Nevertheless, the benefits are potentially enormous, especially in power efficiency and environmental footprint, and some technology is already available. For example, Intel currently provides a single-chip system to researchers for evaluation through its Neuromorphic Research Cloud, and a stackable eight-chip system will soon be available as well. In addition, Intel has released Lava, a community-driven, open-source framework meant to provide a standard software stack, a crucial element for the future commercialization of neuromorphic chips. Following the open-source model, Lava is compatible with ROS, Python, C++, OpenCL, and other common tools. Furthermore, the framework is platform-agnostic, thus not tied to Intel chips, and squarely focused on giving developers a way to program neuro-inspired applications and map them onto neuromorphic platforms. The uptake of SNNs will be slow in a market in which DNNs are so common, and the more significant benefits of neuromorphic processing will require SNNs; even so, Loihi 2 does support DNN implementations, and Lava is in fact extensible to PyTorch and TensorFlow.
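As a rough illustration of how such a framework is programmed, the sketch below connects two populations of leaky integrate-and-fire neurons through a dense synaptic connection and runs them in simulation. It follows the pattern of Lava's public tutorials, but the module paths, class names, and parameters shown here are assumptions that may differ across Lava versions; treat it as an outline rather than verified API usage.

```python
import numpy as np
from lava.proc.lif.process import LIF            # leaky integrate-and-fire neurons
from lava.proc.dense.process import Dense        # dense synaptic connections
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two neuron populations connected by a random synaptic weight matrix.
pre = LIF(shape=(8,))                             # 8 input neurons
conn = Dense(weights=np.random.rand(4, 8))        # 4 x 8 synapses
post = LIF(shape=(4,))                            # 4 output neurons

# Route spike traffic: pre -> synapses -> post.
pre.s_out.connect(conn.s_in)
conn.a_out.connect(post.a_in)

# Run the network for 100 time steps on the CPU simulation backend.
pre.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
pre.stop()
```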

Much like other endeavors involving open-source software and applications, the active collaboration of industrial giants such as Intel will be crucial to making the neuromorphic market a commercial reality. There is certainly much to look forward to, and ABI Research expects that Intel’s neuromorphic boards will soon be widely available. It will take some time for the neuromorphic market to become mainstream, but with big players such as Intel and IBM so active in this sector, in addition to Samsung and Qualcomm, the market is forecast to grow significantly over the next decade. Samsung, for instance, expanded its neuromorphic division in 2019 and expects the market to have grown by 52% by 2023. ABI Research predicts that neuromorphic chips will be the basis of a new generation of power-efficient supercomputers and a game-changer in edge computing applications requiring ultra-low power and high performance, such as sensing, vision, speech, and robotics.