
A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware

Nature Machine Intelligence (2022)

Abstract
Deep learning could be less energy intensive when implemented on spike-based neuromorphic chips. An approach inspired by a characteristic feature of biological neurons, the presence of slowly changing internal currents, is developed to emulate long short-term memory units in a sparse spiking regime for neuromorphic implementation.

Spike-based neuromorphic hardware holds promise for more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires us to understand how DNNs can be emulated in an event-based sparse firing regime, as otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing currents after each spike, provides an efficient solution. After-hyperpolarizing currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why after-hyperpolarizing neurons can emulate the function of long short-term memory units. This yields a highly energy-efficient approach to time-series classification. Furthermore, it provides the basis for an energy-efficient implementation of an important class of large DNNs that extract relations between words and sentences in order to answer questions about the text.
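To make the mechanism concrete, the following is a minimal sketch of a leaky integrate-and-fire neuron with a slow after-hyperpolarizing (AHP) current, the feature the abstract credits with providing long short-term memory. The function name simulate_ahp_neuron and all parameter values are illustrative assumptions, not taken from the paper or from Loihi's API.

```python
import numpy as np

def simulate_ahp_neuron(input_current, dt=1.0, tau_mem=20.0, tau_ahp=200.0,
                        v_thresh=1.0, beta=0.2):
    """Sketch of a discrete-time LIF neuron with a slow AHP current.

    All constants (time constants in ms, threshold, AHP increment beta)
    are hypothetical choices for illustration only.
    """
    alpha_mem = np.exp(-dt / tau_mem)   # fast membrane decay per step
    alpha_ahp = np.exp(-dt / tau_ahp)   # slow AHP decay per step
    v, ahp = 0.0, 0.0
    spikes = []
    for i_t in input_current:
        # The AHP current is subtracted from the input, so past spikes
        # suppress firing long after they occur.
        v = alpha_mem * v + i_t - ahp
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0        # reset membrane potential after a spike
            ahp += beta    # each spike increments the slow AHP current
        else:
            spikes.append(0)
        ahp *= alpha_ahp   # AHP trace decays with a long time constant
    return np.array(spikes)
```

In this sketch the slow time constant tau_ahp plays the role that the cell state plays in an LSTM unit: each spike leaves a trace that dampens firing for a long stretch of subsequent time steps, which both stores information over long horizons and keeps the firing regime sparse.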
Keywords
Spiking Neurons, Neuromorphic Computing, Memory Applications, Brain-inspired Computing, Neuromorphic Photonics