An Energy Efficient Stochastic+Spiking Neural Network

Takeshi Nomura, Renyuan Zhang, Yasuhiko Nakashima


A feasibility study of a stochastic+spiking neural network is presented for reducing hardware implementation cost. By using a set of time-based stochastic computing (TBSC) circuits, stochastic numbers (SNs) in the continuous time domain are fed directly into the input layer of a spiking neural network (SNN) without any additional spike-coding mechanism. Analog circuits behaving as synapses and neurons are designed to fit the TBSC coding and to generate spikes for the remaining layers. The transistor count is compact: 22 per synapse and 22 per neuron. Several real-world pattern recognition tasks, including MNIST, are verified and evaluated. For proof of concept, a 0.18 µm CMOS technology is used to design and simulate the proposed SNN. On the example pattern recognition tasks, the recognition accuracy loss is below 4% compared to well-trained artificial neural networks (ANNs). The average firing energy is 0.94 pJ per spike, which is 0.5x that of state-of-the-art low-power SNN implementations. The energy consumption for MNIST is estimated at 0.88 µJ per classification.
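The two energy figures can be related by simple arithmetic. As a rough consistency check (our own back-of-envelope sketch, not a calculation taken from the paper), dividing the per-classification energy by the per-spike energy gives the implied average spike count per MNIST classification, under the assumption that spike firing dominates the energy budget:

```python
# Back-of-envelope check on the abstract's energy figures.
# Assumption (ours): spike firing accounts for essentially all of the
# per-classification energy; static and peripheral power are ignored.
energy_per_spike_j = 0.94e-12   # 0.94 pJ per spike (from the abstract)
energy_per_class_j = 0.88e-6    # 0.88 uJ per MNIST classification (from the abstract)

# Implied average number of spikes fired per classification.
spikes_per_classification = energy_per_class_j / energy_per_spike_j
print(f"~{spikes_per_classification:,.0f} spikes per classification")
```

This works out to roughly 9.4 x 10^5 spikes per classification, a plausible order of magnitude for a multi-layer SNN processing a 28x28 MNIST image.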


Spiking neural network; time-based stochastic computing; energy efficient



