Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.

Alexander Kugele, Thomas Pfeil, Michael Pfeiffer, Elisabetta Chicca
Author Information
  1. Alexander Kugele: Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany.
  2. Thomas Pfeil: Bosch Center for Artificial Intelligence, Renningen, Germany.
  3. Michael Pfeiffer: Bosch Center for Artificial Intelligence, Renningen, Germany.
  4. Elisabetta Chicca: Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany.

Abstract

Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, a subtle but important difference in the way SNNs and ANNs integrate information over time makes it challenging to apply conversion techniques directly to sequence processing tasks. Whereas all connections in SNNs have a propagation delay greater than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that the delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between the input and output of the network. The resulting networks achieve state-of-the-art accuracy on multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
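
To make the delay bookkeeping concrete, the following is a minimal, hypothetical NumPy sketch (not the authors' code) of a streaming rollout of a two-layer network with a spatio-temporal shortcut. Every connection, feed-forward or shortcut, is assigned a delay of one time step, mirroring the non-zero propagation delays of the target SNN, so the output merges a fast one-step shortcut path with a slower two-step deep path. All layer sizes, weights, and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

T = 6                          # rollout length (number of time steps)
d_in, d_hid, d_out = 4, 8, 2   # toy layer sizes (hypothetical)

W_ih = rng.normal(scale=0.5, size=(d_hid, d_in))   # input  -> hidden
W_ho = rng.normal(scale=0.5, size=(d_out, d_hid))  # hidden -> output (deep path: two steps of delay)
W_io = rng.normal(scale=0.5, size=(d_out, d_in))   # input  -> output shortcut (one step of delay)

x = rng.normal(size=(T, d_in))                     # input stream (e.g. per-step event-frame features)

x_prev = np.zeros(d_in)        # activations available from the previous time step
h_prev = np.zeros(d_hid)
y = np.zeros((T, d_out))

for t in range(T):
    # Streaming rollout: all layers update in parallel from step t-1 activations,
    # so every connection carries exactly one step of delay, as in the SNN.
    h_new = np.maximum(0.0, W_ih @ x_prev)          # hidden at step t sees the input of step t-1
    y[t] = W_io @ x_prev + W_ho @ h_prev            # merge fast shortcut path and slower deep path
    x_prev, h_prev = x[t], h_new

print(np.round(y, 3))  # early outputs rely on the shortcut alone and are refined once the deep path arrives

In this sketch the network produces an approximate response after one step and a refined response after two, which is the mechanism behind the low-latency outputs described in the abstract.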
