PyGeNN: A Python Library for GPU-Enhanced Neural Networks.

James C Knight, Anton Komissarov, Thomas Nowotny
Author Information
  1. James C Knight: Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom.
  2. Anton Komissarov: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany.
  3. Thomas Nowotny: Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom.

Abstract

More than half of the Top 10 supercomputing sites worldwide use GPU accelerators, and they are becoming ubiquitous in workstations and edge computing devices. GeNN is a C++ library for generating efficient spiking neural network simulation code for GPUs. However, until now, the full flexibility of GeNN could only be harnessed by writing model descriptions and simulation code in C++. Here we present PyGeNN, a Python package which exposes all of GeNN's functionality to Python with minimal overhead. This provides an alternative, arguably more user-friendly, way of using GeNN and allows modelers to use GeNN within the growing Python-based machine learning and computational neuroscience ecosystems. In addition, we demonstrate that, in both Python and C++ GeNN simulations, the overheads of recording spiking data can strongly affect runtimes and show how a new spike recording system can reduce these overheads by up to 10×. Using the new recording system, we demonstrate that, by using PyGeNN on a modern GPU, we can simulate a full-scale model of a cortical column faster even than real-time neuromorphic systems. Finally, we show that long simulations of a smaller model with complex stimuli and a custom three-factor learning rule, defined entirely in PyGeNN, can run almost two orders of magnitude faster than real-time.
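
To illustrate the workflow described in the abstract, the following minimal sketch builds a small population of leaky integrate-and-fire neurons, enables the spike recording system discussed above, and runs one second of simulated time. It assumes the PyGeNN 4.x Python API (GeNNModel, add_neuron_population, spike_recording_data); module paths and parameter names may differ in other GeNN releases, so treat it as a sketch rather than a definitive usage example.

    # Minimal PyGeNN sketch (assumes the 4.x API; names may vary between releases)
    from pygenn.genn_model import GeNNModel

    # Single-precision model with a 1 ms simulation timestep
    model = GeNNModel("float", "pygenn_example")
    model.dT = 1.0

    # Built-in leaky integrate-and-fire neurons driven by a constant offset current
    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
                  "Vthresh": -50.0, "Ioffset": 1.0, "TauRefrac": 2.0}
    lif_init = {"V": -65.0, "RefracTime": 0.0}
    pop = model.add_neuron_population("lif", 100, "LIF", lif_params, lif_init)

    # Enable GPU-side spike recording for this population
    pop.spike_recording_enabled = True

    # Generate, compile and load the GPU simulation code
    model.build()
    model.load(num_recording_timesteps=1000)

    # Simulate 1000 timesteps (1 s), then copy the recorded spikes to the host
    while model.timestep < 1000:
        model.step_time()
    model.pull_recording_buffers_from_device()
    spike_times, spike_ids = pop.spike_recording_data
    print(f"Recorded {len(spike_times)} spikes")
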

Keywords

  GPU; high-performance computing; parallel computing; benchmarking; computational neuroscience; spiking neural networks; python
