A high throughput generative vector autoregression model for stochastic synapses.

Tyler Hennen, Alexander Elias, Jean-François Nodin, Gabriel Molas, Rainer Waser, Dirk J Wouters, Daniel Bedau
Author Information
  1. Tyler Hennen: Institut für Werkstoffe der Elektrotechnik 2 (IWE II), RWTH Aachen University, Aachen, Germany.
  2. Alexander Elias: Western Digital San Jose Research Center, San Jose, CA, United States.
  3. Jean-François Nodin: CEA, LETI, Grenoble, France.
  4. Gabriel Molas: CEA, LETI, Grenoble, France.
  5. Rainer Waser: Institut für Werkstoffe der Elektrotechnik 2 (IWE II), RWTH Aachen University, Aachen, Germany.
  6. Dirk J Wouters: Institut für Werkstoffe der Elektrotechnik 2 (IWE II), RWTH Aachen University, Aachen, Germany.
  7. Daniel Bedau: Western Digital San Jose Research Center, San Jose, CA, United States.

Abstract

By imitating the synaptic connectivity and plasticity of the brain, emerging electronic nanodevices offer new opportunities as the building blocks of neuromorphic systems. One challenge for large-scale simulations of computational architectures based on emerging devices is to accurately capture device response, hysteresis, noise, and the covariance structure in the temporal domain as well as between the different device parameters. We address this challenge with a high throughput generative model for synaptic arrays that is based on a recently available type of electrical measurement data for resistive memory cells. We map this real-world data onto a vector autoregressive stochastic process to accurately reproduce the device parameters and their cross-correlation structure. While closely matching the measured data, our model is still very fast; we provide parallelized implementations for both CPUs and GPUs and demonstrate array sizes above one billion cells and throughputs exceeding one hundred million weight updates per second, above the pixel rate of a 30 frames/s 4K video stream.
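The generative approach described in the abstract, drawing correlated device-parameter updates from a vector autoregressive (VAR) process, can be sketched in a few lines of NumPy. The snippet below is a minimal illustration of a first-order VAR process with correlated innovations, not the authors' fitted model; the coefficient matrix `A` and innovation covariance `Sigma` are hypothetical placeholders standing in for parameters that would be estimated from measurement data.

```python
import numpy as np

def simulate_var1(A, Sigma, n_steps, x0=None, rng=None):
    """Simulate a first-order vector autoregressive (VAR(1)) process:

        x_t = A @ x_{t-1} + eps_t,   eps_t ~ N(0, Sigma)

    Returns an (n_steps, k) array of the generated vector time series.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = A.shape[0]
    x = np.zeros(k) if x0 is None else np.asarray(x0, dtype=float)
    # Cholesky factor turns i.i.d. normals into correlated innovations.
    L = np.linalg.cholesky(Sigma)
    out = np.empty((n_steps, k))
    for t in range(n_steps):
        x = A @ x + L @ rng.standard_normal(k)
        out[t] = x
    return out

# Hypothetical 2-parameter "device": weak mean reversion with cross-coupling
# between the two parameters (both A and Sigma chosen for illustration only).
A = np.array([[0.90, 0.05],
              [0.05, 0.90]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
series = simulate_var1(A, Sigma, n_steps=10_000, rng=np.random.default_rng(0))
```

Because both the coupling in `A` and the off-diagonal term in `Sigma` are positive, the two simulated parameter traces come out positively cross-correlated, which is the kind of covariance structure between device parameters that the paper's model is designed to reproduce.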

Keywords

ReRAM technologies; machine learning; nanotechnology; neural networks; computing; time series

