Next generation reservoir computing.

Daniel J Gauthier, Erik Bollt, Aaron Griffith, Wendson A S Barbosa
Author Information
  1. Daniel J Gauthier: The Ohio State University, Department of Physics, 191 West Woodruff Ave., Columbus, OH, 43210, USA. gauthier.51@osu.edu.
  2. Erik Bollt: Clarkson University, Department of Electrical and Computer Engineering, Potsdam, NY, 13669, USA.
  3. Aaron Griffith: The Ohio State University, Department of Physics, 191 West Woodruff Ave., Columbus, OH, 43210, USA.
  4. Wendson A S Barbosa: The Ohio State University, Department of Physics, 191 West Woodruff Ave., Columbus, OH, 43210, USA.

Abstract

Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus demands minimal computing resources. However, the algorithm relies on randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices and fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
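
The following is a minimal, self-contained sketch of the nonlinear vector autoregression (NVAR) idea the abstract describes, not the authors' published implementation. It builds a deterministic feature vector from a constant, k time-delayed copies of the state, and their unique quadratic products, then fits output weights with ridge regression. All specifics here (k = 2 delay taps, the ridge parameter alpha, the Lorenz '63 test system, and one-step-ahead targets rather than state increments) are illustrative assumptions.

    import numpy as np

    # Toy data: a Lorenz '63 trajectory integrated with a fixed-step RK4 scheme.
    def lorenz_rhs(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def integrate(v0, dt, steps):
        traj = np.empty((steps, 3))
        v = np.asarray(v0, dtype=float)
        for n in range(steps):
            k1 = lorenz_rhs(v)
            k2 = lorenz_rhs(v + 0.5 * dt * k1)
            k3 = lorenz_rhs(v + 0.5 * dt * k2)
            k4 = lorenz_rhs(v + dt * k3)
            v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[n] = v
        return traj

    # NVAR feature map: a constant, k time-delayed copies of the d-dimensional
    # state, and all unique quadratic products of the delayed states. The map
    # is deterministic -- no randomly sampled matrices anywhere.
    def nvar_features(X, k):
        T = X.shape[0]
        lin = np.hstack([X[k - 1 - j : T - j] for j in range(k)])  # times t = k-1..T-1
        i, j = np.triu_indices(lin.shape[1])
        quad = lin[:, i] * lin[:, j]          # unique pairwise products
        return np.hstack([np.ones((lin.shape[0], 1)), lin, quad])

    # Illustrative hyperparameters (assumptions, not values from the paper).
    dt, k, alpha = 0.025, 2, 1e-6

    X = integrate([1.0, 1.0, 1.0], dt, 4000)
    feats = nvar_features(X, k)[:-1]          # features at times t = k-1 .. T-2
    targets = X[k:]                           # matching next states x(t+1)

    # Ridge (Tikhonov-regularized linear) regression for the output weights:
    # W = argmin ||feats @ W - targets||^2 + alpha * ||W||^2.
    split = 3000
    A = feats[:split].T @ feats[:split] + alpha * np.eye(feats.shape[1])
    W = np.linalg.solve(A, feats[:split].T @ targets[:split])

    # One-step-ahead prediction on held-out data.
    pred = feats[split:] @ W
    rmse = np.sqrt(np.mean((pred - targets[split:]) ** 2))
    print(f"one-step RMSE on held-out Lorenz data: {rmse:.2e}")

Because the feature map is deterministic, there is nothing random to sample, and the only metaparameters are the number of delay taps and the ridge parameter; the training step reduces to solving one small linear system, which is the practical advantage the abstract highlights.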
