Optimal nonlinear information processing capacity in delay-based reservoir computers.

Lyudmila Grigoryeva, Julie Henriques, Laurent Larger, Juan-Pablo Ortega
Author Information
  1. Lyudmila Grigoryeva: Laboratoire de Mathématiques de Besançon, UMR CNRS 6623, Université de Franche-Comté, UFR des Sciences et Techniques, 16, route de Gray, F-25030 Besançon cedex, France.
  2. Julie Henriques: Laboratoire de Mathématiques de Besançon, UMR CNRS 6623, Université de Franche-Comté, UFR des Sciences et Techniques, 16, route de Gray, F-25030 Besançon cedex, France.
  3. Laurent Larger: FEMTO-ST, UMR CNRS 6174, Optics Department, Université de Franche-Comté, UFR des Sciences et Techniques, 15, Avenue des Montboucons, F-25000 Besançon cedex, France.
  4. Juan-Pablo Ortega: Centre National de la Recherche Scientifique, Laboratoire de Mathématiques de Besançon, UMR CNRS 6623, Université de Franche-Comté, UFR des Sciences et Techniques, 16, route de Gray, F-25030 Besançon cedex, France.

Abstract

Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme, but also for the problematic sensitivity of its performance to the architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
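The scheme described in the abstract is compact enough to sketch in code. The following is a minimal illustration of a time-multiplexed delay reservoir with the standard ridge-regression readout that makes reservoir training easy. The tanh nonlinearity, the parameter names eta and gamma, and the simple one-step discretization of the delay loop are illustrative assumptions for exposition only; they are not the specific model analyzed in the paper.

    import numpy as np

    def run_reservoir(u, n_nodes=50, eta=0.5, gamma=0.05, seed=0):
        """Drive a discretized delay reservoir with the scalar input sequence u.

        Each of the n_nodes "virtual nodes" i updates as
            x_i(k) = tanh(eta * x_i(k-1) + gamma * m_i * u(k)),
        i.e. delayed self-feedback plus the input weighted by a random mask m,
        a simplified stand-in for the physical delay dynamics.
        """
        rng = np.random.default_rng(seed)
        mask = rng.uniform(-1.0, 1.0, n_nodes)      # input mask over the virtual nodes
        x = np.zeros(n_nodes)
        states = np.empty((len(u), n_nodes))
        for k, u_k in enumerate(u):
            x = np.tanh(eta * x + gamma * mask * u_k)
            states[k] = x
        return states

    def train_readout(states, target, ridge=1e-6):
        """Fit the linear readout by ridge regression: the only trained part of an RC."""
        X = np.hstack([states, np.ones((len(states), 1))])   # append a bias column
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)

    # Toy usage: one-step-ahead prediction of a noisy sine wave.
    t = np.arange(2000)
    u = np.sin(0.1 * t) + 0.05 * np.random.default_rng(1).standard_normal(2000)
    states = run_reservoir(u[:-1])
    W = train_readout(states, u[1:])
    prediction = np.hstack([states, np.ones((len(states), 1))]) @ W

In this toy setting, the sensitivity the abstract refers to shows up as a strong dependence of the prediction error on eta and gamma; the paper's contribution is to replace a brute-force scan over such parameters with an explicit function linking them to performance.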
