Effect of temporal resolution on the reproduction of chaotic dynamics via reservoir computing.

Kohei Tsuchiyama, André Röhm, Takatomo Mihana, Ryoichi Horisaki, Makoto Naruse
Author Information
  1. Kohei Tsuchiyama: Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
  2. André Röhm: Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
  3. Takatomo Mihana: Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
  4. Ryoichi Horisaki: Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
  5. Makoto Naruse: Department of Information Physics and Computing, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.

Abstract

Reservoir computing is a machine learning paradigm that uses a structure called a reservoir, which has nonlinearities and short-term memory. In recent years, reservoir computing has expanded beyond time series prediction and classification to new functions such as the autonomous generation of chaotic time series. Furthermore, novel possibilities have been demonstrated, such as inferring the existence of previously unseen attractors. Sampling, however, strongly influences such functions. Sampling is indispensable in a physical reservoir computer that uses an existing physical system as a reservoir, because the use of an external digital system for the data input is usually inevitable. This study analyzes the effect of sampling on the ability of reservoir computing to autonomously regenerate chaotic time series. We found, as expected, that excessively coarse sampling degrades the system performance, but also that excessively dense sampling is unsuitable. Based on quantitative indicators that capture the local and global characteristics of attractors, we identify a suitable window of the sampling frequency and discuss its underlying mechanisms.
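The setup described in the abstract can be illustrated with a minimal echo-state-network sketch: integrate a chaotic system, sample its trajectory at a chosen temporal resolution, train a linear readout, and then run the reservoir in closed loop so it autonomously regenerates the dynamics. This is an illustrative sketch only, not the authors' code; the `stride` parameter is a hypothetical stand-in for the sampling interval studied in the paper, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_series(n_steps, dt=0.002):
    """Integrate the Lorenz system with simple Euler steps (illustrative)."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

# Sample the continuous trajectory at a chosen temporal resolution:
# a larger stride corresponds to coarser sampling of the physical system.
stride = 10
data = lorenz_series(60_000)[::stride]
data = (data - data.mean(0)) / data.std(0)   # normalize each variable

# Reservoir (echo state network) with fixed random weights.
N = 300
W_in = rng.uniform(-0.5, 0.5, (N, 3))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence, returning all states."""
    states = np.empty((len(inputs), N))
    r = np.zeros(N)
    for t, u in enumerate(inputs):
        r = np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states

# Train a ridge-regression readout to predict the next sample.
train_len, washout = 4000, 200
states = run_reservoir(data[:train_len])
X, Y = states[washout:-1], data[washout + 1:train_len]
beta = 1e-6
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ Y).T

# Closed loop: feed the reservoir its own prediction so it generates
# the chaotic time series autonomously.
r = states[-1]
u = W_out @ r
free_run = [u]
for _ in range(999):
    r = np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    free_run.append(u)
free_run = np.array(free_run)
```

Varying `stride` changes the effective sampling frequency; the paper's finding is that both very small and very large values degrade the quality of the regenerated attractor, so the free-run trajectory would be evaluated against the true attractor with local and global indicators.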

MeSH Terms

Neural Networks, Computer
Machine Learning
Time Factors
Reproduction
