Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing.

Yanan Zhong, Jianshi Tang, Xinyi Li, Bin Gao, He Qian, Huaqiang Wu
Author Information
  1. Yanan Zhong: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China.
  2. Jianshi Tang: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China. jtang@tsinghua.edu.cn
  3. Xinyi Li: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China.
  4. Bin Gao: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China.
  5. He Qian: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China.
  6. Huaqiang Wu: Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, 100084, Beijing, China. wuhq@tsinghua.edu.cn

Abstract

Reservoir computing is a highly efficient framework for processing temporal signals because of its low training cost compared with standard recurrent neural networks, and generating rich reservoir states is critical to its hardware implementation. In this work, we report a parallel dynamic memristor-based reservoir computing system that applies a controllable mask process, in which the critical parameters, including state richness, feedback strength, and input scaling, can be tuned by changing the mask length and the range of the input signal. Our system achieves a low word error rate of 0.4% in spoken-digit recognition and a low normalized root-mean-square error of 0.046 in time-series prediction of the Hénon map, outperforming most existing hardware-based reservoir computing systems and even a software-based one on the Hénon map prediction task. Our work could pave the way towards high-efficiency memristor-based reservoir computing systems that handle more complex temporal tasks in the future.
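To illustrate the general scheme the abstract describes, the following is a minimal software sketch of mask-based reservoir computing applied to one-step-ahead Hénon map prediction. It is not the authors' hardware system: the binary mask, the mask length of 50, the leaky-tanh node dynamics standing in for the memristor's fading memory, and the ridge-regression readout are all illustrative assumptions; only the Hénon map itself (a = 1.4, b = 0.3) and the overall mask-then-readout structure come from the standard reservoir computing literature.

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3):
    """Generate n samples of the x-coordinate of the classical Henon map."""
    x, y = 0.1, 0.1
    out = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out.append(x)
    return np.array(out)

def reservoir_states(u, mask, leak=0.5):
    """Mask process: each input sample is multiplied by a mask of length M,
    yielding M virtual nodes per time step. A leaky tanh update (assumed
    here) mimics a dynamic node with fading memory; nodes are chained so
    states mix across the mask, as in delay-based reservoirs."""
    M = len(mask)
    states = np.zeros((len(u), M))
    s = np.zeros(M)
    for t, ut in enumerate(u):
        for i in range(M):
            prev = s[i - 1] if i > 0 else s[-1]  # ring coupling of nodes
            s[i] = (1.0 - leak) * prev + leak * np.tanh(mask[i] * ut)
        states[t] = s
    return states

rng = np.random.default_rng(0)
series = henon_series(1200)
u, target = series[:-1], series[1:]          # predict the next sample
mask = rng.choice([-1.0, 1.0], size=50)      # mask length M = 50 (assumed)
X = reservoir_states(u, mask)

# Only the linear readout is trained (ridge regression) -- this is the
# low-training-cost property of reservoir computing.
n_train = 1000
A, y = X[:n_train], target[:n_train]
W = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y)

pred = X[n_train:] @ W
nrmse = np.sqrt(np.mean((pred - target[n_train:]) ** 2)) / np.std(target[n_train:])
print(f"test NRMSE: {nrmse:.3f}")
```

Tuning the mask length (number of virtual nodes) and the input scaling changes the richness of the reservoir states, which is the knob the paper exploits in hardware; in this toy simulation the same parameters control the trade-off between expressiveness and readout overfitting.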

