Hardware-Efficient Stochastic Binary CNN Architectures for Near-Sensor Computing

Vivek Parmar, Bogdan Penkovsky, Damien Querlioz, Manan Suri
Author Information
  1. Vivek Parmar: Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India.
  2. Bogdan Penkovsky: Centre de Nanosciences et de Nanotechnologies, Université Paris-Saclay, CNRS, Palaiseau, France.
  3. Damien Querlioz: Centre de Nanosciences et de Nanotechnologies, Université Paris-Saclay, CNRS, Palaiseau, France.
  4. Manan Suri: Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India.

Abstract

With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible at the edge. Such networks, however, typically implement the first layer with high precision, which poses a challenge for deploying a uniform hardware mapping of the network. Stochastic computing allows such high-precision computations to be converted into a sequence of binarized operations while maintaining equivalent accuracy. In this work, we propose a fully binarized, hardware-friendly computation engine based on stochastic computing as a proof of concept for vision applications involving multi-channel inputs. Stochastic sampling is performed by sampling from a non-uniform (normal) distribution derived from analog hardware sources. We first validate the benefits of the proposed pipeline on the CIFAR-10 dataset. To further demonstrate its application in real-world scenarios, we present a case study of microscopy image diagnostics for pathogen detection. We then evaluate the benefits of implementing such a pipeline using OxRAM-based circuits for stochastic sampling as well as in-memory computing-based binarized multiplication. The proposed implementation is about 1,000 times more energy efficient than conventional floating-precision digital implementations, with memory savings of a factor of 45.
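The core idea of the abstract — replacing the high-precision first layer with stochastic sampling against normally distributed thresholds, followed by purely binary (XNOR/popcount-style) arithmetic — can be sketched in software. The sketch below is a hypothetical software analogue, not the authors' circuit: `to_stream` stands in for the analog OxRAM-based normal sampler, and `stochastic_dot` mimics a first-layer multiply-accumulate with binary weights; all function names, the stream length `n_bits`, and the distribution parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(x, n_bits=256, mu=0.0, sigma=1.0):
    # Encode a real-valued input as a stochastic bitstream by comparing it
    # against thresholds drawn from a normal distribution. This plays the
    # role of the analog hardware sampling source described in the paper.
    thresholds = rng.normal(mu, sigma, size=n_bits)
    return (x > thresholds).astype(np.int8)  # bits in {0, 1}

def stochastic_dot(x_vals, w_signs, n_bits=256):
    # Binarized multiply-accumulate: each high-precision input becomes a
    # bitstream, multiplication against a binary weight in {-1, +1} reduces
    # to an XNOR (bit-agreement) count, and the popcount is remapped from
    # {0, 1} counts back to a bipolar {-1, +1} sum.
    acc = 0
    for x, w in zip(x_vals, w_signs):
        stream = to_stream(x, n_bits)
        target = 1 if w > 0 else 0
        agree = int(np.sum(stream == target))  # XNOR popcount
        acc += 2 * agree - n_bits
    return acc
```

Because each bit of the stream is a Bernoulli sample with success probability given by the normal CDF at the input value, the popcount concentrates around a monotone (squashed) function of the true dot product as `n_bits` grows, which is why accuracy can be traded off against stream length.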

