Fusion of multi-scale bag of deep visual words features of chest X-ray images to detect COVID-19 infection.

Chiranjibi Sitaula, Tej Bahadur Shahi, Sunil Aryal, Faezeh Marzbanrad
Author Information
  1. Chiranjibi Sitaula: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC, 3800, Australia. Chiranjibi.Sitaula@monash.edu.
  2. Tej Bahadur Shahi: School of Engineering and Technology, Central Queensland University, Rockhampton, QLD, 4701, Australia.
  3. Sunil Aryal: School of Information Technology, Deakin University, Waurn Ponds, VIC, 3216, Australia.
  4. Faezeh Marzbanrad: Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC, 3800, Australia.

Abstract

Chest X-ray (CXR) imaging has been one of the important tools in the diagnosis of COVID-19, and deep learning (DL)-based methods have been used heavily to analyze CXR images. Compared to other DL-based methods, the recently proposed bag of deep visual words (BoDVW) method has been shown to provide a more discriminative representation of CXR images. However, single-scale BoDVW features are insufficient to capture the detailed semantic information of the infected regions in the lungs, as the resolution of such images varies in real applications. In this paper, we propose new multi-scale bag of deep visual words (MBoDVW) features, which exploit three different scales of the output feature map of the 4th pooling layer of the VGG-16 model. To extract the MBoDVW features, we perform a convolution with max pooling operation over the 4th pooling layer's output using three different kernels: 1×1, 2×2, and 3×3. We evaluate the proposed features with the Support Vector Machine (SVM) classification algorithm on four public CXR datasets (CD1, CD2, CD3, and CD4) comprising over 5,000 CXR images. Experimental results show that our method produces stable and high classification accuracy (84.37%, 88.88%, 90.29%, and 83.65% on CD1, CD2, CD3, and CD4, respectively).
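
Below is a minimal Python sketch of the pipeline described above, assuming a Keras VGG-16 backbone ("block4_pool" is Keras's name for the 4th pooling layer). The convolution with max pooling step is approximated here by max pooling alone, and the codebook size, SVM settings, and toy data are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the MBoDVW pipeline described in the abstract,
# assuming a Keras VGG-16 backbone. The convolution-with-max-pooling
# step is approximated here by max pooling alone; the codebook size,
# SVM settings, and toy data are illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# VGG-16 pre-trained on ImageNet; "block4_pool" is its 4th pooling
# layer, which outputs a 14 x 14 x 512 map for a 224 x 224 input.
backbone = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
feat_model = tf.keras.Model(backbone.input,
                            backbone.get_layer("block4_pool").output)

def multi_scale_descriptors(image_batch):
    """Pool the block4_pool map with 1x1, 2x2, and 3x3 kernels (the
    three scales) and return one matrix of 512-d descriptors per scale."""
    fmap = feat_model(preprocess_input(image_batch.astype("float32")))
    per_scale = []
    for k in (1, 2, 3):
        pooled = tf.nn.max_pool2d(fmap, ksize=k, strides=k, padding="VALID")
        per_scale.append(tf.reshape(pooled, (-1, 512)).numpy())
    return per_scale

def bodvw_histogram(descriptors, codebook):
    """Quantize descriptors against a k-means codebook and return an
    L1-normalized bag-of-visual-words histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    # Toy demo on random data so the sketch runs end to end; replace
    # with real CXR images and labels.
    rng = np.random.default_rng(0)
    images = rng.uniform(0, 255, size=(8, 224, 224, 3))
    labels = np.array([0, 1] * 4)

    per_image = [multi_scale_descriptors(images[i:i + 1]) for i in range(8)]
    # One codebook per scale, fit on the pooled descriptors of all images.
    codebooks = [KMeans(n_clusters=4, n_init=10).fit(
        np.vstack([d[s] for d in per_image])) for s in range(3)]
    # Fusion: concatenate the per-scale histograms into one feature vector.
    X = np.stack([np.concatenate(
        [bodvw_histogram(d[s], codebooks[s]) for s in range(3)])
        for d in per_image])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
```

In the actual method, the codebooks would be learned from training-set descriptors of the real CXR datasets, and the fused multi-scale histograms fed to the SVM to separate COVID-19 from non-COVID-19 cases.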

MeSH Terms

Algorithms
COVID-19
Databases, Factual
Deep Learning
Humans
Radiographic Image Interpretation, Computer-Assisted
Support Vector Machine
