Using super-resolution generative adversarial network models and transfer learning to obtain high resolution digital periapical radiographs.

Maira B H Moran, Marcelo D B Faria, Gilson A Giraldi, Luciana F Bastos, Aura Conci
Author Information
  1. Maira B H Moran: Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil; Instituto de Computação, Universidade Federal Fluminense, 24210-310, Niterói, Brazil. Electronic address: mhernandez@id.uff.br.
  2. Marcelo D B Faria: Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil; Faculdade de Odontologia, Universidade Federal do Rio de Janeiro, 21941-617, Rio de Janeiro, Brazil.
  3. Gilson A Giraldi: Laboratório Nacional de Computação Científica, 25651-076, Petrópolis, Brazil.
  4. Luciana F Bastos: Policlínica Piquet Carneiro, Universidade do Estado do Rio de Janeiro, 20950-003, Rio de Janeiro, Brazil.
  5. Aura Conci: Instituto de Computação, Universidade Federal Fluminense, 24210-310, Niterói, Brazil.

Abstract

Periapical radiographs are commonly used to detect several anomalies, such as caries and periodontal and periapical diseases. Although the digital imaging systems in use today tend to provide high-quality images, external factors or system limitations can still produce a vast number of radiographic images with low quality and resolution. Commercial solutions offer tools based on interpolation methods to increase image resolution; however, the literature shows that these methods may introduce undesirable artifacts that affect diagnostic accuracy. One alternative is to use deep learning-based super-resolution methods to obtain better high-resolution images. Nevertheless, the amount of data available for training such models is limited, demanding transfer learning approaches. In this work, we propose the use of super-resolution generative adversarial network (SRGAN) models and transfer learning to obtain periapical images with higher quality and resolution. Moreover, we evaluate how the transfer learning approach, and the datasets selected for it, influence the final generated images. To this end, we performed an experiment comparing the performance of the SRGAN models (with and without transfer learning) against other super-resolution methods. In terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS), the SRGAN models using transfer learning performed better on average. This superiority was also verified statistically using the Wilcoxon paired test. The visual analysis confirms the high quality generally achieved by the SRGAN models, with better-defined edge details and fewer blurring effects.
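Of the fidelity metrics named above, MSE and PSNR are simple pixel-wise computations. The sketch below (an illustration, not the authors' code) shows both for a pair of same-sized grayscale images in NumPy, assuming pixel values normalized to [0, 1]:

```python
import numpy as np

def mse(reference: np.ndarray, generated: np.ndarray) -> float:
    """Mean Square Error between two same-sized grayscale images."""
    diff = reference.astype(np.float64) - generated.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, generated: np.ndarray, max_value: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    error = mse(reference, generated)
    if error == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_value ** 2 / error))

# Toy example: a flat reference vs. a uniformly offset reconstruction.
ref = np.zeros((8, 8))
gen = np.full((8, 8), 0.5)
print(mse(ref, gen))   # 0.25
print(psnr(ref, gen))  # ~6.02 dB
```

SSIM is more involved (local comparisons of luminance, contrast, and structure over a sliding window), so in practice it is usually taken from a library such as `skimage.metrics.structural_similarity` rather than reimplemented; MOS, by contrast, is a subjective score averaged over human raters.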

MeSH Terms

Image Processing, Computer-Assisted
Machine Learning
Signal-To-Noise Ratio

