Automatic mandibular canal detection using a deep convolutional neural network.

Gloria Hyunjung Kwak, Eun-Jung Kwak, Jae Min Song, Hae Ryoun Park, Yun-Hoa Jung, Bong-Hae Cho, Pan Hui, Jae Joon Hwang
Author Information
  1. Gloria Hyunjung Kwak: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Pokfulam, Hong Kong.
  2. Eun-Jung Kwak: National Dental Care Center for Persons with Special Needs, Seoul National University Dental Hospital, Seoul, Korea.
  3. Jae Min Song: Department of Oral and Maxillofacial Surgery, School of Dentistry, Pusan National University, Pusan, Korea.
  4. Hae Ryoun Park: Department of Oral Pathology & BK21 PLUS Project, School of Dentistry, Pusan National University, Yangsan, Korea.
  5. Yun-Hoa Jung: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental and Life Science Institute, Yangsan, Korea.
  6. Bong-Hae Cho: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental and Life Science Institute, Yangsan, Korea.
  7. Pan Hui: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Pokfulam, Hong Kong.
  8. Jae Joon Hwang: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental and Life Science Institute, Yangsan, Korea. softdent@pusan.ac.kr.

Abstract

The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In line with the increasing demands of the healthcare industry, techniques for automatic prediction and detection are being widely researched. In dentistry in particular, automated mandibular canal detection has become highly desirable for several reasons. The position of the inferior alveolar nerve (IAN), one of the major structures in the mandible, is crucial to prevent nerve injury during surgical procedures. However, automatic segmentation of cone-beam computed tomography (CBCT) images poses certain difficulties, such as the complex appearance of the human skull, the limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet and 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrated a higher global accuracy (0.82) than the naïve U-Net variants, the 2D SegNet showed the second-highest global accuracy of 0.96, and the 3D U-Net showed the best global accuracy of 0.99. An automated canal detection system based on deep learning will contribute significantly to efficient treatment planning and to reducing patients' discomfort during dental treatment. This study serves as a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
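
To make the modelling approach concrete, the sketch below illustrates the kind of 2D U-Net encoder-decoder described in the abstract, with adjacent CBCT slices stacked as input channels so that the 2D network still sees local volumetric context around the canal. This is an illustrative sketch rather than the authors' implementation: the class name SliceUNet, the three-slice input, the 32-channel base width, the 256 x 256 input size, and the choice of PyTorch are all assumptions made here for demonstration.

```python
# Minimal sketch (not the authors' code) of a "2D U-Net with adjacent images":
# three neighbouring CBCT slices are stacked as input channels, and the network
# predicts a per-pixel canal/background label map for the centre slice.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SliceUNet(nn.Module):
    """Small 2D U-Net taking a stack of adjacent CBCT slices as channels (hypothetical sizes)."""
    def __init__(self, in_channels=3, n_classes=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)  # canal vs. background logits

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        e3 = self.enc3(self.pool(e2))                          # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection from enc2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection from enc1
        return self.head(d1)                                   # per-pixel class logits

if __name__ == "__main__":
    model = SliceUNet()
    slices = torch.randn(1, 3, 256, 256)   # one stack of three adjacent slices
    logits = model(slices)
    print(logits.shape)                    # torch.Size([1, 2, 256, 256])
```

In practice such a network would be trained against manually annotated canal masks with a cross-entropy or Dice-style loss, and the same encoder-decoder pattern extends to the 3D U-Net variant by replacing the 2D convolution, pooling, and upsampling layers with their 3D counterparts.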

References

  1. Ghatak, R. N. & Anatomy, G. J. Head and Neck, Mandibular Nerve. (2018).
  2. Phillips, C. & Essick, G. Inferior alveolar nerve injury following orthognathic surgery: a review of assessment issues. Journal of oral rehabilitation 38, 547–554, https://doi.org/10.1111/j.1365-2842.2010.02176.x (2011). [DOI: 10.1111/j.1365-2842.2010.02176.x]
  3. Sarikov, R. & Juodzbalys, G. Inferior alveolar nerve injury after mandibular third molar extraction: a literature review. Journal of oral & maxillofacial research 5, e1–e1, https://doi.org/10.5037/jomr.2014.5401 (2014). [DOI: 10.5037/jomr.2014.5401]
  4. Shavit, I. & Juodzbalys, G. Inferior alveolar nerve injuries following implant placement - importance of early diagnosis and treatment: a systematic review. Journal of oral & maxillofacial research 5, e2–e2, https://doi.org/10.5037/jomr.2014.5402 (2014). [DOI: 10.5037/jomr.2014.5402]
  5. Ai, C. J., Jabar, N. A., Lan, T. H. & Ramli, R. Mandibular Canal Enlargement: Clinical and Radiological Characteristics. Journal of clinical imaging science 7, 28–28, https://doi.org/10.4103/jcis.JCIS_28_17 (2017). [DOI: 10.4103/jcis.JCIS_28_17]
  6. Jung, Y.-H. & Cho, B.-H. Radiographic evaluation of the course and visibility of the mandibular canal. Imaging science in dentistry 44, 273–278, https://doi.org/10.5624/isd.2014.44.4.273 (2014). [DOI: 10.5624/isd.2014.44.4.273]
  7. Jaju, P. P. & Jaju, S. P. Clinical utility of dental cone-beam computed tomography: current perspectives. Clinical, cosmetic and investigational dentistry 6, 29–43, https://doi.org/10.2147/CCIDE.S41621 (2014). [DOI: 10.2147/CCIDE.S41621]
  8. Scarfe, W. C. & Farman, A. G. What is Cone-Beam CT and How Does it Work? Dental Clinics of North America 52, 707–730, https://doi.org/10.1016/j.cden.2008.05.005 (2008). [DOI: 10.1016/j.cden.2008.05.005]
  9. Al-Okshi, A., Lindh, C., Salé, H., Gunnarsson, M. & Rohlin, M. Effective dose of cone beam CT (CBCT) of the facial skeleton: a systematic review. The British journal of radiology 88, 20140658–20140658, https://doi.org/10.1259/bjr.20140658 (2015). [DOI: 10.1259/bjr.20140658]
  10. Pauwels, R., Jacobs, R., Singer, S. R. & Mupparapu, M. CBCT-based bone quality assessment: are Hounsfield units applicable? Dento maxillo facial radiology 44, 20140238–20140238, https://doi.org/10.1259/dmfr.20140238 (2015). [DOI: 10.1259/dmfr.20140238]
  11. Hwang, J.-J., Jung, Y.-H., Cho, B.-H. & Heo, M.-S. An overview of deep learning in the field of dentistry. Imaging science in dentistry 49, 1–7, https://doi.org/10.5624/isd.2019.49.1.1 (2019). [DOI: 10.5624/isd.2019.49.1.1]
  12. Yosinski, J., Clune, J., Bengio, Y. & Lipson, H. How transferable are features in deep neural networks? In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, 3320–3328 (MIT Press, Montreal, Canada, 2014).
  13. Nishio, M. et al. Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning. PloS one 13, e0200721, https://doi.org/10.1371/journal.pone.0200721 (2018). [DOI: 10.1371/journal.pone.0200721]
  14. Kwak, G. H. & Hui, P. DeepHealth: Deep Learning for Health Informatics. arXiv preprint (2019).
  15. Shan, H. et al. 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2-D Trained Network. IEEE Transactions on Medical Imaging 37, 1522–1534, https://doi.org/10.1109/TMI.2018.2832217 (2018). [DOI: 10.1109/TMI.2018.2832217]
  16. Vinayahalingam, S., Xi, T., Berge, S., Maal, T. & De Jong, G. Automated detection of third molars and mandibular nerve by deep learning. Scientific Reports 9, https://doi.org/10.1038/s41598-019-45487-3 (2019).
  17. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (Springer, 2015).
  18. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Lecture Notes in Computer Science 9901, 424–432, https://doi.org/10.1007/978-3-319-46723-8_49 (2016). [DOI: 10.1007/978-3-319-46723-8_49]
  19. Badrinarayanan, V., Kendall, A. & Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE transactions on pattern analysis and machine intelligence 39, 2481–2495, https://doi.org/10.1109/tpami.2016.2644615 (2017). [DOI: 10.1109/tpami.2016.2644615]
  20. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint (2014).
  21. Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440 (2015).
  22. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition (2014).
  23. Eigen, D. & Fergus, R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, 2650–2658 (2015).
  24. Kingma, D. & Ba, J. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (2014).
  25. Moris, B., Claesen, L., Yi, S. & Politis, C. In Fourth International Conference on Communications and Electronics (ICCE), 327–332 (2012).
  26. Kim, S. T. et al. Location of the mandibular canal and the topography of its neurovascular structures. The Journal of craniofacial surgery 20, 936–939, https://doi.org/10.1097/SCS.0b013e3181a14c79 (2009). [DOI: 10.1097/SCS.0b013e3181a14c79]
  27. Lee, H. E. & Han, S. J. Anatomical position of the mandibular canal in relation to the buccal cortical bone: relevance to sagittal split osteotomy. Journal of the Korean Association of Oral and Maxillofacial Surgeons 44, 167–173, https://doi.org/10.5125/jkaoms.2018.44.4.167 (2018). [DOI: 10.5125/jkaoms.2018.44.4.167]
  28. Oliveira-Santos, C. et al. Visibility of the mandibular canal on CBCT cross-sectional images. Journal of applied oral science: revista FOB 19, 240–243, https://doi.org/10.1590/S1678-77572011000300011 (2011). [DOI: 10.1590/S1678-77572011000300011]
  29. Gu, L., Zhu, C., Chen, K., Liu, X. & Tang, Z. Anatomic study of the position of the mandibular canal and corresponding mandibular third molar on cone-beam computed tomography images. Surgical and radiologic anatomy: SRA 40, 609–614, https://doi.org/10.1007/s00276-017-1928-6 (2018). [DOI: 10.1007/s00276-017-1928-6]
  30. Kroon, D.-J. Segmentation of the mandibular canal in cone-beam CT data. (2011).
  31. Abdolali, F. et al. Automatic segmentation of mandibular canal in cone beam CT images using conditional statistical shape model and fast marching. International journal of computer assisted radiology and surgery 12, 581–593, https://doi.org/10.1007/s11548-016-1484-2 (2017). [DOI: 10.1007/s11548-016-1484-2]
  32. Gerlach, N. L. et al. Evaluation of the potential of automatic segmentation of the mandibular canal using cone-beam computed tomography. The British journal of oral & maxillofacial surgery 52, 838–844, https://doi.org/10.1016/j.bjoms.2014.07.253 (2014). [DOI: 10.1016/j.bjoms.2014.07.253]
  33. Razzak, M. I., Naz, S. & Zaib, A. Deep learning for medical image processing: overview, challenges and the future. In Classification in BioApps, 323–350 (Springer, 2018).
  34. Roy, S., Krishna, G., Dubey, S. R. & Chaudhuri, B. HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification. (2019).

MeSH Terms

Adolescent
Adult
Aged
Aged, 80 and over
Cone-Beam Computed Tomography
Deep Learning
Female
Humans
Imaging, Three-Dimensional
Male
Mandible
Mandibular Nerve
Middle Aged
Neural Networks, Computer
Patient Care Planning
Temporomandibular Joint Disorders
Young Adult
