Topic Modeling for Interpretable Text Classification From EHRs.

Emil Rijcken, Uzay Kaymak, Floortje Scheepers, Pablo Mosteiro, Kalliopi Zervanou, Marco Spruit
Author Information
  1. Emil Rijcken: Jheronimus Academy of Data Science, Eindhoven University of Technology, Eindhoven, Netherlands.
  2. Uzay Kaymak: Jheronimus Academy of Data Science, Eindhoven University of Technology, Eindhoven, Netherlands.
  3. Floortje Scheepers: University Medical Center Utrecht, Utrecht, Netherlands.
  4. Pablo Mosteiro: Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands.
  5. Kalliopi Zervanou: Public Health and Primary Care (PHEG), Leiden University Medical Center, Leiden University, Leiden, Netherlands.
  6. Marco Spruit: Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands.

Abstract

Clinical notes in electronic health records offer many opportunities for predictive text classification tasks. In the clinical domain, the interpretability of these classification models is critical for decision making. Using topic models for text classification of electronic health records allows topics to serve as features, making the classification more interpretable. However, selecting the most effective topic model is not trivial. In this work, we propose considerations for selecting a suitable topic model based on both predictive performance and an interpretability measure for text classification. We compare 17 topic models in terms of interpretability and predictive performance on an inpatient violence prediction task using clinical notes. We find no correlation between interpretability and predictive performance. Furthermore, although no model outperforms all others on both criteria, our proposed fuzzy topic modeling algorithm (FLSA-W) performs best in most settings for interpretability, whereas two state-of-the-art methods (ProdLDA and LSI) achieve the best predictive performance.
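
As a rough illustration of the approach described in the abstract (document-topic proportions as features for an interpretable classifier), the sketch below trains a gensim LDA model on toy tokenized notes, feeds the resulting topic proportions to a scikit-learn logistic regression, and reports c_v topic coherence as one common interpretability proxy. The data, model choices, and coherence metric are illustrative assumptions, not the authors' pipeline; the paper's own models (e.g., FLSA-W, ProdLDA, LSI) and its inpatient violence prediction data are not reproduced here.

# Illustrative sketch only: topic proportions as interpretable features for
# note classification, plus a coherence score as a rough interpretability proxy.
# The corpus, labels, and hyperparameters below are placeholders.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel
from sklearn.linear_model import LogisticRegression
import numpy as np

# Toy "clinical notes" (already tokenized) and binary outcome labels.
docs = [
    ["patient", "agitated", "verbal", "aggression", "staff"],
    ["calm", "cooperative", "medication", "adjusted"],
    ["threatened", "staff", "restraint", "applied"],
    ["sleep", "improved", "medication", "stable"],
]
labels = np.array([1, 0, 1, 0])  # hypothetical violence/no-violence labels

dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

# Any topic model that yields document-topic distributions could be swapped in
# here (plain LDA is used for brevity).
lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2, random_state=0)

# Document-topic proportions become the feature matrix for the classifier.
X = np.array([
    [prob for _, prob in lda.get_document_topics(doc, minimum_probability=0.0)]
    for doc in bow
])
clf = LogisticRegression().fit(X, labels)
print("Predicted:", clf.predict(X))

# Topic coherence (c_v) as one widely used proxy for topic interpretability.
coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print("c_v coherence:", coherence)

Because each feature is a topic, the classifier's coefficients can be inspected alongside each topic's top words, which is the interpretability argument the abstract makes for topic-based features.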

Keywords

  Explainability; information extraction; natural language processing; psychiatry
