Multi-label topic classification for COVID-19 literature with Bioformer.

Li Fang, Kai Wang
Author Information
  1. Li Fang: Raymond G. Perelman Center for Cellular and Molecular Therapeutics, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA.
  2. Kai Wang: Raymond G. Perelman Center for Cellular and Molecular Therapeutics, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA.

Abstract

We describe the Bioformer team's participation in the multi-label topic classification task for COVID-19 literature (track 5 of BioCreative VII). Topic classification is performed using different BERT models (BioBERT, PubMedBERT, and Bioformer). We formulate the topic classification task as a sentence pair classification problem, where the title is the first sentence and the abstract is the second sentence. Our results show that Bioformer outperforms BioBERT and PubMedBERT on this task. Compared to the baseline results, our best model increased the micro, macro, and instance-based F1 scores by 8.8%, 15.5%, and 7.4%, respectively. Bioformer achieved the highest micro F1 and macro F1 scores in this challenge. In post-challenge experiments, we found that further pretraining Bioformer on COVID-19 articles improves performance.
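The sentence-pair formulation and the multi-label decision rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the topic names and the example logits are hypothetical, and a real system would use a BERT tokenizer rather than hand-assembled `[CLS]`/`[SEP]` strings.

```python
import math

# Hypothetical topic labels for illustration; the actual BioCreative VII
# track 5 label set differs.
TOPICS = ["Treatment", "Diagnosis", "Prevention", "Mechanism"]

def build_sentence_pair(title, abstract):
    """Format a title/abstract pair in BERT's two-segment convention:
    segment A is the title, segment B is the abstract."""
    return f"[CLS] {title} [SEP] {abstract} [SEP]"

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_topics(logits, threshold=0.5):
    """Multi-label decision: each topic gets an independent sigmoid,
    so one article can receive several labels (unlike a softmax,
    which would force exactly one)."""
    return [t for t, z in zip(TOPICS, logits) if sigmoid(z) >= threshold]

pair = build_sentence_pair("Remdesivir in COVID-19", "We evaluate ...")
# Made-up logits, as if produced by the classification head:
labels = predict_topics([2.1, -1.3, 0.4, -0.2])
# labels -> ["Treatment", "Prevention"]
```

Using independent sigmoids per label (binary cross-entropy training) rather than a single softmax is the standard way to let an article carry multiple topics at once.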


Grants

  1. R01 LM012895/NLM NIH HHS

