Does the magic of BERT apply to medical code assignment? A quantitative study

Shaoxiong Ji, Matti Hölttä, Pekka Marttinen

Research output: Journal article › Scientific › peer-reviewed

44 Citations (Scopus)
123 Downloads (Pure)

Abstract

Unsupervised pretraining is an integral part of many natural language processing systems, and transfer learning with language models has achieved remarkable results in downstream tasks. In the clinical application of medical code assignment, diagnosis and procedure codes are inferred from lengthy clinical notes such as hospital discharge summaries. However, it is not clear whether pretrained models are useful for medical code prediction without further architecture engineering. This paper conducts a comprehensive quantitative analysis of the performance of various contextualized language models, pretrained in different domains, on medical code assignment from clinical notes. We propose a hierarchical fine-tuning architecture to capture interactions between distant words and adopt label-wise attention to exploit label information. Contrary to current trends, we demonstrate that a carefully trained classical CNN outperforms attention-based models on a MIMIC-III subset with frequent codes. Our empirical findings suggest directions for building robust medical code assignment models.
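The abstract refers to a label-wise attention mechanism that exploits label information on top of an encoder. As a rough, hypothetical sketch of that general idea (not the authors' implementation; the class name, dimensions, and the 50-label setup are assumptions for illustration), a minimal PyTorch version could look like this:

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Label-wise attention head: one attention distribution per medical code.

    Hypothetical sketch only; names and shapes are illustrative, not the
    paper's exact architecture.
    """

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # One learned query vector per label (e.g. per ICD code).
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        # Per-label scoring weights and biases for the final logits.
        self.output_weights = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.output_bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from any encoder
        # (CNN, BERT, or a hierarchical transformer over note chunks).
        # Attention scores per label over tokens: (batch, num_labels, seq_len)
        scores = torch.einsum("ld,btd->blt", self.label_queries, hidden_states)
        attn = torch.softmax(scores, dim=-1)
        # Label-specific document vectors: (batch, num_labels, hidden_dim)
        label_docs = torch.einsum("blt,btd->bld", attn, hidden_states)
        # Per-label logits: (batch, num_labels); sigmoid for multi-label codes.
        return (label_docs * self.output_weights).sum(-1) + self.output_bias


# Usage: multi-label code prediction over encoder outputs (shapes assumed).
encoder_out = torch.randn(2, 512, 768)            # e.g. BERT hidden states
head = LabelWiseAttention(hidden_dim=768, num_labels=50)
probs = torch.sigmoid(head(encoder_out))          # (2, 50) code probabilities
```

Under this reading, each code gets its own attention distribution over tokens, so a long discharge summary can contribute different evidence to different labels; the mechanism is independent of whether the underlying encoder is a classical CNN or a pretrained transformer.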
Original language: English
Article number: 104998
Journal: Computers in Biology and Medicine
Volume: 139
Early online date: Oct 2021
DOI (permanent link)
Status: Published - Dec 2021
OKM publication type: A1 Original article in a scientific journal
