Using Natural Language Processing to Identify Stigmatizing Language in Labor and Birth Clinical Notes

Veronica Barcelona*, Danielle Scharp, Hans Moen, Anahita Davoudi, Betina R. Idnay, Kenrick Cato, Maxim Topaz

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Introduction: Stigma and bias related to race and other minoritized statuses may underlie disparities in pregnancy and birth outcomes. One emerging method to identify bias is the study of stigmatizing language in the electronic health record. The objective of our study was to develop natural language processing (NLP) methods to accurately and automatically identify two types of stigmatizing language in labor and birth notes: marginalizing language and its complement, power/privilege language.

Methods: We analyzed notes for all birthing people > 20 weeks’ gestation admitted for labor and birth at two hospitals during 2017. We applied text preprocessing techniques, using TF-IDF values as inputs, and tested machine learning classification algorithms to identify stigmatizing and power/privilege language in clinical notes. The algorithms assessed included Decision Trees, Random Forest, and Support Vector Machines. Additionally, we applied a feature importance evaluation method (InfoGain) to identify words highly correlated with these language categories.

Results: For marginalizing language, Decision Trees yielded the best classification, with an F-score of 0.73. For power/privilege language, Support Vector Machines performed best, achieving an F-score of 0.91. These results demonstrate the effectiveness of the selected machine learning methods for classifying these language categories in clinical notes.

Conclusion: We identified well-performing machine learning methods to automatically detect stigmatizing language in clinical notes. To our knowledge, this is the first study to use NLP performance metrics to evaluate machine learning methods for discerning stigmatizing language. Future studies should further refine and evaluate NLP methods, incorporating the latest deep learning algorithms.
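As an illustration of the pipeline summarized above, the minimal sketch below pairs TF-IDF features with the three classifier families named in the abstract and reports F-scores, with mutual information standing in for InfoGain. It assumes scikit-learn as the library (not stated in the abstract), a linear SVM kernel, and invented toy notes and labels as placeholders for the study's annotated clinical notes; it is a sketch of the general approach, not the authors' implementation.

```python
# Sketch of the workflow described in the abstract: TF-IDF features,
# three classifiers (Decision Tree, Random Forest, SVM), F-score evaluation,
# and an information-gain-style ranking of words. Assumptions: scikit-learn,
# and the toy `notes`/`labels` below, which are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Invented toy examples; 1 = marginalizing language present, 0 = absent.
notes = [
    "patient insists on epidural despite counseling",
    "patient is pleasant and well educated",
    "patient claims she did not receive instructions",
    "supportive partner at bedside, no concerns",
    "patient refuses monitoring, noncompliant with plan",
    "patient is cooperative and asks appropriate questions",
    "patient adamant about leaving against medical advice",
    "family at bedside, patient resting comfortably",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# TF-IDF values as classifier inputs.
vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(notes)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)

# The three algorithm families assessed in the paper.
classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(kernel="linear", random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: F-score = {f1_score(y_test, clf.predict(X_test)):.2f}")

# Information-gain-style feature ranking (mutual information used here as a
# stand-in for InfoGain; the abstract does not specify an implementation).
scores = mutual_info_classif(X, labels)
top_terms = sorted(
    zip(vectorizer.get_feature_names_out(), scores), key=lambda t: t[1], reverse=True
)[:10]
print("Terms most associated with the label:", [term for term, _ in top_terms])
```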

Original language: English
Pages (from-to): 578–586
Journal: Maternal and Child Health Journal
Volume: 28
Issue number: 3
Early online date: 26 Dec 2023
DOIs
Publication status: Published - Mar 2024
MoE publication type: A1 Journal article-refereed

Keywords

  • Bias
  • Electronic health records
  • Natural language processing
