The Data Artifacts Glossary: a community-based repository for bias on health datasets.
Rodrigo R Gameiro, Naira Link Woite, Christopher M Sauer, Sicheng Hao, Chrystinne Oliveira Fernandes, Anna E Premo, Alice Rangel Teixeira, Isabelle Resli, An-Kwok Ian Wong, Leo Anthony Celi
Author Information
Rodrigo R Gameiro: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA.
Naira Link Woite: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA.
Christopher M Sauer: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA.
Sicheng Hao: Division of Pulmonary, Allergy, and Critical Care Medicine, Duke University, Durham, NC, USA.
Chrystinne Oliveira Fernandes: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA.
Anna E Premo: Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA, USA.
Alice Rangel Teixeira: Department of Philosophy, Universitat Autònoma de Barcelona, Barcelona, Spain.
Isabelle Resli: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, USA.
An-Kwok Ian Wong: Division of Pulmonary, Allergy, and Critical Care Medicine, Duke University, Durham, NC, USA.
Leo Anthony Celi: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA. lceli@mit.edu.
BACKGROUND: The deployment of Artificial Intelligence (AI) in healthcare has the potential to transform patient care through improved diagnostics, personalized treatment plans, and more efficient resource management. However, the effectiveness and fairness of AI depend critically on the data it learns from. Biased datasets can lead to AI outputs that perpetuate disparities, particularly affecting social minorities and marginalized groups.
OBJECTIVE: This paper introduces the "Data Artifacts Glossary", a dynamic, open-source framework designed to systematically document and update potential biases in healthcare datasets. The aim is to provide a comprehensive tool that enhances the transparency and accuracy of AI applications in healthcare and contributes to understanding and addressing health inequities.
METHODS: Using a methodology inspired by the Delphi method, a diverse team of experts conducted iterative rounds of discussion and literature review. The team synthesized these insights to develop a comprehensive list of bias categories and to design the glossary's structure. The Data Artifacts Glossary was piloted on the MIMIC-IV dataset to validate its utility and structure.
RESULTS: The Data Artifacts Glossary adopts a collaborative approach modeled on successful open-source projects such as Linux and Python. Hosted on GitHub, it uses robust version control and collaboration features that allow stakeholders from diverse backgrounds to contribute. Through a rigorous peer review process managed by community members, the glossary ensures the continual refinement and accuracy of its contents. Implementation of the Data Artifacts Glossary with the MIMIC-IV dataset illustrates its utility: it categorizes biases and facilitates their identification and understanding.
CONCLUSION: The Data Artifacts Glossary serves as a vital resource for enhancing the integrity of AI applications in healthcare by providing a mechanism to recognize and mitigate dataset biases before they impact AI outputs. It not only aids in avoiding bias in model development but also contributes to understanding and addressing the root causes of health disparities.