⚡ Quick Summary
A set of internationally developed guidelines has been released to mitigate the risk of bias in artificial intelligence (AI) used in healthcare technologies. These recommendations are part of the STANDING Together initiative and were published in The Lancet Digital Health and NEJM AI on December 18, 2024.
💡 Objectives of the Guidelines
- Ensure that datasets for training and testing medical AI systems reflect the diversity of the populations they serve.
- Promote the development of medical AI using comprehensive healthcare datasets that include underrepresented and underserved groups.
- Assist data publishers in identifying biases or limitations within their datasets.
- Establish testing protocols for AI technologies to detect bias and confirm equitable performance across different demographic groups (see the evaluation sketch after this list).
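To make the last objective concrete, here is a minimal sketch, not taken from the recommendations themselves, of how a team might stratify a classifier's sensitivity by a demographic attribute and flag subgroups that fall noticeably behind the overall figure. The data, group labels, and tolerance threshold are all illustrative assumptions.

```python
# Minimal sketch (not from the STANDING Together recommendations themselves):
# stratify a model's predictions by a demographic attribute and flag
# subgroups whose sensitivity falls noticeably below the overall value.
from collections import defaultdict

def sensitivity(y_true, y_pred):
    """True-positive rate: share of actual positives the model catches."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    pos = sum(y_true)
    return tp / pos if pos else float("nan")

def per_group_sensitivity(y_true, y_pred, groups):
    """Compute sensitivity separately for each demographic group."""
    buckets = defaultdict(lambda: ([], []))
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g][0].append(t)
        buckets[g][1].append(p)
    return {g: sensitivity(ts, ps) for g, (ts, ps) in buckets.items()}

# Toy labels, model predictions, and a hypothetical group attribute.
y_true = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

overall = sensitivity(y_true, y_pred)
by_group = per_group_sensitivity(y_true, y_pred, groups)

THRESHOLD = 0.10  # illustrative tolerance, not a published standard
for group, value in by_group.items():
    gap = overall - value
    flag = "REVIEW" if gap > THRESHOLD else "ok"
    print(f"group {group}: sensitivity={value:.2f} (overall {overall:.2f}) -> {flag}")
```

In practice, the same stratified comparison would be repeated for whichever metrics and demographic attributes the dataset documentation identifies as relevant.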
👩‍⚕️ Research Background
- The STANDING Together initiative is spearheaded by researchers from University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham.
- The study involved over 350 experts from 58 countries, highlighting the global concern regarding bias in medical AI.
📅 Expert Insights
Dr. Xiao Liu, an associate professor at the University of Birmingham, emphasized the importance of addressing the root causes of bias in data, stating, “To create lasting change in health equity, we must focus on fixing the source, not just the reflection.”
🚀 Importance of Diverse Data
While AI technologies have the potential to enhance diagnosis and treatment, they can also perpetuate existing biases, particularly affecting minority groups who are often underrepresented in datasets. This can lead to disparities in healthcare outcomes.
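As a rough illustration of the underrepresentation problem, the sketch below (hypothetical figures, not from the published guidance) compares a dataset's demographic composition with reference population shares and flags groups whose share of the data falls well short of their share of the population.

```python
# Minimal sketch, not part of the published guidance: compare a training
# dataset's demographic composition against reference population shares
# (assumed figures) and flag groups that appear underrepresented.
from collections import Counter

def composition(records, attribute):
    """Fraction of records carrying each value of a demographic attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

# Toy dataset and illustrative reference shares (not real figures).
records = ([{"ethnicity": "group_1"}] * 80
           + [{"ethnicity": "group_2"}] * 15
           + [{"ethnicity": "group_3"}] * 5)
reference = {"group_1": 0.60, "group_2": 0.25, "group_3": 0.15}

observed = composition(records, "ethnicity")
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    note = "underrepresented" if actual < 0.8 * expected else "ok"
    print(f"{group}: dataset {actual:.0%} vs population {expected:.0%} -> {note}")
```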
🏥 Collaborative Efforts
- The two-year research project included collaboration with over 30 institutions worldwide, including universities, regulators, patient groups, and health technology companies.
- Funding was provided by The Health Foundation and the NHS AI Lab, with support from the National Institute for Health and Care Research.
🔗 Global Impact
Sir Jeremy Farrar, chief scientist at the World Health Organization, stated that ensuring diverse and representative datasets is a global priority. The STANDING Together recommendations are seen as a significant advancement toward achieving equity in health AI.
📢 Future Directions
The guidelines are expected to assist regulatory agencies, health policy organizations, funding bodies, and ethical review committees in addressing bias in medical AI. Additionally, a commentary published in Nature Medicine emphasizes the need for public involvement in shaping medical AI research.