🧑🏼‍💻 Research - May 6, 2025

AI and inclusion in simulation education and leadership: a global cross-sectional evaluation of diversity.

🌟 Stay Updated!
Join AI Health Hub to receive the latest insights in health and AI.

⚡ Quick Summary

A recent study evaluated the demographic profiles of simulation instructors and heads of simulation labs using three AI platforms: ChatGPT, Gemini, and Claude. The findings revealed significant biases in AI-generated profiles, highlighting the need for ethical AI development to promote inclusivity in healthcare education.

🔍 Key Details

  • 📊 Dataset: 2014 instructors and 1880 lab heads across nine global locations
  • 🧩 AI Platforms Used: ChatGPT, Gemini, Claude
  • ⚙️ Study Duration: 5 days in November 2024
  • 🏆 Statistical Methods: ANOVA and chi-square tests with Bonferroni corrections

🔑 Key Takeaways

  • 📊 Significant demographic differences were observed among the AI platforms.
  • 👴 Claude profiles depicted older lab heads (mean age: 57 years) compared to instructors (mean age: 41 years).
  • ⚖️ Gender representation varied, with Claude showing a male predominance (63.5%) among lab heads.
  • 🌈 ChatGPT and Gemini reflected greater racial diversity, with up to 24.4% Black and 20.6% Hispanic/Latin representation.
  • 🔍 Specialty preferences differed, with Claude favoring anesthesiology and surgery.
  • 💡 Ethical AI development is essential to address biases in healthcare education.
  • 🌍 The study emphasizes the importance of diverse leadership in simulation-based medical education.

📚 Background

Simulation-based medical education (SBME) plays a crucial role in shaping healthcare professionals’ skills and identities, and the demographics of SBME leadership can significantly influence program design and learner outcomes. As artificial intelligence (AI) platforms are increasingly used to generate demographic data, it is vital to assess the biases that may perpetuate inequities in representation.

🗒️ Study

This global cross-sectional study evaluated the demographic profiles of simulation instructors and heads of simulation labs generated by AI platforms. Over five days in November 2024, standardized English prompts were submitted to ChatGPT, Gemini, and Claude to elicit demographic data, including age, gender, race/ethnicity, and medical specialty.
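
The Bonferroni correction named in the methods can be illustrated with a minimal Python sketch. The p-values and the three pairwise platform comparisons below are hypothetical assumptions for illustration, not figures from the study:

```python
# Illustrative sketch of a Bonferroni correction, the multiple-comparison
# adjustment the study reports using. All p-values here are hypothetical.

def bonferroni_threshold(alpha: float, m: int) -> float:
    """Adjusted significance threshold for a family of m comparisons."""
    return alpha / m

def flag_significant(p_values, alpha=0.05):
    """Mark each p-value against the Bonferroni-adjusted threshold."""
    threshold = bonferroni_threshold(alpha, len(p_values))
    return [(p, p < threshold) for p in p_values]

# Example: three pairwise platform comparisons (ChatGPT vs Gemini,
# ChatGPT vs Claude, Gemini vs Claude) at family-wise alpha = 0.05,
# so each test is judged against 0.05 / 3 ≈ 0.0167.
results = flag_significant([0.001, 0.030, 0.300])
for p, significant in results:
    print(p, "significant" if significant else "not significant")
```

Note that a nominal p-value of 0.030 would pass an uncorrected 0.05 threshold but fails the adjusted one, which is exactly the kind of false positive the correction guards against when many demographic variables are compared at once.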

📈 Results

The analysis revealed that Claude generated profiles that skewed towards older, predominantly White, and male lab heads, while ChatGPT and Gemini provided more balanced and diverse representations. The findings underscore the importance of recognizing and addressing biases in AI-generated data, as these can have significant implications for healthcare education and training.

🌍 Impact and Implications

The results of this study highlight the critical need for ethical AI development and enhanced AI literacy among educators and leaders in healthcare. By promoting diverse leadership in SBME, we can foster more equitable and inclusive training environments, ultimately improving healthcare outcomes for all communities. The implications of these findings extend beyond education, influencing the broader landscape of healthcare equity.

🔮 Conclusion

This study sheds light on the biases present in AI-generated demographic profiles within simulation-based medical education. The disparities observed between the AI platforms call for a concerted effort to address these biases through ethical practices and a commitment to diversity. As we move forward, it is essential to prioritize inclusivity in healthcare education to ensure that all voices are represented and valued.

💬 Your comments

What are your thoughts on the role of AI in shaping diversity in healthcare education? Let’s engage in a discussion! 💬 Share your insights in the comments below.

AI and inclusion in simulation education and leadership: a global cross-sectional evaluation of diversity.

Abstract

BACKGROUND: Simulation-based medical education (SBME) is a critical training tool in healthcare, shaping learners’ skills, professional identities, and inclusivity. Leadership demographics in SBME, including age, gender, race/ethnicity, and medical specialties, influence program design and learner outcomes. Artificial intelligence (AI) platforms increasingly generate demographic data, but their biases may perpetuate inequities in representation. This study evaluated the demographic profiles of simulation instructors and heads of simulation labs generated by three AI platforms (ChatGPT, Gemini, and Claude) across nine global locations.

METHODS: A global cross-sectional study was conducted over 5 days (November 2024). Standardized English prompts were used to generate demographic profiles of simulation instructors and heads of simulation labs from ChatGPT, Gemini, and Claude. Outputs included age, gender, race/ethnicity, and medical specialty data for 2014 instructors and 1880 lab heads. Statistical analyses included ANOVA for continuous variables and chi-square tests for categorical data, with Bonferroni corrections for multiple comparisons (significance threshold P < 0.05).

RESULTS: Significant demographic differences were observed among AI platforms. Claude profiles depicted older heads of simulation labs (mean: 57 years) compared to instructors (mean: 41 years), while ChatGPT and Gemini showed smaller age gaps. Gender representation varied, with ChatGPT and Gemini generating balanced profiles, while Claude showed a male predominance (63.5%) among lab heads. ChatGPT and Gemini outputs reflected greater racial diversity, with up to 24.4% Black and 20.6% Hispanic/Latin representation, while Claude predominantly featured White profiles (47.8%). Specialty preferences also differed, with Claude favoring anesthesiology and surgery, whereas ChatGPT and Gemini offered broader interdisciplinary representation.

CONCLUSIONS: AI-generated demographic profiles of SBME leadership reveal biases that may reinforce inequities in healthcare education. ChatGPT and Gemini demonstrated broader diversity in age, gender, and race, while Claude skewed towards older, White, and male profiles, particularly for leadership roles. Addressing these biases through ethical AI development, enhanced AI literacy, and the promotion of diverse leadership in SBME is essential to fostering equitable and inclusive training environments.

TRIAL REGISTRATION: Not applicable. This study exclusively used AI-generated synthetic data.

Authors: Berger-Estilita J, Gisselbaek M, Devos A, Chan A, Ingrassia PL, Meco BC, Chang OLB, Savoldelli GL, Matos FM, Dieckmann P, Østergaard D, Saxena S

Journal: Adv Simul (Lond)

Citation: Berger-Estilita J, et al. AI and inclusion in simulation education and leadership: a global cross-sectional evaluation of diversity. Adv Simul (Lond). 2025; 10:26. doi: 10.1186/s41077-025-00355-1

