🧑🏼‍💻 Research - October 22, 2024

Artificial intelligence-generated feedback on social signals in patient-provider communication: technical performance, feedback usability, and impact.


⚡ Quick Summary

This study explored the use of artificial intelligence (AI) to assess and provide feedback on nonverbal social signals in patient-provider communication during primary care visits. The findings indicate that AI can effectively identify key social cues, enhancing clinician awareness of implicit bias and promoting more equitable healthcare interactions.

🔍 Key Details

  • 📊 Dataset: 145 primary care visits analyzed
  • 👩‍⚕️ Participants: 24 clinicians engaged for feedback design
  • ⚙️ Technology: Machine learning models for social signal detection
  • 🏆 Evaluation: Impact assessed in a cohort of 108 primary care visits

🔑 Key Takeaways

  • 🤖 AI models successfully identified social signals like dominance, warmth, and engagement.
  • 📈 Clinician feedback indicated a preference for personalized dashboards for AI-generated insights.
  • 💡 Clinicians found raw nonverbal cues difficult to interpret, motivating higher-level social signals as an alternative feedback mechanism.
  • ⚖️ Fairness was demonstrated across all AI models, with better generalizability for provider dominance, provider engagement, and patient warmth.
  • 🔍 Stronger implicit race bias among clinicians was associated with less provider dominance and warmth during interactions.
  • 🌐 Interest in AI approaches was high, but clinicians suggested enhancements for better implementation.
  • 🏥 Potential applications include telehealth and medical education contexts.

📚 Background

Implicit bias in healthcare can lead to significant inequities, particularly in patient-provider interactions. Nonverbal social cues, such as dominance and warmth, play a crucial role in these interactions. The integration of AI technology offers a promising avenue to assess and improve communication, ultimately fostering more equitable healthcare experiences.

🗒️ Study

The study involved a comprehensive approach to evaluate AI’s capability in detecting social signals during primary care visits. Researchers built a machine-learning pipeline to analyze interactions from 145 visits and engaged 24 clinicians to ensure the feedback generated was usable and relevant to their workflows. The impact of this AI-driven feedback was then assessed in a prospective cohort of 108 visits.
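The paper does not publish its pipeline code, but the idea of turning recorded interactions into social-signal feedback can be illustrated with a minimal, hypothetical sketch: derive turn-taking features (speaking-time share, overlapping speech) from timed speaker turns, then map them to a simple "provider dominance" flag. The feature names and thresholds below are illustrative assumptions, not the study's actual models.

```python
# Hypothetical sketch of a social-signal pipeline: turn-level timing
# features -> a simple "provider dominance" flag. Thresholds are
# illustrative assumptions, not values from the study.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "provider" or "patient"
    start: float   # seconds into the visit
    end: float

def extract_features(turns):
    """Compute the provider's speaking-time share and interruption count."""
    total = sum(t.end - t.start for t in turns)
    provider_time = sum(t.end - t.start for t in turns if t.speaker == "provider")
    interruptions = sum(
        1
        for prev, cur in zip(turns, turns[1:])
        if cur.speaker == "provider" and cur.start < prev.end  # overlapping start
    )
    return {
        "provider_talk_share": provider_time / total if total else 0.0,
        "provider_interruptions": interruptions,
    }

def dominance_signal(features, share_cutoff=0.6, interrupt_cutoff=2):
    """Flag high provider dominance (illustrative cutoffs)."""
    return (features["provider_talk_share"] > share_cutoff
            or features["provider_interruptions"] >= interrupt_cutoff)

turns = [
    Turn("provider", 0.0, 30.0),
    Turn("patient", 30.0, 40.0),
    Turn("provider", 38.0, 70.0),  # starts before the patient finishes
]
feats = extract_features(turns)
print(feats, dominance_signal(feats))
```

In the study itself such signals were detected by machine-learning models rather than fixed rules; this sketch only shows the general shape of feature extraction feeding a feedback signal.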

📈 Results

The results demonstrated that AI models could effectively identify various social signals in patient-provider communication. Clinicians expressed a preference for receiving feedback through personalized dashboards, although they found interpreting nonverbal cues challenging. Notably, the study revealed that stronger implicit race bias among clinicians was linked to decreased provider dominance and warmth, underscoring the importance of addressing bias in healthcare settings.
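The abstract also reports fairness across all AI models. The paper's exact fairness methodology is not described here, but one common form of such an audit is to compare a classifier's per-group accuracy and flag large gaps; the sketch below shows that pattern with made-up data and an assumed gap threshold.

```python
# Hedged sketch of a fairness audit: compare per-group accuracy of a
# social-signal classifier and report the largest gap. Data, group
# labels, and the 0.1 threshold are illustrative assumptions.

def group_accuracies(y_true, y_pred, groups):
    """Accuracy of predictions within each demographic group."""
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

def max_accuracy_gap(accs):
    """Largest pairwise accuracy difference across groups."""
    return max(accs.values()) - min(accs.values())

# Toy labels: group "a" is classified perfectly, group "b" is not.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

accs = group_accuracies(y_true, y_pred, groups)
gap = max_accuracy_gap(accs)
print(accs, "unfair" if gap > 0.1 else "fair")
```

An audit like this would be run separately for each signal (dominance, warmth, engagement, interactivity), which is one way the reported per-model fairness and generalizability differences could surface.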

🌍 Impact and Implications

The findings from this study highlight the potential of AI-driven communication assessment tools to enhance clinician awareness of implicit bias. By focusing on social signals, these tools can promote patient-centered and equitable healthcare interactions. Future research should aim to refine these AI models, personalize feedback, and explore their implementation in educational settings, paving the way for improved healthcare delivery.

🔮 Conclusion

This study illustrates the promising role of AI in assessing communication dynamics within healthcare. By leveraging technology to enhance clinician awareness of implicit bias, we can foster more equitable interactions between patients and providers. Continued exploration and refinement of these AI tools will be essential in realizing their full potential in healthcare settings.

💬 Your comments

What are your thoughts on the integration of AI in patient-provider communication? We invite you to share your insights and engage in a discussion! 💬 Leave your comments below or connect with us on social media.


Abstract

OBJECTIVES: Implicit bias perpetuates health care inequities and manifests in patient-provider interactions, particularly nonverbal social cues like dominance. We investigated the use of artificial intelligence (AI) for automated communication assessment and feedback during primary care visits to raise clinician awareness of bias in patient interactions.
MATERIALS AND METHODS: (1) Assessed the technical performance of our AI models by building a machine-learning pipeline that automatically detects social signals in patient-provider interactions from 145 primary care visits. (2) Engaged 24 clinicians to design usable AI-generated communication feedback for their workflow. (3) Evaluated the impact of our AI-based approach in a prospective cohort of 108 primary care visits.
RESULTS: Findings demonstrate the feasibility of AI models to identify social signals, such as dominance, warmth, engagement, and interactivity, in nonverbal patient-provider communication. Although engaged clinicians preferred feedback delivered in personalized dashboards, they found nonverbal cues difficult to interpret, motivating social signals as an alternative feedback mechanism. Impact evaluation demonstrated fairness in all AI models with better generalizability of provider dominance, provider engagement, and patient warmth. Stronger clinician implicit race bias was associated with less provider dominance and warmth. Although clinicians expressed overall interest in our AI approach, they recommended improvements to enhance acceptability, feasibility, and implementation in telehealth and medical education contexts.
DISCUSSION AND CONCLUSION: Findings demonstrate promise for AI-driven communication assessment and feedback systems focused on social signals. Future work should improve the performance of this approach, personalize models, and contextualize feedback, and investigate system implementation in educational workflows. This work exemplifies a systematic, multistage approach for evaluating AI tools designed to raise clinician awareness of implicit bias and promote patient-centered, equitable health care interactions.

Authors: Bedmutha MS, Bascom E, Sladek KR, Tobar K, Casanova-Perez R, Andreiu A, Bhat A, Mangal S, Wood BR, Sabin J, Pratt W, Weibel N, Hartzler AL

Journal: JAMIA Open

Citation: Bedmutha MS, et al. Artificial intelligence-generated feedback on social signals in patient-provider communication: technical performance, feedback usability, and impact. JAMIA Open. 2024;7:ooae106. doi: 10.1093/jamiaopen/ooae106

