Study Overview
A recent survey conducted by Duke Health found that patients generally preferred messages generated by artificial intelligence (AI) over those written by human clinicians. That satisfaction dipped slightly, however, when patients were told that AI had been involved in drafting the message.
Key Findings
- The study, published on March 11 in JAMA Network Open, involved over 1,400 participants from the Duke University Health System patient advisory committee.
- Participants rated their satisfaction with responses to three clinical scenarios:
  - Routine medication refill requests (low seriousness)
  - Medication side effect inquiries (moderate seriousness)
  - Potential cancer diagnoses (high seriousness)
- AI-generated messages scored an average of 0.30 points higher than human-drafted responses on a 5-point satisfaction scale.
- When participants were told that AI had been used, satisfaction scores dropped by about 0.1 points, a small but measurable effect of disclosure.
Implications for Healthcare Communication
According to Dr. Anand Chowdhury, the study's senior author, the findings highlight an ongoing challenge for health systems: how openly to disclose their use of AI. He stated:
“There is a desire to be transparent, and a desire to have satisfied patients. If we disclose AI, what do we lose?”
The study suggests that while patients appreciate the efficiency and detail of AI-generated messages, maintaining transparency about AI’s involvement is crucial for sustaining patient trust.
Conclusion
The research underscores the potential of AI to enhance healthcare communication while also easing clinician burnout. By drafting routine messages automatically, AI tools can free healthcare providers to focus more on complex patient care, ultimately improving both provider well-being and patient outcomes.
Further Reading
For more information, refer to the original study: Ethics in Patient Preferences for Artificial Intelligence-Drafted Responses to Electronic Messages.