Quick Summary
A recent pilot study evaluated the use of ChatGPT for obtaining medical histories in a pediatric emergency department, demonstrating high usability and satisfaction among participants. The findings suggest that this AI-driven approach can enhance patient engagement and streamline documentation processes.
Key Details
- Participants: 31 patients with Emergency Severity Index scores 3 to 5
- Technology: HIPAA-compliant instance of ChatGPT-4
- Methodology: Mixed-methods pilot study collecting histories in parallel with usual care
- Rating Scale: 11-point scale for usability, satisfaction, and quality
Key Takeaways
- High Usability: Median usability score of 10 (IQR 8 to 10).
- Participant Satisfaction: Median satisfaction score of 8 (IQR 7 to 10).
- Quality Ratings: Overall quality rated at a median of 9 (IQR 8 to 10).
- Physician Review: Favorable ratings across accuracy, completeness, efficiency, readability, and overall satisfaction.
- No Hallucinations: Clinicians noted no hallucinations in the final transcripts.
- Noted Limitation: Summaries sometimes lacked important features of prior visits.
- Positive Feedback: Participants appreciated the ease of use and objectivity of the AI system.
- Potential Benefits: Could reduce documentation burden and enhance patient engagement.

Background
In the fast-paced environment of emergency departments, obtaining accurate medical histories is crucial yet often challenging. Traditional methods can be time-consuming and may not fully engage patients or caregivers. The integration of artificial intelligence in this process presents an innovative solution, potentially improving efficiency and patient satisfaction.
Study
This pilot study was conducted in a pediatric emergency department, where 31 patients with varying severity scores participated. Utilizing a HIPAA-compliant version of ChatGPT-4, the study aimed to assess the feasibility and acceptability of AI-driven history taking. Participants rated their experiences, while pediatric emergency physicians evaluated the generated summaries for various quality metrics.
Results
The results were promising: participants reported a median usability score of 10, and satisfaction and quality ratings were also notably high, indicating that the AI-generated histories were perceived as accurate and helpful. Physician reviewers echoed these sentiments, giving favorable ratings across all five assessed domains, with readability rated highest; they did note, however, that the summaries sometimes lacked important features of prior visits.
Impact and Implications
The implications of this study are significant. By leveraging AI for early retrieval of medical histories, emergency departments could see a reduction in documentation burdens, allowing healthcare providers to focus more on patient care. This approach not only enhances patient engagement but also supports a more streamlined triage process, ultimately improving the overall efficiency of emergency care.
Conclusion
This pilot study highlights the feasibility and acceptability of using AI-driven tools like ChatGPT in emergency settings. The positive feedback from both patients and physicians suggests a promising future for AI in healthcare, particularly in enhancing patient history collection. Continued research and development in this area could lead to broader applications and improved patient outcomes.
A Pilot Study to Evaluate Artificial Intelligence-Driven Early Retrieval of Medical Histories in the Emergency Department.
Abstract
STUDY OBJECTIVE: To assess the feasibility and acceptability of using ChatGPT to obtain histories of present illnesses directly from patients or caregivers in a pediatric emergency department waiting room in a pilot study.
METHODS: In a prospective mixed-methods pilot study, patients (n=31) with Emergency Severity Index scores 3 to 5 used a HIPAA-compliant instance of ChatGPT-4 to generate histories of present illness, which were collected in parallel with usual care and were not used to inform clinical decision-making. Participants used an 11-point scale to rate their experiences around usability and satisfaction and rated the quality of the generated history of present illness. Thematic analysis of open-ended responses was performed. Pediatric emergency physicians reviewed summaries for accuracy, completeness, efficiency, readability, and overall satisfaction, using an 11-point rating scale.
RESULTS: Participants reported high usability (median 10, interquartile range [IQR] 8 to 10), satisfaction (median 8, IQR 7 to 10), and overall quality of the summary history of present illness (median 9, IQR 8 to 10). Participants particularly commented on the ease of use, the objectivity of the system, and the accuracy of the summary. Physician reviewers gave favorable ratings across all 5 domains of accuracy, completeness, efficiency, readability, and overall satisfaction, with readability rated highest (median 9, IQR 7 to 9) and all other domains at a median of 8 out of 10. Clinicians noted that the summaries sometimes lacked important features of prior visits. They noted no hallucinations in the final transcripts.
CONCLUSION: This pilot study of ChatGPT-enabled patient history taking in the pediatric emergency department waiting room was feasible, well accepted, and produced accurate, complete, readable, and efficient summaries. This approach has potential to reduce documentation burden, enhance patient engagement, and support more streamlined triage.
Authors: Morley-Fletcher A, Raghavan VR, Geanacopoulos AT, Maher M, Waltzman M, Barak Corren Y, Fine AM
Journal: Ann Emerg Med
Citation: Morley-Fletcher A, et al. A Pilot Study to Evaluate Artificial Intelligence-Driven Early Retrieval of Medical Histories in the Emergency Department. Ann Emerg Med. 2026; (unknown volume):(unknown pages). doi: 10.1016/j.annemergmed.2026.01.022