Study Overview
Students enrolled in the Doctor of Pharmacy program at the University of Arizona outperformed ChatGPT 3.5, an artificial intelligence language model, in answering therapeutics examination questions.
Key Findings
- ChatGPT scored only 51% overall on the exam questions.
- Performance on application-based questions was notably low at 44%.
- In contrast, pharmacy students achieved a mean score of 82%.
- ChatGPT performed better on factual recall questions (80%) than on case-based questions (45%).
Research Insights
The study, published in Currents in Pharmacy Teaching and Learning, highlights the limitations of AI in clinical scenarios where critical thinking and judgment are essential. Christopher Edwards, PharmD, noted:
“AI has many potential uses in health care and education, and it’s not going away. However, students can excel in exams through diligent study without relying on these tools.”
Study Methodology
The research team evaluated responses to 210 questions from six exams across two pharmacotherapeutics courses. The questions covered various topics, including:
- Nonprescription medications for conditions like heartburn and allergies.
- Advanced topics in cardiology and neurology.
Implications for Education
As discussions continue regarding the role of AI in academic medicine, there is concern that over-reliance on technology could hinder students' development of essential reasoning and critical thinking skills. Both Edwards and Interim Dean Brian Erstad, PharmD, acknowledged that future advancements in AI could alter these outcomes.
Conclusion
This study underscores the importance of traditional education methods in pharmacy training, emphasizing that while AI can assist in learning, it cannot replace the comprehensive understanding and skills developed through rigorous study.