⚡ Quick Summary
A recent study evaluated the accuracy of ChatGPT in answering common patient questions about HPV+ oropharyngeal carcinoma (OPC). The findings revealed that while ChatGPT provided accurate responses to most questions about HPV contraction, transmission, and treatment, it performed poorly on evolving therapies and diagnostics such as circulating tumor DNA testing, highlighting the need for caution when using AI for medical inquiries.
🔍 Key Details
- 📊 Study Design: Prospective, multi-institutional study
- 🏥 Institutions: High-volume centers performing >50 transoral robotic surgery cases annually
- 🔍 Data Source: 100 discussion threads from the American Cancer Society’s Cancer Survivors Network
- 🤖 AI Model: ChatGPT 3.5
- 👨‍⚕️ Participants: 8 fellowship-trained head and neck oncologic surgeons
🔑 Key Takeaways
- 📈 Accuracy Rates: ChatGPT’s answers were rated clinically accurate 84.4% of the time for questions about HPV contraction and 87.5% of the time for questions about treatment of HPV+ OPC.
- 💉 HPV Vaccine Responses: Accuracy was lower at 62.5% for vaccine-related questions.
- 🧬 Tumor DNA Testing: Only 12.5% of surgeons rated ChatGPT’s responses about circulating tumor DNA testing as accurate or consistent with consensus.
- ⚠️ Caution Advised: ChatGPT’s performance was notably poor for advanced therapies and diagnostics.
- 👩‍⚕️ Surgeon Consultation: Patients are encouraged to consult their surgeons for accurate and up-to-date medical advice.
- 🌐 Importance of AI: ChatGPT can be a useful tool for augmenting patient understanding of health topics.
📚 Background
The emergence of large language models (LLMs) like ChatGPT has transformed how patients access health information. However, the accuracy of these AI-driven responses, especially concerning complex medical conditions such as HPV+ oropharyngeal carcinoma, remains a critical concern. Understanding the limitations of AI in healthcare is essential for both patients and providers.
🗒️ Study
This study aimed to assess the reliability of ChatGPT in addressing common patient inquiries about HPV+ OPC. By analyzing responses to the 11 most frequently asked questions, researchers sought to determine how well the AI’s answers aligned with established medical consensus. The study involved collaboration among multiple institutions and feedback from experienced oncologic surgeons.
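The questions were submitted to ChatGPT one at a time and the answers recorded for surgeon review. For readers who want to reproduce a similar serial-query workflow programmatically, a minimal sketch using the OpenAI Python client and the gpt-3.5-turbo model is shown below; this is an illustration under stated assumptions, not the study’s actual procedure (the study queried ChatGPT 3.5 through its public interface, and the question strings here are placeholders rather than the 11 study questions).

```python
# Minimal sketch: serially submit patient questions to a GPT-3.5 model and record answers.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder questions for illustration; the study's 11 questions came from
# the Cancer Survivors Network discussion board.
questions = [
    "How did I contract HPV?",
    "How is HPV-positive oropharyngeal cancer treated?",
    "Should my family members get the HPV vaccine?",
]

answers = {}
for question in questions:
    # Each question is sent as its own single-turn conversation so that
    # earlier answers cannot influence later ones.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    answers[question] = response.choices[0].message.content

for question, answer in answers.items():
    print(f"Q: {question}\nA: {answer}\n")
```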
📈 Results
The results indicated that ChatGPT’s responses were clinically accurate in a majority of cases: 84.4% for questions about HPV contraction, 90.6% for HPV transmission, and 87.5% for treatment of HPV+ OPC. Performance was weaker for questions about the HPV vaccine (62.5% rated accurate) and dropped sharply for circulating tumor DNA testing, where only 12.5% of surgeons found the responses accurate or consistent with consensus.
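The paper reports only aggregate percentages, but each figure is simply the share of binary surgeon ratings that marked a response as accurate. As a hypothetical illustration of how a value such as 84.4% can arise, the tally below assumes 8 surgeons each rating 4 contraction-related questions, with 27 of the 32 ratings marked accurate; the rating values are invented for the example and are not the study’s data.

```python
# Hypothetical tally of binary accuracy ratings (1 = rated clinically accurate, 0 = not).
# Rows are questions in the "HPV contraction" category; columns are the 8 surgeons.
# These values are illustrative only, not the study's raw data.
contraction_ratings = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1, 0, 1],
]

total_ratings = sum(len(row) for row in contraction_ratings)  # 32 ratings
accurate = sum(sum(row) for row in contraction_ratings)       # 27 rated accurate
print(f"Rated accurate: {accurate / total_ratings:.1%}")      # 84.4%
```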
🌍 Impact and Implications
The findings of this study underscore the potential of AI tools like ChatGPT to enhance patient education and understanding of health issues. However, they also highlight the necessity for caution when relying on AI for complex medical information. As AI continues to evolve, it is crucial for patients to seek guidance from qualified healthcare professionals to ensure they receive accurate and up-to-date medical advice.
🔮 Conclusion
This study illustrates both the promise and limitations of using AI in healthcare communication. While ChatGPT can provide valuable insights into common patient questions, its shortcomings in addressing advanced medical topics necessitate a careful approach. Patients should view AI as a supplementary resource rather than a replacement for professional medical advice. Continued research and development in this area will be vital for improving the accuracy and reliability of AI in healthcare.
💬 Your comments
What are your thoughts on the use of AI like ChatGPT in healthcare? Do you believe it can effectively support patient education? 💬 Share your insights in the comments below or connect with us on social media.
Evaluating the Accuracy of ChatGPT in Common Patient Questions Regarding HPV+ Oropharyngeal Carcinoma.
Abstract
OBJECTIVES: Large language model (LLM)-based chatbots such as ChatGPT have been publicly available and increasingly utilized by the general public since late 2022. This study sought to investigate ChatGPT responses to common patient questions regarding human papillomavirus (HPV)-positive oropharyngeal cancer (OPC).
METHODS: This was a prospective, multi-institutional study, with data collected from high-volume institutions that perform >50 transoral robotic surgery cases per year. The 100 most recent discussion threads including the term “HPV” on the American Cancer Society’s Cancer Survivors Network’s Head and Neck Cancer public discussion board were reviewed. The 11 most common questions were serially queried to ChatGPT 3.5, and the answers were recorded. A survey was distributed to fellowship-trained head and neck oncologic surgeons at 3 institutions to evaluate the responses.
RESULTS: A total of 8 surgeons participated in the study. For questions regarding HPV contraction and transmission, ChatGPT answers were scored as clinically accurate and aligned with consensus in the head and neck surgical oncology community 84.4% and 90.6% of the time, respectively. For questions involving treatment of HPV+ OPC, ChatGPT was clinically accurate and aligned with consensus 87.5% and 91.7% of the time, respectively. For questions regarding the HPV vaccine, ChatGPT was clinically accurate and aligned with consensus 62.5% and 75% of the time, respectively. When asked about circulating tumor DNA testing, only 12.5% of surgeons thought responses were accurate or consistent with consensus.
CONCLUSION: ChatGPT 3.5 performed poorly with questions involving evolving therapies and diagnostics; thus, caution should be used when using a platform like ChatGPT 3.5 to assess the use of advanced technology. Patients should be counseled on the importance of consulting their surgeons to receive accurate and up-to-date recommendations, and to use LLMs to augment their understanding of these important health-related topics.
Authors: Bellamkonda N, Farlow JL, Haring CT, Sim MW, Seim NB, Cannon RB, Monroe MM, Agrawal A, Rocco JW, McCrary HC
Journal: Ann Otol Rhinol Laryngol
Citation: Bellamkonda N, et al. Evaluating the Accuracy of ChatGPT in Common Patient Questions Regarding HPV+ Oropharyngeal Carcinoma. Ann Otol Rhinol Laryngol. 2024; 133:814-819. doi: 10.1177/00034894241259137