🧑🏼‍💻 Research - October 19, 2024

Performance of ChatGPT in providing patient information about upper tract urothelial carcinoma.


⚡ Quick Summary

A recent study evaluated the performance of ChatGPT in answering patient inquiries about upper tract urothelial carcinoma (UTUC). While the AI provided useful general information, its treatment guidance was less adequate, and some responses were judged potentially unsafe.

🔍 Key Details

  • 📊 Study Focus: Patient inquiries about UTUC
  • 🧩 Categories: General information, symptoms and diagnosis, treatment, prognosis
  • ⚙️ Evaluation Criteria: Length, comprehensibility, precision, guideline compliance, safety
  • 🏆 Overall Performance: Average score of 3.93 out of 5

🔑 Key Takeaways

  • 📊 ChatGPT scored highest in providing general information with a mean score of 4.14.
  • 💡 Treatment responses received the lowest mean score of 3.68, indicating potentially misleading information.
  • 👩‍⚕️ Safety concerns were noted, with some responses graded as unsafe by urologists.
  • 🏥 ChatGPT can be useful for basic epidemiological and risk factor information.
  • 🌍 Study conducted by a team of urologists, published in Contemporary Oncology.
  • 🆔 PMID: 39421706.

📚 Background

Upper tract urothelial carcinoma (UTUC) is a rare but serious condition that affects the urinary system. Patients often seek information to understand their diagnosis, treatment options, and prognosis. With the rise of artificial intelligence, tools like ChatGPT are being explored for their potential to provide accessible patient information. However, the accuracy and safety of such AI-generated responses remain critical concerns.

🗒️ Study

The study aimed to assess how well ChatGPT could respond to common patient questions about UTUC. A total of 15 inquiries were categorized into four main areas: general information, symptoms and diagnosis, treatment, and prognosis. Urologists evaluated the AI’s responses based on five criteria, scoring them on a scale from 1 to 5.
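The paper describes entering the questions directly into ChatGPT. For readers who want to reproduce a similar collection step programmatically, the sketch below shows one possible approach; the OpenAI Python client, the model name, and the placeholder questions are assumptions for illustration and are not taken from the study.

```python
# Minimal sketch of programmatic response collection, assuming the
# OpenAI Python SDK; the study itself used the ChatGPT interface, and
# the model name and example questions below are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions_by_category = {
    "general information": ["What is upper tract urothelial carcinoma (UTUC)?"],
    "symptoms and diagnosis": ["What are the typical symptoms of UTUC?"],
    "treatment": ["How is UTUC treated?"],
    "prognosis": ["What is the prognosis for patients with UTUC?"],
}  # placeholders only; the paper's exact 15 inquiries are not reproduced here

responses = {}
for category, questions in questions_by_category.items():
    for question in questions:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # model version is an assumption
            messages=[{"role": "user", "content": question}],
        )
        responses[(category, question)] = completion.choices[0].message.content
```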

📈 Results

Across the 16 completed questionnaires, ChatGPT's responses received the top score of 5 on 336 occasions (28.0% of all individual ratings), and the average overall score was 3.93. Scores varied by category, with general information graded highest at a mean of 4.14 and treatment information lowest at 3.68. Notably, safety concerns were raised, as some responses were deemed unsafe by the urologists.
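The reported overall mean can be cross-checked from the score distribution given in the abstract below (16 questionnaires × 15 questions × 5 criteria = 1,200 individual ratings). A minimal Python check, using only the counts reported by the authors:

```python
# Cross-check of the overall mean from the score counts reported in the
# abstract: 1,200 individual ratings across 16 questionnaires.
score_counts = {5: 336, 4: 527, 3: 268, 2: 53, 1: 16}

total_ratings = sum(score_counts.values())                      # 1200
weighted_sum = sum(score * n for score, n in score_counts.items())
overall_mean = weighted_sum / total_ratings

print(f"Total ratings: {total_ratings}")    # 1200
print(f"Overall mean:  {overall_mean:.2f}") # 3.93
```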

🌍 Impact and Implications

The findings highlight the potential of AI tools like ChatGPT to assist in patient education, particularly in providing basic information about UTUC. However, the study underscores the necessity for caution, especially regarding treatment-related inquiries. As AI continues to evolve, ensuring the accuracy and safety of its outputs will be paramount for its integration into patient care.

🔮 Conclusion

While ChatGPT shows promise in delivering general information about upper tract urothelial carcinoma, its limitations in treatment guidance and safety must be addressed. This study serves as a reminder of the importance of human oversight in AI applications in healthcare. Further research is needed to enhance the reliability of AI-generated patient information.

💬 Your comments

What are your thoughts on the use of AI in providing patient information? Do you believe tools like ChatGPT can be safely integrated into healthcare? 💬 Share your insights in the comments below or connect with us on social media.

Performance of ChatGPT in providing patient information about upper tract urothelial carcinoma.

Abstract

INTRODUCTION: The aim was to evaluate ChatGPT generated responses to patient-important questions regarding upper tract urothelial carcinoma (UTUC).
MATERIAL AND METHODS: Fifteen common inquiries asked by patients regarding UTUC were assigned to 4 categories: general information; symptoms and diagnosis; treatment; and prognosis. These questions were entered into ChatGPT and its responses were recorded. In every answer 5 criteria (adequate length, comprehensible language, precision in addressing the question, compliance with European Association of Urology guidelines and safety of the response for the patient) were assessed by the urologists using a numerical scale of 1-5 (a score of 5 being the best).
RESULTS: Sixteen questionnaires were included. A score of five was assigned 336 times (28.0%); 4 – 527 times (43.9%); 3 – 268 times (22.3%); 2 – 53 times (4.4%); and 1 – 16 times (1.3%). The average overall score was 3.93. Responses to each question received average scores within the range 3.34-4.18. Answers regarding “general information” were graded the highest – mean score 4.14. Artificial intelligence scored the lowest in the “treatment” category – mean score 3.68. A mean score of 4.02 was given for the safety of the response. However, a few urologists considered several answers as unsafe for the patient, by grading them 1 or 2 in this criterion.
CONCLUSIONS: ChatGPT does not provide fully adequate information on UTUC, and inquiries regarding treatment can be misleading for the patients. In particular cases, patients might receive potentially unsafe answers. However, ChatGPT can be used with caution to provide basic information regarding epidemiology and risk factors of UTUC.

Authors: Łaszkiewicz J, Krajewski W, Tomczak W, Chorbińska J, Nowak Ł, Chełmoński A, Krajewski P, Sójka A, Małkiewicz B, Szydełko T

Journal: Contemp Oncol (Pozn)

Citation: Łaszkiewicz J, et al. Performance of ChatGPT in providing patient information about upper tract urothelial carcinoma. Contemp Oncol (Pozn). 2024; 28:172-181. doi: 10.5114/wo.2024.141567

