
An official publication of the ACR and the ARP serving rheumatologists and rheumatology professionals


Physicians vs. Artificial Intelligence

Arthritis & Rheumatology  |  February 28, 2024

Background & Objectives

The potential applications of artificial intelligence (AI), including the intersection of machine learning, deep learning and natural language processing, form a rapidly growing field in medicine and healthcare. The impact of AI, specifically large language models (LLMs) and generative AI, on patient care cannot yet be measured, but it may be vast. ChatGPT-4, a large language model, is capable of text generation, language translation, text summarization, question answering, chatbot interaction and automated content generation. Patient education from rheumatologists plays a significant role in the self-management of rheumatic diseases. But when patients have questions, can AI generate accurate, comprehensive answers?

Ye et al. conducted a single-center, cross-sectional survey of rheumatology patients and physicians in Edmonton, Canada, to explore that question. The researchers assessed the quality of AI responses to patient-generated rheumatology questions by having participants rate a series of responses, some of which were generated by an LLM chatbot and some of which were written by physicians.


Methods

Patient questions and physician-generated answers were extracted from the Alberta Rheumatology website, which houses resources for rheumatology patients. Ye et al. attempted to match the length of the AI- and physician-generated responses. Participants completed a one-time questionnaire evaluating typed responses to these real rheumatology patient questions, rating each response's comprehensiveness and readability. Physician participants also evaluated the accuracy of responses on a scale of 1–10 (1 being poor, 10 being excellent).
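The study's actual statistical methods and data are not reproduced in this summary, but the core analytic idea, comparing blinded 1–10 ratings of AI versus physician answers, can be sketched with a simple permutation test. The ratings below are hypothetical, invented purely for illustration:

```python
import random
import statistics

# Hypothetical 1-10 accuracy ratings (illustrative only; not the study's data).
ai_ratings = [6, 5, 7, 4, 6, 5, 6, 7, 5, 4]
physician_ratings = [8, 7, 9, 8, 7, 8, 9, 7, 8, 6]

def mean_diff(a, b):
    """Difference in mean rating: physician minus AI."""
    return statistics.mean(b) - statistics.mean(a)

observed = mean_diff(ai_ratings, physician_ratings)

# Permutation test: repeatedly shuffle the pooled ratings and count how
# often a mean difference at least as extreme arises by chance alone.
random.seed(0)
pooled = ai_ratings + physician_ratings
n = len(ai_ratings)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean_diff(pooled[:n], pooled[n:])) >= abs(observed):
        extreme += 1
p_value = extreme / trials

print(f"observed mean difference: {observed:.2f}, p ~ {p_value:.4f}")
```

A small p-value here would indicate that the gap between the two sets of ratings is unlikely to be due to chance, which is the same logic underlying the significance claims reported in the Results below, whatever test the authors actually used.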

To minimize potential bias from pre-existing attitudes toward AI and chatbots, participants (patients and physicians) were blinded not only to the source of each answer but also, initially, to the study objective, and recruitment materials did not mention the use of AI. Only after evaluating each set of questions and answers were participants told that one answer per question was generated by AI.


Results

Patients' ratings showed no significant difference between AI- and physician-generated responses in comprehensiveness or readability. However, rheumatologists rated AI responses significantly lower than physician responses on comprehensiveness, readability and accuracy. After learning that one answer for each question was AI-generated, physicians correctly identified the AI-generated answers at a higher rate than patients did.

Conclusion

Rheumatology patients rated AI-generated responses to patient questions similarly to physician-generated responses in terms of comprehensiveness, readability and overall preference. However, rheumatologists rated AI responses significantly lower than physician responses, suggesting that LLM-chatbot responses are of poorer overall quality than physician responses, a difference of which patients may not be aware.


Filed under: Practice Support, Technology | Tagged with: AI, Arthritis & Rheumatology, artificial intelligence, patient care, Research


Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies. ISSN 1931-3268 (print). ISSN 1931-3209 (online).