
An official publication of the ACR and the ARP serving rheumatologists and rheumatology professionals


The Pros & Cons of AI

Thomas R. Collins  |  November 5, 2025

CHICAGO—Artificial intelligence (AI) is becoming increasingly sophisticated for use in rheumatology and medicine more broadly, offering clinicians a staggering number of options for deploying it. From aiding diagnosis to processing prior authorizations to recruiting for clinical trials, AI can complete common, simple tasks quickly, ease the cognitive demand of more challenging work and perform tasks that would otherwise be impossible, said Jeffrey Curtis, MD, MS, MPH, endowed professor of medicine at the University of Alabama at Birmingham, in a session at ACR Convergence 2025.

In a counterargument during the session, Jinoos Yazdany, MD, MPH, endowed professor of medicine at the University of California, San Francisco, warned that AI, although helpful, can amplify bias in the practice of medicine and suffers from meager regulatory oversight and a lack of transparency.


How AI Is Being Used in Medicine

Jeffrey Curtis, MD, MS, MPH

OpenEvidence, an AI-enabled medical search platform, is one of the most used AI tools, and typically performs quite well, Dr. Curtis said.

“You can just ask [about] a patient’s scenario, and it gives you both clinical reasoning and literature search,” he said. “It rarely hallucinates what that literature is,” unlike other large language models. However, it still occasionally hallucinates, he added.


“AI is not necessarily artificial intelligence as much as augmented intelligence—it reminds you of things that you probably already knew and would have eventually thought of but can be very helpful as your thought partner,” Dr. Curtis said.

AI tools can help process prior authorizations, but such tools work best when integrated with the electronic medical record so that information, such as ICD-10 codes, indications and treatment history, can be extracted.

He warned that “of course insurance companies are using AI against us, and they have much deeper pockets.” He noted a class-action suit against Cigna over its use of AI for quick claim denials, but he held out hope for a “more rational AI system that has fair balance to reflect all stakeholder views,” suggesting that the fear of large legal settlements may motivate insurance companies to come to the table.

AI can also be used to help put together clinical trials, evaluating eligibility criteria, defining cohorts, designing studies and recruiting patients.

Physicians have also found AI helpful for drafting responses to portal messages from patients, Dr. Curtis said. One randomized trial found that an AI tool did not actually save time on this task because of the time needed to review the AI-drafted responses. However, physicians still tended to like the tool, in part because—at the end of a long day—it can be hard to be empathetic when responding to patient queries, and AI can help draft suitable responses, Dr. Curtis said.1

AI is even being used to create patient phenotype clusters derived from data in the electronic medical records, enabling insights into prognosis and management.2

“There’s lots of hype with AI,” Dr. Curtis said, “but, used judiciously, there’s huge potential for us in medicine—and in rheumatology, specifically.”

Concerns About AI

Jinoos Yazdany, MD, MPH

Dr. Yazdany said she sees parallels between AI and other areas of medicine in which adoption was rushed before adequate evidence had been established.

“I will remind you about opioids. They were also introduced as a highly effective treatment for acute pain, they were highly marketable,” she said. “And as it turned out, they were highly addictive, and they flooded our clinics before we fully understood their long-term impacts or how corporate interests would exploit them.”

She pointed to an algorithm that was used to identify patients with higher levels of medical need so that they could receive more resources to meet those needs. The algorithm underprioritized patients who are Black because it was using healthcare spending as a proxy for need. Because patients who are Black tend to have less access to care, there was less spending for them, and the algorithm interpreted this factor as less need.3

“It’s not just a tool, like a stethoscope,” Dr. Yazdany said. “It’s a force amplifier. It’s taking whatever we feed it, our data—whether that’s good or bad—our assumptions, our biases—and it’s magnifying them.”

An AI model applied to chest X-ray datasets systematically underdiagnosed people who were young, who were women and who were Black or Hispanic because the dataset did not adequately represent these groups, Dr. Yazdany said.4

Additionally, reliance on AI can cause critical thinking skills to erode. One study found that AI improved detection of polyps during colonoscopies. It also found that, when AI was unavailable, clinicians detected polyps at a lower rate than they had before AI was introduced.5

“It was as if their own skill had dulled because they were used to AI doing the heavy lifting for them,” Dr. Yazdany said.

Another study looking at how AI is used for electronic prescribing found that sometimes correct alerts fired, sometimes incorrect alerts fired and sometimes no alerts fired at all. The correct alerts cut medication errors by a substantial margin, but the incorrect alerts increased errors by an even wider margin.

A Lack of Regulation

Another concern, Dr. Yazdany said, is that AI tools are typically not federally regulated, because the companies that create them say they are not intended to be used for direct clinical decision making.

“This is giving tech companies free rein to deploy unregulated software in our clinics, tools that do impact us and our patients,” she said.

She said that one of her patients, who had severe lupus nephritis, was taking mycophenolate mofetil and tacrolimus. She started having serious side effects from the tacrolimus, so Dr. Yazdany told the patient to stop taking it and switch to an available alternative. But the AI-powered scribe that automatically produces notes from the clinical encounter misinterpreted what was said. It documented that tacrolimus was to be continued.

“What if she had read the note?” Dr. Yazdany said. “If something bad had happened, the liability would be entirely on me. The AI-scribe company says their device is not for medical decision making. They’re entirely shielded. That should make every one of us pause.”

Both Dr. Curtis and Dr. Yazdany called for guardrails to ensure that AI is improving healthcare rather than hurting it. Before clinicians put AI to use, they should review the evidence supporting it, Dr. Yazdany said. “Review the data just like you would any clinical trial.”


Thomas R. Collins is a freelance medical writer based in Florida.

References

  1. Tai-Seale M, Baxter SL, Vaida F, et al. AI-generated draft replies integrated into health records and physicians’ electronic communication. JAMA Netw Open. 2024 Apr 1;7(4):e246565.
  2. Kalweit M, Burden AM, Boedecker J, et al. Patient groups in rheumatoid arthritis identified by deep learning respond differently to biologic or targeted synthetic DMARDs. PLoS Comput Biol. 2023 Jun 2;19(6):e1011073.
  3. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447–453.
  4. Seyyed-Kalantari L, Zhang H, McDermott MBA, et al. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med. 2021 Dec;27(12):2176–2182.
  5. Budzyń K, Romańczyk M, Kitala D, et al. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. Lancet Gastroenterol Hepatol. 2025 Oct;10(10):896–903.

 


Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies. ISSN 1931-3268 (print). ISSN 1931-3209 (online).