AI is even being used to create patient phenotype clusters derived from data in the electronic medical records, enabling insights into prognosis and management.2
“There’s lots of hype with AI,” Dr. Curtis said, “but, used judiciously, there’s huge potential for us in medicine—and in rheumatology, specifically.”
Concerns About AI
Dr. Yazdany said she sees parallels between AI and other medical innovations whose adoption was rushed before adequate evidence was established.
“I will remind you about opioids. They were also introduced as a highly effective treatment for acute pain, they were highly marketable,” she said. “And as it turned out, they were highly addictive, and they flooded our clinics before we fully understood their long-term impacts or how corporate interests would exploit them.”
She pointed to an algorithm that was used to identify patients with higher levels of medical need so that they could receive more resources to meet those needs. The algorithm underprioritized patients who are Black because it was using healthcare spending as a proxy for need. Because patients who are Black tend to have less access to care, there was less spending for them, and the algorithm interpreted this factor as less need.3
“It’s not just a tool, like a stethoscope,” Dr. Yazdany said. “It’s a force amplifier. It’s taking whatever we feed it, our data—whether that’s good or bad—our assumptions, our biases—and it’s magnifying them.”
An AI model applied to chest X-ray datasets systematically underdiagnosed people who were young, who were women and who were Black or Hispanic because the dataset did not adequately represent these groups, Dr. Yazdany said.4
Additionally, reliance on AI can cause critical thinking skills to erode. One study found that AI improved detection of polyps during colonoscopies. It also found that, when AI was unavailable, clinicians detected polyps at a lower rate than they had before AI was introduced.5
“It was as if their own skill had dulled because they were used to AI doing the heavy lifting for them,” Dr. Yazdany said.
Another study of AI-assisted electronic prescribing found that the alerts generated were sometimes correct, sometimes wrong and sometimes absent altogether. Correct alerts cut medication errors by a substantial margin, but incorrect alerts increased errors by an even wider margin.