
An official publication of the ACR and the ARP serving rheumatologists and rheumatology professionals


Ethics Forum: The Current Landscape of Artificial Intelligence in Medicine

Jeanne Gosselin, MD  |  Issue: May 2024  |  May 6, 2024

As we enter a new era of AI, regulatory frameworks governing its use must contend with systems that are more powerful and more opaque than ever. The machine learning algorithms regulated by the FDA are trained on labeled data to perform a specific task via a supervised learning approach, whereas foundation models are trained via self-supervised learning on unlabeled data.11 In the case of clinical language models (CLaMs) and foundation models for electronic medical records (FEMRs), those unlabeled data are biomedical text and patient medical histories from the EHR, respectively.12 A foundation model may comprise hundreds of billions of parameters, but there is no labeled ground truth as there is in traditional machine learning, and no specific task to learn up front.12
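The distinction between the two training paradigms can be made concrete with a deliberately tiny sketch. The data, labels and function names below are invented for illustration only; real clinical models use neural networks at vastly larger scale, but the core contrast is the same: supervised learning is given its labels and its task up front, while self-supervised learning derives its own prediction targets from the raw data.

```python
from collections import Counter, defaultdict

# --- Supervised learning (toy): the task and labels are fixed in advance ---
# Hypothetical examples: each snippet of note text carries a human-assigned label.
labeled_data = [
    ("joint pain stiffness", "RA"),
    ("uric acid flare", "gout"),
    ("joint pain swelling", "RA"),
]

def train_supervised(examples):
    """Count word-to-label associations; the label is supplied by a human."""
    counts = defaultdict(Counter)
    for text, label in examples:
        for word in text.split():
            counts[word][label] += 1
    return counts

def predict(model, text):
    """Vote across the words of a new snippet using the learned counts."""
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else None

# --- Self-supervised learning (toy): labels come from the data itself ---
# No human labels: hide the next word and learn to predict it (a bigram model).
unlabeled_corpus = ["patient reports joint pain", "patient reports fatigue"]

def train_self_supervised(corpus):
    """The 'label' for each word is simply the word that follows it."""
    bigrams = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            bigrams[prev][nxt] += 1
    return bigrams

sup = train_supervised(labeled_data)
ssl = train_self_supervised(unlabeled_corpus)
print(predict(sup, "joint pain"))           # task was defined by the labels: "RA"
print(ssl["patient"].most_common(1)[0][0])  # task was derived from raw text: "reports"
```

The supervised model can only ever do the one task its labels define; the self-supervised model learns general structure from unlabeled text, which is why foundation models can later be adapted to many downstream tasks.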

Training of clinical models requires massive datasets of patient information, shared among institutions and globally, presenting new challenges for data protection and appropriate use.13 In the case of most FEMRs, model weights, the learned parameters of a neural network that determine how information flows through it, are not widely available to the research community, forcing others to retrain models on new EMR data to validate performance.12


Bommasani et al. point out that accessibility is threatened by the massive scale of foundation models and the vast resources required to interrogate them, resources often unavailable to academic institutions, and by the concentration of foundation model development in the hands of a few large players in big tech.14

Assessing performance, algorithmic bias and reliability, ensuring privacy and safety, and tracking a slew of other important metrics will become increasingly difficult in the era of foundation models.


As of October 2023, the FDA had not approved any device using generative AI or foundation models, including LLMs. The regulatory landscape is well outside the scope of this article, but it is fair to say that innovation in AI is outpacing oversight and regulation, both nationally and globally.

There are ethical considerations and potential concerns at every juncture in the application of AI in medicine: some are already well defined, as exemplified by our current ML-enabled devices; others are only now emerging; and still more remain to be discovered in the rapidly changing landscape of generative AI.

In this article, we examine just a few of the many ethical considerations surrounding the current and projected use of AI in medicine, focusing on the bioethical principle of autonomy. Further discussion of ethics in medical AI is warranted, with an exploration of justice, including bias, fairness and equity, of great importance.


Filed under: Ethics, Practice Support, Technology | Tagged with: artificial intelligence, Diagnosis, Ethics Forum


Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies. ISSN 1931-3268 (print). ISSN 1931-3209 (online).