
An official publication of the ACR and the ARP serving rheumatologists and rheumatology professionals


Ethics Forum: Regarding Chatbots in Rheumatology

Biana Modilevsky, DO, & Kabita Nanda, MD  |  Issue: June 2024  |  June 10, 2024

A data breach must typically be reported to an enforcement agency within the U.S. Department of Health & Human Services (HHS), and each affected patient case can lead to an individualized and costly investigation (in some cases up to $50,000).4,5 The breach must also be disclosed to the affected party and to the public.5 One saving grace is that OpenAI does not always use or view the information, and it has procedures to delete accounts and information within 30 days.5

In our humble opinion, it just doesn’t seem worth it. We would probably spend more time crafting verbiage that stays within compliance, then rephrasing and editing the chatbot’s draft, than we would if we simply started from scratch.


Insurance Companies Test the Boundaries

It doesn’t seem that insurance companies are quibbling with their collective conscience. They have been using AI software to cut costs by unapologetically issuing broad denials.

If you feel personally victimized, you’re not alone, and you genuinely may have been. Class action lawsuits were filed against UnitedHealthcare and Cigna in 2023 for automatically denying and overriding certain physician recommendations using flawed AI algorithms, without ever actually opening or reviewing the supporting documents.6 In fact, the error rate exceeded 90%, and a further investigation revealed “that over a period of two months a Cigna doctor can deny 300,000 requests for payment and only spend an average of 1.2 seconds per case.”7,8


After a dozen years of education and training—not to mention the time we put into caring for each individual patient and documenting each cerebral thought—we can then get fraudulently told “no” by a robot (not a peer) in under two seconds? Not only is our time and professional input being ignored and undervalued, but our patients are also experiencing potentially serious delays to appropriate treatment. This is unethical. The current strategy of cutting corners by insurance companies is not new, and the misappropriation of AI may continue unless we shed light on this unethical practice and advocate for our patients and ourselves.

In Sum

We can learn from the insurance companies’ mistakes in using this new platform to improve work efficiency, but we need to be mindful and educated in how to use AI safely and fairly. Could we minimize the input provided while maximizing its utility? Could we double check that no components of HIPAA are being disclosed, and then have at it? Or do we need to wait until the software is integrated into our electronic health systems and let the worry of committing a data breach float away from our subconscious?
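The second question above—double-checking that no protected health information (PHI) leaves the building before text is pasted into a chatbot—could at least be partially automated. The sketch below is our own illustration in Python, not a compliance tool: it scrubs a few obvious identifier patterns (dates, phone numbers, Social Security numbers, emails, medical record numbers) with regular expressions. Real HIPAA Safe Harbor de-identification covers 18 identifier categories, including free-text names and geographic detail, which simple pattern matching alone cannot reliably catch.

```python
import re

# Illustrative only: placeholder tokens and patterns are our own invention.
# This is NOT a substitute for formal Safe Harbor de-identification.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[MRN]": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace matched identifier patterns with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

# Hypothetical note text for demonstration:
note = "Pt seen 6/10/2024, MRN: 483920, call 206-555-0199 re: methotrexate."
print(scrub(note))
# → Pt seen [DATE], [MRN], call [PHONE] re: methotrexate.
```

Even a crude pass like this makes the point: the clinical substance of a question (a drug, a lab trend, a differential) rarely requires the identifiers, so stripping them before submission costs little and narrows the breach surface considerably.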


