Ethics Forum: Regarding Chatbots in Rheumatology

Biana Modilevsky, DO, & Kabita Nanda, MD  |  Issue: June 2024  |  June 10, 2024

Chatbots are not a new concept, but they have recently gained popularity and traction. Launched in late 2022, ChatGPT (Chat Generative Pre-trained Transformer) is a web-based platform designed to simulate interactive conversation and generate responses in real time. It has quickly become a tool that provides instantaneous information, often more focused than a Google search.1 Like many of our peers, we quickly became amused and excited at the prospect of using a new digital assistant to optimize our workflow.

A problem familiar to everyone in rheumatology is the constant effort of creating prior authorization requests and, later, tackling the appeal process. In fact, after weeks of fighting with an insurance company over an off-label use of an *insert expensive biologic medication name here* for an *insert rare rheumatologic condition here*, the temptation to have this new robotic assistant draft an appeal letter became incredibly strong, despite templates being institutionally available. Before entering the prompt, however, we had second thoughts, particularly about the ethical dilemma it posed.

Ethical Quandary

Suddenly, new questions arose: How specific could we get? Would the data be stored? Would this be considered a breach of patient autonomy and privacy? By feeding this chatbot information about an individual’s rare diagnosis, are we inadvertently compromising confidentiality and disclosing protected health information (PHI)? To what extent can we safely expose pertinent pieces of the puzzle without unintentionally revealing PHI?

Of the 18 PHI identifiers, the more obvious include name, address, birthdate, medical record number and Social Security number, as well as demographic data (e.g., age, gender, race) that relate to “an individual’s past, present or future physical or mental health or condition.”2,3 Some less intuitive identifiers, however, include geographic information smaller than a state of residence, admission/discharge dates and any unique identifying characteristics, such as comorbidities and treatments previously tried and failed.3,4 The latter are precisely the details needed to customize such a letter while maximizing efficiency.

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established a Privacy Rule that aims to protect health information and patient confidentiality while still permitting its use to support high-value care.2 Clinicians are required to undergo HIPAA training, and that training now needs to address how to avoid privacy breaches when using artificial intelligence (AI)-enabled tools.

Even with the best of intentions, unintentional HIPAA violations occur on a regular basis. Information fed into ChatGPT is not confidential: It is submitted to, and stored on, the servers of the company that owns it, OpenAI, which is not a protected health privacy network.5 Such a disclosure could subject one to legal trouble, so be forewarned.4

A data breach must typically be reported to an enforcement agency within the U.S. Department of Health and Human Services (HHS), with each affected patient case leading to an individualized and costly investigation (in some cases up to $50,000).4,5 The breach must also be disclosed to the affected party and the public.5 One saving grace is that OpenAI does not always use or view the information, and it has procedures to delete accounts and information within 30 days.5

In our humble opinion, it just doesn’t seem worth it. We would probably spend more time thinking of appropriate verbiage to remain compliant, then rephrasing and editing that draft, than we would if we had simply started from scratch.

Insurance Companies Test the Boundaries

Insurance companies, it seems, are not wrestling with their collective consciences. They have been using AI software to cut costs by unapologetically issuing broad denials.

If you feel personally victimized, you’re not alone, and you genuinely may have been. Class action lawsuits were filed against UnitedHealthcare and Cigna in 2023 for automatically denying and overriding certain physician recommendations using flawed AI algorithms, without ever actually opening or reviewing the documents.6 In fact, the algorithms’ error rate was in excess of 90%, with further investigation revealing “that over a period of two months a Cigna doctor can deny 300,000 requests for payment and only spend an average of 1.2 seconds per case.”7,8

After a dozen years of education and training—not to mention the time we put into caring for each individual patient and documenting each cerebral thought—we can then get fraudulently told “no” by a robot (not a peer) in under two seconds? Not only is our time and professional input being ignored and undervalued, but our patients are also experiencing potentially serious delays to appropriate treatment. This is unethical. The current strategy of cutting corners by insurance companies is not new, and the misappropriation of AI may continue unless we shed light on this unethical practice and advocate for our patients and ourselves.

In Sum

We can learn from the insurance companies’ mistakes in using this new platform to improve work efficiency, but we need to be mindful and educated in how to use AI safely and fairly. Could we minimize the input provided while maximizing its utility? Could we double-check that no PHI identifiers are being disclosed, and then have at it? Or do we need to wait until the software is integrated into our electronic health record systems and let the worry of committing a data breach float away from our subconscious?
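
For the technically inclined, that double check could be partially automated. What follows is a minimal sketch, assuming hypothetical regular-expression patterns for a few machine-detectable identifiers (dates, Social Security numbers, record-style numbers); simple pattern matching will miss names, geography and narrative clinical clues, so it is an illustration, not a substitute for human review.

import re

# Hypothetical illustration: flag a few machine-detectable HIPAA identifiers
# in a draft before it is pasted into a chatbot. The patterns and labels are
# assumptions for demonstration, not a validated de-identification tool.
PHI_PATTERNS = {
    "date": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "Social Security number": r"\b\d{3}-\d{2}-\d{4}\b",
    "record-style number": r"\b\d{7,10}\b",
}

def flag_phi(text):
    """Return (label, match) pairs for suspicious spans found in the text."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        for match in re.findall(pattern, text):
            hits.append((label, match))
    return hits

# Entirely fictional example draft
draft = "Patient seen 03/14/2024, MRN 84721936, failed methotrexate."
for label, value in flag_phi(draft):
    print(f"Possible {label}: {value} (remove before submitting)")

A screen like this catches only the low-hanging fruit; the less intuitive identifiers discussed above, such as comorbidities, treatment history and small geographic details, still require a clinician’s eye.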

The bottom line: ChatGPT can be your assistant, but it is not trustworthy enough to keep a secret. So make sure specific personal information is withheld and HIPAA security is maintained.


Biana Modilevsky, DO, is a rheumatology fellow at the University of Arizona Arthritis Center, Tucson.

Kabita Nanda, MD, is an associate professor of pediatrics at Seattle Children’s Hospital and University of Washington School of Medicine.

References

  1. Marr B. A short history of ChatGPT: How we got to where we are today. Forbes. 2023 May 19. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=311bcfc0674f.
  2. Summary of the HIPAA privacy rule. U.S. Department of Health and Human Services. 2022 Oct 19. https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html.
  3. Loyola University Chicago. The 18 HIPAA identifiers. Information Technology Services. 2024. https://www.luc.edu/its/aboutus/itspoliciesguidelines/hipaainformation/the18hipaaidentifiers/.
  4. What are the penalties for HIPAA violations? The HIPAA Journal. https://www.hipaajournal.com/what-are-the-penalties-for-hipaa-violations-7096/#whatconstitutesahipaaviolation.
  5. Kanter GP, Packel EA. Health care privacy risks of AI chatbots. JAMA. 2023 Jul 25;330(4):311–312.
  6. Lopez I. UnitedHealthcare accused of AI use to wrongfully deny claims (1). Bloomberg Law. 2023 Nov 14. https://news.bloomberglaw.com/health-law-and-business/unitedhealthcare-accused-of-using-ai-to-wrongfully-deny-claims.
  7. Case 0:23-cv-03514. U.S. District Court for the District of Minnesota. 2023 Nov 14. https://aboutblaw.com/bbs8.
  8. Rucker P. How Cigna saves millions by having its doctors reject claims without reading them. ProPublica. 2023 Mar 25. https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims.
