The ACR has submitted comments to the National Association of Insurance Commissioners (NAIC) in response to its request for input on a proposed state model law addressing the use of artificial intelligence (AI) in the insurance industry. With AI playing an increasingly prominent role in health insurance decisions, the ACR is calling for stronger consumer protections, greater transparency, and equity-focused oversight to ensure that AI does not jeopardize patient access to appropriate, evidence-based care.
Key Recommendations from the ACR
The ACR outlined several specific recommendations for what should be included in the model law:
Mandatory transparency & explainability. Insurers should disclose when and how AI is used in decisions impacting patients. AI-driven decisions—such as coverage denials or premium increases—must be explainable and communicated in understandable terms to patients and providers.
Human oversight & appeals. All AI-generated decisions affecting patient care must be subject to human review. The ACR emphasized the importance of giving patients and providers a clear and fair path to challenge AI-based decisions.
Bias mitigation & equity audits. AI models must be regularly tested for discriminatory outcomes, particularly those that disproportionately impact underserved or vulnerable populations. The ACR called for equity audits that go beyond technical bias detection and include broader social impacts.
Digital literacy & consumer education. The ACR urged the NAIC to include consumer education initiatives in its regulatory framework, noting that most patients and many providers are unaware of how AI affects their care and what rights they have.
Uniform protections regardless of insurer size. While acknowledging that implementation may vary by company size, the ACR insisted that all patients deserve the same level of protection. Scalable implementation strategies, such as phased timelines or tiered reporting, could help smaller insurers comply without creating unequal standards.
Inclusion of third-party vendors. Recognizing that many insurers outsource their AI tools, the ACR strongly recommended that third-party vendors be regulated under the same model law. Excluding them, the ACR warned, would create loopholes that weaken oversight and diminish consumer protections.
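To make the equity-audit recommendation concrete, here is a minimal sketch of one technical check an auditor might run: comparing claim-approval rates across demographic groups against the four-fifths rule borrowed from U.S. employment law. The group labels, sample data and 0.8 threshold are illustrative assumptions, not requirements drawn from the ACR's comments or the proposed model law.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the claim-approval rate for each group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-performing group's rate (the four-fifths rule)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: (group label, claim approved?)
decisions = ([("A", True)] * 90 + [("A", False)] * 10
             + [("B", True)] * 60 + [("B", False)] * 40)

print(flag_disparate_impact(decisions))  # {'B': 0.67}
```

As the ACR's comments emphasize, a check like this is only a starting point; a full equity audit would also examine broader social impacts that simple rate comparisons cannot capture.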
A Rapidly Evolving Risk Landscape
The ACR also noted that existing U.S. laws and regulations are insufficient to govern the complex and evolving risks posed by AI in insurance. Although frameworks such as the Civil Rights Act and HIPAA prohibit overt discrimination, they are less equipped to address proxy discrimination, which can occur when an AI system relies on non-protected data points, such as ZIP code or occupation, that correlate with protected characteristics.
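A minimal sketch of that mechanism, using entirely synthetic data: the pricing rule below never sees group membership, only ZIP code, yet it produces sharply different denial rates by group because residence correlates with group. The group labels, ZIP codes and rates are invented for illustration and do not describe any real insurer's model.

```python
import random

random.seed(0)

def make_person():
    """Synthetic resident: group membership correlates with ZIP code."""
    group = random.choice(["X", "Y"])
    if group == "X":
        zip_code = "10001" if random.random() < 0.9 else "20002"
    else:
        zip_code = "20002" if random.random() < 0.9 else "10001"
    return group, zip_code

def model_denies(zip_code):
    """A 'neutral' rule that never sees group membership, only ZIP code;
    because ZIP code stands in for group, outcomes still diverge by group."""
    return zip_code == "20002"

population = [make_person() for _ in range(10_000)]
for g in ("X", "Y"):
    zips = [z for grp, z in population if grp == g]
    rate = sum(model_denies(z) for z in zips) / len(zips)
    print(f"group {g}: denial rate {rate:.0%}")  # roughly 10% vs. 90%
```

Because the protected attribute never appears as a model input, rules keyed only to overt use of that attribute may never catch the disparity.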
International efforts, such as the EU’s Artificial Intelligence Act, have begun to address AI in healthcare through risk-based classifications, real-world testing and transparency mandates. In the U.S., however, AI regulation in healthcare remains fragmented and limited in scope.
Without clear guardrails, AI can reinforce systemic inequalities and create new forms of harm. The ACR will work with policymakers and stakeholders to ensure these powerful tools are used responsibly and ethically, especially when they directly impact patient care.
Looking Ahead
The ACR commended the NAIC for taking proactive steps to address the role of AI in insurance and expressed interest in ongoing collaboration to ensure that rheumatology patients are protected as these technologies evolve. As a result, the ACR has been invited to participate in NAIC working group meetings and looks forward to providing further feedback as the NAIC develops AI policy solutions and guardrails.
Joseph Cantrell, JD, is the director of state affairs and community relations for the ACR.