Navigating the Future of AI in Healthcare: 3 State Law Trends to Watch

As artificial intelligence continues to revolutionize healthcare, states and leaders in this space are enacting regulations and guidance to help ensure its responsible, safe, and effective use. 

However, as with state regulation of digital health generally, what is emerging is a patchwork of laws aimed at governing AI in healthcare. This creates a confusing environment for digital health innovators aiming to launch new AI-related offerings on a multijurisdictional basis. 

To help innovators navigate this evolving regulatory landscape, we outline below three key state law trends that healthcare AI companies should consider when developing compliance strategies.


1. AI Use and Risks Disclosure to Consumers

Transparency is paramount when deploying AI in healthcare. Several states have passed laws requiring companies to inform consumers when they will be interacting with AI systems, particularly in healthcare settings:

  • California: Assembly Bill 3030 mandates that healthcare providers disclose to patients when generative AI is used in communications related to clinical information. Effective January 1, 2025, providers must include a disclaimer and offer clear instructions for patients to contact a human healthcare provider regarding the AI-generated message. 

  • Colorado: Starting February 1, 2026, deployers of AI systems intended for consumer interaction must disclose to each consumer that they are interacting with an AI system. Additionally, deployers of high-risk AI systems must implement a risk management policy to govern their deployment.

  • Utah: Senate Bill 149, the Artificial Intelligence Policy Act, requires that persons using generative AI in consumer interactions disclose that the consumer is engaging with AI rather than a human. The law applies across sectors, including healthcare, to ensure transparency in AI-driven communications.

2. Providing Consumers the Right to Opt Out

Empowering consumers with control over their data is a growing focus that has led to legislation in Colorado and Delaware:

  • Colorado: Consumers must be informed of their right to opt out of the processing of personal data, including in connection with decisions made by AI systems.

  • Delaware: Consumers have the right to opt out of the processing of personal data for purposes such as targeted advertising, sale of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects.

3. Protecting Consumers Against Algorithmic Discrimination

A strong AI governance framework should include efforts to ensure the technology is operating without bias, especially in healthcare settings. 

Colorado has implemented state-level requirements around this issue, New York City has done so at the municipal level, and more states may follow suit in 2025.

The Colorado Artificial Intelligence Act requires deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination, defined as unlawful differential treatment or impact based on protected characteristics. 

In New York City, Local Law 144 requires employers using automated employment decision tools to conduct bias audits assessing disparate impact on individuals based on protected categories, ensuring fairness in AI-driven hiring processes.

Download our 8-step process for developing an AI governance plan here.

Key Takeaways

As digital health companies develop and integrate AI in their applications, they should plan for deployment with the following principles in mind:

  1. Disclose AI Usage: Clearly inform consumers when AI is used in their care, detailing its application and associated risks.

  2. Obtain Consent and Offer Opt-Out Options: Secure consumer consent for AI involvement and provide mechanisms to opt out of data processing by AI tools.

  3. Conduct Regular Bias Audits: Implement routine evaluations of AI systems to detect and mitigate algorithmic discrimination, ensuring equitable outcomes.

Healthcare AI companies planning to develop or deploy AI tools across multiple jurisdictions must stay informed about the diverse and evolving state laws governing AI use. To do so, it's essential to work with a legal partner who understands the intersection of digital health and AI. 

To learn more about the work we’re doing at Nixon Law Group to help our clients implement effective AI governance and scale their innovation faster, contact us today.