AI in Healthcare 2025: Navigating New Frontiers in Innovation and Regulation

2024 was a pivotal year for AI in healthcare. Venture investment skyrocketed, partnerships between leading AI startups and health systems took shape, and policymakers across the globe grappled with AI’s implications for consumer safety and algorithmic transparency. Emerging AI technologies drove improvements in clinical outcomes and administrative efficiencies, while developers worked on applications to expand generative AI in clinical settings, integrate AI to drive value-based care, and incorporate AI into digital therapeutics for mental health and chronic conditions.

This year, we expect AI to play an increasingly critical role in value-based care. The rapid growth in investment and adoption of AI in healthcare in 2024 brings a new wave of legal and regulatory developments. As we kick off 2025, those deploying AI in healthcare must be cognizant of new compliance requirements under federal and state law.

The Big Picture: An Evolving Regulatory Landscape

Federal Actions on AI in Healthcare

Prompted by Executive Order 14110, which established a government-wide effort to guide responsible AI development and deployment, federal agencies took action. The Department of Health and Human Services (HHS), through various sub-agencies, published several novel AI rules aimed at ensuring patient safety. For example, the Food and Drug Administration (FDA) issued final Predetermined Change Control Plan (PCCP) guidance and proposed marketing submission guidance for AI technologies.

Adding to the mix, the Office of the National Coordinator for Health Information Technology (ONC) and the Office for Civil Rights (OCR) published industry-specific regulations, including the HTI-1 Final Rule and the Section 1557 Final Rule, respectively. Meanwhile, the Federal Trade Commission (FTC) launched an agency-wide compliance effort called Operation AI Comply and issued numerous enforcement actions to enhance consumer protection for AI services. Each of these final rules and policy decisions is discussed in more detail below.

State-Level Activity Regulating AI

States also took action. California and Utah opted to directly regulate the use of AI in the healthcare industry, while Colorado opted to regulate high-risk AI systems through the Colorado AI Act. Nearly every state proposed legislation relating to the use of AI in high-risk domains (like healthcare and employment decisions). Organizations should expect another high-volume year of state regulatory efforts, as the impact of a new presidential administration on federal policy remains to be seen.

Want to stay up to date on the latest statewide regulatory changes? Subscribe to our Telehealth and Virtual Care newsletter on LinkedIn!

International Frameworks for AI

In the international arena, the European Union (EU) adopted the EU AI Act, a risk-based framework that imposes disclosure and governance obligations on developers of AI systems. The EU AI Act applies to U.S. companies delivering services to EU citizens under its extraterritoriality provisions.

Zooming In: Federal Healthcare Developments

Through the ONC and OCR, HHS finalized two regulatory frameworks to ensure algorithmic transparency and combat algorithmic discrimination.

The ONC adopted the HTI-1 Final Rule, which requires Certified Electronic Health Record Technology (CEHRT) developers that supply certain AI technologies (specifically, “decision support intervention (DSI)” technologies) to implement risk management and disclosure protocols. Additionally, OCR finalized the Section 1557 Final Rule, which prohibits discriminatory practices by AI tools based on race, color, national origin, sex, age, or disability in specified health programs or activities.

The FDA also published guidance on Predetermined Change Control Plans (PCCPs) for regulated devices, describing how device manufacturers should inform the FDA of planned changes to AI solutions and how those modifications will be assessed. In addition, the FDA issued a Draft Guidance on marketing submissions for AI technologies, which includes best practices for submissions, data stewardship, model performance and training, and a template model card.

Zooming In: State Legislative Activity

Kicking things off, Utah passed the Utah AI Policy Act, which, effective May 1, 2024, requires deployers of AI systems in regulated professions to disclose to consumers that they are interacting with an AI system and to provide an opt-out mechanism.

Shortly thereafter, Colorado passed the Colorado AI Act, which, effective January 1, 2026, imposes comprehensive governance and disclosure obligations on developers and deployers of high-risk AI systems, a category that includes many AI systems deployed in healthcare settings.

Towards the end of 2024, California passed numerous bills relating specifically to generative AI. Two are noteworthy from a healthcare perspective. Effective January 1, 2025, AB 3030 requires healthcare providers to provide a disclaimer and clear instructions describing how a patient may contact a human healthcare provider when using generative AI to communicate “patient clinical information.” Also effective January 1, 2025, SB 1120 (known as the “Physicians Make Decisions Act”) places limits on how health insurance companies can use AI to review and deny claims.

Key Concepts for 2025: A Roadmap for Healthcare AI Developers and Deployers

Strengthened AI Transparency Requirements

What’s driving it? Federal and state regulatory requirements, public policy, and consumer demand.

What can healthcare AI developers and deployers do today?

  • Review your Terms of Use and Privacy Policy and explore drafting a model card to ensure that your model is transparent in both B2B and B2C contexts.

  • Refine your patient or user journey to ensure compliance with state and federal law (e.g., legally sufficient disclaimers and opt-out mechanisms).

  • Conduct bias and fairness audits to mitigate nondiscrimination risk from a litigation and regulatory enforcement perspective.
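As an illustration only (not a prescribed legal standard), a bias and fairness audit like the one suggested above can start with something as simple as comparing favorable-outcome rates across protected groups. The sketch below uses hypothetical data and field names (`group`, `approved`) and applies the familiar "four-fifths" disparate impact heuristic as a screening threshold:

```python
from collections import defaultdict

def disparate_impact(records, group_key="group", outcome_key="approved"):
    """Compute per-group favorable-outcome rates and the disparate impact
    ratio (lowest group rate divided by highest group rate)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for r in records:
        counts[r[group_key]][1] += 1
        if r[outcome_key]:
            counts[r[group_key]][0] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit data: model decisions tagged with a protected attribute.
decisions = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20 +
    [{"group": "B", "approved": True}] * 60 + [{"group": "B", "approved": False}] * 40
)
rates, ratio = disparate_impact(decisions)
print(rates, round(ratio, 2))  # flag for human review if ratio < 0.80
```

A screening metric like this is a starting point for documentation, not a substitute for a full audit or legal analysis; regulators and courts look at context well beyond a single ratio.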

State-level Regulatory Activism Accelerates

What’s driving it? States are leading the way from a regulatory perspective as the federal government grapples with industry-specific rulemaking (i.e., state legislatures move faster than the federal government).

What can healthcare AI developers and deployers do today?

  • Establish an AI governance program that can keep pace with the evolving state law landscape, and decide whether to build to the most stringent state standard when scaling nationally.

Product Claims Must Match Underlying Capabilities

What’s driving it? Federal regulators continue to issue guidance and press releases directly citing potentially fraudulent claims (FTC) and the need for monitoring model performance (FDA). Industry groups representing patients, providers, and developers stress the potential for misleading claims in health AI as a major risk.

What can healthcare AI developers and deployers do today?

  • Ensure that public claims are supported by underlying metrics and studies.

  • Revisit or build out your FDA software as a medical device (SaMD) strategy to ensure compliance with medical device frameworks.

Develop a Human-in-the-Loop Protocol

What’s driving it? Nearly all federal (e.g., FDA clinical decision support guidance) and state legislation mandates some form of human oversight for AI decisions in high-risk domains, such as healthcare.

What can healthcare AI developers and deployers do today?

  • Ensure that all high-risk decisions, especially clinical decisions, are reviewed by an appropriately licensed provider.

Be Mindful of Data Privacy Implications

What’s driving it? Federal healthcare privacy laws (e.g., HIPAA) and state consumer privacy laws (e.g., California Consumer Privacy Act (CCPA) and Washington’s My Health My Data (MHMD) Act).

What can healthcare AI developers and deployers do today?

  • Ensure that, especially when processing sensitive data or protected health information (PHI), whether for training or otherwise, you comply with federal and state privacy standards, including by updating your privacy policies.
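To make the point concrete, a basic privacy hygiene step before using records for training is stripping direct identifiers. The sketch below is a simplified illustration with hypothetical field names; HIPAA's Safe Harbor method enumerates 18 identifier categories, and removing a subset like this does not by itself achieve de-identification or compliance:

```python
# Hypothetical field names; a real pipeline would cover all 18 Safe Harbor
# identifier categories (dates, geographic subdivisions, device IDs, etc.).
DIRECT_IDENTIFIERS = {"name", "email", "phone", "ssn", "mrn", "address"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed,
    e.g. before using data to train or evaluate a model."""
    return {k: v for k, v in record.items() if k.lower() not in DIRECT_IDENTIFIERS}

patient = {"name": "Jane Doe", "mrn": "12345", "age": 47, "diagnosis": "T2D"}
print(strip_identifiers(patient))  # {'age': 47, 'diagnosis': 'T2D'}
```

Field-level stripping should sit alongside, not replace, the contractual and policy controls (business associate agreements, consent, updated privacy notices) that federal and state privacy laws require.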

Conclusion

In a rapidly evolving healthcare landscape, regulatory compliance and robust AI governance are no longer just legal obligations: they are critical competitive advantages. Organizations that commit to adhering to federal and state laws while implementing sound AI governance frameworks will be better positioned to navigate regulatory scrutiny (and avoid costly penalties), enhance patient trust, and foster long-term customer relationships.

Ready to strengthen your compliance strategy and embrace effective AI governance? Contact us today!