Nixon Law Group

How do Healthcare AI Developers (and Buyers) Stay Ahead of the Regulatory Curve?

No centralized framework for the regulation of artificial intelligence in the United States currently exists. That said, the flurry of regulatory policymaking and legislation, congressional hearings and inquiries, and industry stakeholder organization around the development and deployment of healthcare AI portends major developments in the coming years. Experienced entrepreneurs and executives know that the ability to peer into the future can pay dividends, and we agree.

While industry awaits formal legislative and agency action (e.g., new laws and regulations), we can draw inferences about what to expect using our experience navigating other innovative policy rollouts and currently published memoranda and guidance from the White House, federal and state agencies, and broad industry coalitions. We’ve distilled those inferences into a set of principles we call SHARP-ENF, because once you’ve implemented them, you’ll be sharp enough to cut through the complexity of today’s AI regulatory environment.

First, let’s start with the three federal agencies that will drive enforcement of modern healthcare AI products and services (the “Big Three”). Then, we’ll tackle the SHARP-ENF principles. Read to the end for an easy-to-use reference that summarizes these principles for use in your digital health business.

Department of Health and Human Services (HHS)

HHS enforces two key regulatory frameworks that apply to healthcare AI: the Health Insurance Portability and Accountability Act (HIPAA) and the new Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule (HTI-1).

HIPAA

What is it?

HIPAA is the cornerstone of healthcare data privacy regulation in the U.S. Its three separate Rules (Privacy, Security, and Breach Notification) impose a variety of security controls, documentation, governance, and notice requirements on custodians of protected health information (PHI).

When does it apply?

If a Covered Entity or Business Associate uses PHI in the development (e.g., as training data), deployment, or maintenance of a healthcare AI system, then HIPAA obligations likely apply.

What can I do to comply?

Compliance with HIPAA is an ongoing journey. Developers and deployers of healthcare AI should (i) enter into Business Associate Agreements (BAAs) as required by the HIPAA Rules, (ii) ensure that BAAs permit the intended exchange of PHI, (iii) develop or maintain adequate security controls to preserve patient data privacy, and (iv) report breaches or security incidents involving PHI when required by law. You may want to start with a HIPAA “security risk assessment” and move on to a SOC 2 audit when you’re ready (or a customer requires it).
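To make item (ii) concrete, here is a minimal sketch, assuming a simple internal register, of how a team might check planned PHI uses against what each executed BAA actually permits. The class and field names are hypothetical, not terms drawn from the HIPAA Rules themselves.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical BAA register. Field names are illustrative, not
# regulatory terms of art.
@dataclass
class BAARecord:
    counterparty: str              # Covered Entity or subcontractor
    effective_date: date
    permitted_phi_uses: list[str]  # e.g., "inference", "model training"
    breach_notice_days: int        # contractual notification window

    def permits(self, intended_use: str) -> bool:
        """Check an intended PHI use against what this BAA allows."""
        return intended_use in self.permitted_phi_uses

baas = [
    BAARecord("Example Health System", date(2024, 1, 15),
              ["inference", "quality improvement"], breach_notice_days=30),
]

# Flag counterparties whose BAA does not cover a planned use.
planned_use = "model training"
gaps = [b.counterparty for b in baas if not b.permits(planned_use)]
print(f"BAAs lacking coverage for '{planned_use}': {gaps}")
```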

HTI-1

What is it and when does it apply?

The HTI-1 Final Rule is an AI disclosure and governance rule published by HHS through its Office of the National Coordinator for Health Information Technology (ONC). The rule applies to developers of Certified Electronic Health Record Technology (CEHRT), particularly those who incorporate predictive decision support interventions (DSIs) into their products. The rule aims to regulate and enhance the transparency, effectiveness, and safety of predictive DSIs used in healthcare settings, ensuring that these technologies are developed and managed responsibly.

HHS DSI Presentation (1/18/2024)

Specifically, the HTI-1 Final Rule establishes algorithm transparency and risk management requirements and introduces the agency’s “FAVES” principles: Fair, Appropriate, Valid, Effective, and Safe. These principles set key standards for the development, implementation, and evaluation of predictive DSIs in healthcare technology. They also show up in other federal policy guidance, so you can bet they’ll be crucial to the future of regulatory policy for the agency. Here’s why each element is crucial:

  • Fair: Ensures that DSIs do not perpetuate biases or inequities, aiming for equitable outcomes across different demographics.

  • Appropriate: Guarantees that DSIs are suitable for the contexts and populations in which they are used, adhering to relevant clinical and ethical standards.

  • Valid: Confirms that DSIs are scientifically and statistically sound, providing reliable predictions based on the data and methodologies employed.

  • Effective: Ensures that DSIs achieve their intended outcomes, improving clinical decision-making and patient care.

  • Safe: Prevents harm to patients, ensuring that DSIs do not introduce new risks and that existing safeguards are in place to mitigate potential issues.

While this rule only applies to EHR companies that sell CEHRT, third-party DSIs that are integrated into or otherwise included as part of these EHR products are subject to the same transparency and risk management requirements as native DSIs. So, practically, ANY COMPANY SELLING PREDICTIVE DECISION SUPPORT TECH TO CEHRT VENDORS needs to understand and comply with the rule.

What can I do to comply?

At a high level, the HTI-1 Final Rule requires:

  1. Clear definitions and guidelines for algorithm-based and model-based predictive DSIs.

  2. User and auditor access to information about the DSIs’ design, development, training, and evaluation processes, including details about training data and fairness evaluations.

  3. Implementation of risk management practices for all predictive DSIs, ensuring their validity, reliability, safety, security, and effectiveness.

  4. Mandatory public disclosure of summary information regarding the risk management practices applied to predictive DSIs.

DSI developers should start by compiling a list of their AI model’s “source attributes.” These are characteristics or properties of the data sources that AI models learn from. For example, they might describe what kind of data it is (numbers, pictures, text), where the data came from (a website, a database), the quality of the data (is it complete? accurate?), when the data was collected (last week, a year ago), and whether there are any privacy concerns (does it include personal info?).

Pro Tip: This will be difficult for those companies using “black box” LLMs that do not disclose source attributes.
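For teams that want to operationalize this inventory, here is a minimal sketch of a source-attribute record mirroring the categories described above (data type, origin, quality, recency, and privacy). The structure and field names are our own illustration, not the rule’s formal list of source attributes.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical source-attribute record; fields mirror the informal
# categories described in the text above.
@dataclass
class SourceAttribute:
    name: str                      # e.g., "clinical_notes"
    data_type: str                 # "numeric", "image", "text", ...
    origin: str                    # where the data came from
    completeness_pct: float        # rough quality indicator
    collected_on: Optional[date]   # when the data was gathered
    contains_personal_info: bool   # privacy flag

attrs = [
    SourceAttribute(name="clinical_notes", data_type="text",
                    origin="partner EHR export", completeness_pct=92.5,
                    collected_on=date(2023, 6, 1),
                    contains_personal_info=True),
]

# A disclosure export could simply serialize this register for the
# users and auditors reviewing the DSI.
for a in attrs:
    print(a)
```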

Federal Trade Commission (FTC)

The FTC is the country’s consumer watchdog, enforcing antitrust policy and promoting consumer protection. For digital health and health IT companies, the FTC’s authority applies in two ways: (1) regulating consumer-facing statements (e.g., statements made on company websites, like Terms of Use (TOUs) and Privacy Policies (PPs)) and (2) enforcing the Health Breach Notification Rule (HBNR).

TOUs and PPs

TOUs exist to inform consumers about permitted and prohibited uses of products and services. For a generative AI (GenAI) company, for example, the TOU might describe the respective rights and obligations of the company and its users related to the use and ownership of the company’s LLM. PPs inform users about how their data will be used, disclosed, and maintained by the company.

In a recent blog post, the FTC illustrated how an AI company’s TOUs and PPs might lead to enforcement action. Generative AI companies need immense, continually refreshed datasets to train their models. In some cases, they obtain consent to train on a consumer’s data by publishing TOUs and PPs. However, as the need for access to consumer data grows, companies may surreptitiously change these policies so that they are no longer restricted in how they can use a consumer’s data. The FTC is scrutinizing organizations that change their terms without providing notice to consumers, so be wary of modifying consumer data use without updating the disclosures in your TOUs and PPs. Draft your TOUs and PPs precisely and purposefully to reflect how your company uses, discloses, and maintains data; consider all potential use cases, and notify users of changes.
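One way to avoid that trap is to version your policies and treat any expansion of disclosed data uses as a trigger for user notice. The sketch below is illustrative only; the names and the notice rule are our assumptions, not FTC requirements.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical policy-version record; names are illustrative.
@dataclass(frozen=True)
class PolicyVersion:
    version: str
    effective: date
    data_uses: frozenset[str]  # uses disclosed to consumers

def requires_notice(old: PolicyVersion, new: PolicyVersion) -> bool:
    """Any expansion of disclosed data uses should trigger user notice."""
    return not new.data_uses <= old.data_uses

v1 = PolicyVersion("1.0", date(2023, 1, 1), frozenset({"service delivery"}))
v2 = PolicyVersion("2.0", date(2024, 1, 1),
                   frozenset({"service delivery", "model training"}))

if requires_notice(v1, v2):
    print("Notify users (and consider re-consent) before applying v2.")
```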

HBNR

Another acronym? The HBNR applies to vendors of personal health records (PHRs) and related entities that are not covered by HIPAA. If you’re a digital health company and you’re not subject to HIPAA, you may still be subject to the HBNR, which requires that companies notify affected consumers, the FTC, and, in some cases, the media of a breach of unsecured, personally identifiable health data. To comply with the HBNR, you should carefully track permitted uses and disclosures of PHR data and build a compliant breach response policy.
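As a starting point for that breach response policy, here is a rough triage sketch (not legal advice) of the threshold questions a non-HIPAA digital health company might walk through after a security incident. The function and its inputs are hypothetical simplifications of the rule’s actual definitions.

```python
# Hypothetical triage helper; a real analysis turns on the HBNR's
# definitions and should involve counsel.
def hbnr_notice_likely(is_hipaa_covered: bool,
                       is_phr_vendor_or_related: bool,
                       data_identifiable: bool,
                       data_unsecured: bool) -> bool:
    """True if the incident plausibly triggers HBNR notification."""
    return (not is_hipaa_covered and is_phr_vendor_or_related
            and data_identifiable and data_unsecured)

# Example: a non-HIPAA wellness app holding unencrypted, identifiable
# user health data.
print(hbnr_notice_likely(is_hipaa_covered=False,
                         is_phr_vendor_or_related=True,
                         data_identifiable=True,
                         data_unsecured=True))
```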

Food and Drug Administration (FDA)

The FDA regulates medical devices, which may include AI-powered digital health solutions that support diagnosis or treatment of health conditions. These are sometimes called Software as a Medical Device (SaMD) or Clinical Decision Support Software (CDSS). FDA develops and enforces rules for commercializing SaMD and CDSS, so AI companies in the space should always consult a regulatory expert before introducing their product into commerce to ensure they understand whether and how FDA enforcement may apply.

Pro Tip: We are in a moment of transition. FDA’s guidance on SaMD, CDSS, and other mobile applications does not contemplate the advent of generative AI technologies. The latest guidance was published just prior to the launch of ChatGPT in November 2022. While FDA guidance doesn’t specifically address generative AI, it establishes plenty of requirements for software-based products, which creates a basic compliance framework for evaluating AI solutions. Until the dust settles on FDA’s approach to regulating generative AI technologies, innovators developing AI solutions should continue to comply with existing requirements for software products. If you are developing AI solutions and haven’t considered the impact of FDA regulation, let this be your call to action to do so.

Keep Your Eye on The Big Three

The Big Three are forging the foundation for the development and deployment of modern healthcare AI technology. Have patience—these agencies are huge, bureaucratic machines working hard to navigate a fast-changing landscape. We encourage you to engage with them, submit comments and responses to requests for input, and work with an expert to navigate the grey space we’re living in.

From a commercial perspective, buyers are already building governance frameworks using these principles, with the growing expectation that companies selling AI technology will share these principles. Compliance can be a key differentiator and set you up for success as you commercialize your AI tech. The SHARP-ENF Principles are a great starting point.

Staying Ahead of the Curve with SHARP-ENF Principles

The evolution of AI-based legal and regulatory frameworks is happening in real time, and innovators need a simple set of principles to guide the development and deployment of their AI-powered technology. That’s why we created the "SHARP-ENF" (pronounced "sharp enough") principles, a simple framework complemented by critical questions aimed at facilitating self-assessment for digital health innovators. These principles integrate the FAVES described above, as well as other frameworks published by public and private leaders in the space.

Use these principles to begin to develop a governance framework for your AI product or service. A minimal self-assessment sketch follows the principles below.

Security

In an era where data breaches are increasingly common, ensuring the security of data within AI tools is paramount. This principle emphasizes the importance of safeguarding all data inputs, storage, and processing from unauthorized access or breaches.

Critical Question for Healthcare Innovators: Have comprehensive security protocols been established, protecting data during training and after model deployment, and has the AI been trained on ethically and compliantly sourced data?

Human Oversight

AI tools, especially those involved in clinical decision-making, must operate under vigilant human supervision. This oversight is crucial in minimizing errors that could compromise patient safety.

Critical Question for Healthcare Innovators: Is there a well-defined protocol ensuring human oversight of AI tools, both during model training and after deployment, particularly those influencing clinical decisions, to uphold safety and accountability?

Accountability

Accountability ensures trust and reliability in high-stakes healthcare environments. It is essential to delineate clear responsibility for outcomes suggested by AI tools in both administrative and clinical contexts.

Critical Question for Healthcare Innovators: Who holds responsibility for an AI tool's recommendations, and how is accountability integrated into the organization’s risk management framework?

Reliability/Accuracy

AI tools earn credibility by being reliable and accurate, and that reliability and accuracy must be transparently communicated to developers and end users.

Critical Question for Healthcare Innovators: Can we promote reliability and accuracy by transparently articulating the development and training process of our AI tool?

Patient Safety

Safeguarding patient safety by ensuring the clinical and administrative validity of recommendations is non-negotiable.

Critical Question for Healthcare Innovators: How are the AI tool's recommendations validated for accuracy and safety to prevent potential harm to patients?

Effectiveness

Ensuring the effectiveness of an AI tool is fundamental to maintaining consumer trust.

Critical Question for Healthcare Innovators: Have we conducted thorough testing to confirm that the AI tool fulfills its user-facing benefits and functions effectively?

Notice and Transparency

Transparency about the use of AI tools in patient care is essential, requiring clear communication about the methodologies and operations of these technologies.

Critical Question for Healthcare Innovators: How do we ensure that patients and healthcare providers are adequately informed about the AI tools in use and understand the methodologies driving their recommendations?

Fairness and Bias

Diversity in training datasets is critical to minimizing biases in AI tools, ensuring equitable outcomes across various patient demographics.

Critical Question for Healthcare Innovators: Have we undertaken a rigorous assessment to identify and mitigate potential biases, confirming that our AI tool is trained on a diverse dataset for fair and unbiased outcomes?
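As promised above, here is a minimal self-assessment sketch pairing each SHARP-ENF principle with an abbreviated version of its critical question. The structure is illustrative; adapt the questions, owners, and evidence fields to your own governance program.

```python
# Hypothetical SHARP-ENF self-assessment scaffold; questions are
# abbreviated from the text above.
SHARP_ENF = {
    "Security": "Are data protected during training and after deployment?",
    "Human Oversight": "Is oversight defined for training and deployment?",
    "Accountability": "Who owns the tool's recommendations and outcomes?",
    "Reliability/Accuracy": "Can we articulate development and training?",
    "Patient Safety": "How are recommendations validated to prevent harm?",
    "Effectiveness": "Has testing confirmed the tool's stated benefits?",
    "Notice and Transparency": "Are patients and providers informed?",
    "Fairness and Bias": "Have we assessed and mitigated potential biases?",
}

# Record an answer and an owner per principle; unanswered items become
# the backlog for your governance program.
assessment = {p: {"answer": None, "owner": None} for p in SHARP_ENF}
open_items = [p for p, entry in assessment.items() if entry["answer"] is None]
print("Open governance items:", open_items)
```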

Adhering to the "SHARP-ENF" principles is more than a compliance checklist; it's a commitment to pioneering responsible and ethical AI innovations in digital health. These principles, paired with introspective questions, provide a structured framework for digital health companies to navigate the intricate landscape of AI deployment. By embedding these values into the core of their operations, companies not only mitigate legal and ethical risks but also build enduring trust with their users, laying the foundation for impactful and sustainable health solutions in the digital age.


To learn more about the complex and evolving regulatory environment surrounding AI in healthcare, check out our recent webinar, "AI Law + Policy Landscape for Digital Health Innovators," available to watch on-demand now.

You can also get a monthly executive summary of the most important topics for healthcare innovators by signing up for our Innovation Insights newsletter. You’ll get links to recent blog posts, resources, and industry opportunities, delivered straight to your inbox.

Our attorneys work with artificial intelligence companies building the future of healthcare. To learn more about how we can add value to your business, contact us!