What does the advent of generative AI mean for the FDA’s legal framework?

Life Sciences | By Laura DiAngelo, MPH

Oct. 29, 2024

The FDA is flagging concerns that its existing legal and regulatory framework might not accommodate novel technologies, with generative AI (GenAI) a particular concern. In new documents prepared for an upcoming Digital Health Advisory Committee meeting, the FDA lays out the gaps and contradictions it sees in that framework. Here, AgencyIQ walks through the implications for future digital health policy.

The FDA has long struggled to regulate software as a medical device. With generative AI, FDA’s existing frameworks may be at their breaking point

  • The FDA’s regulation of a product typically focuses on its intended use. This means that artificial intelligence (AI) methods and products may fall into regulated categories depending on what they’re being used for – and by whom. AI algorithms themselves can be a medical device, a component of a medical device, a component of a companion product (like an app intended to be used with a drug), a device type exempted from FDA regulation, and more. [See here for a complete list of the AI/machine learning (ML)-enabled medical devices authorized by FDA to date.] It also means that the use of AI in regulated contexts will need to comply with the FDA regulations that typically apply in those contexts, including regulations governing oversight of clinical research of drug products (e.g., using AI to inform trial participant stratification) and/or manufacturing.
  • FDA’s regulation also depends on the context in which the AI is used. First, consider AI that meets the definition of a medical device, meaning it is “an instrument, apparatus, implement, machine, contrivance, implant… intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.” These products are regulated as medical devices: they must be authorized by the FDA under the medical device pre-market pathways, meet the agency’s expectations for valid scientific evidence (VSE) supporting their use, and comply with medical device post-market requirements. For an AI-enabled tool used in developing a medical product, by contrast, the FDA will require the sponsor to submit information on the AI in the same way it would other clinical trial data; the sponsor must provide data justifying use of the AI within the context of the development program.
  • Artificial intelligence is likely to be broadly useful to life sciences companies, which poses significant challenges for the FDA. Since the FDA’s approach is technology- and method-agnostic, anything that meets the definition of a regulated product or is used in a regulated context is expected to comply with the agency’s existing regulatory requirements. Instead of adapting its regulations to novel technologies, the FDA tends to issue guidance interpreting how the regulatory requirements already on the books will apply to those technologies – in other words, novel technologies have to be made to fit into existing regulations.
  • Now, the FDA’s Digital Health Advisory Committee (DHAC) will meet in November to discuss an important new technology: generative AI. Per the FDA’s briefing documents for the meeting, the agency’s legal frameworks do not fit well with the practicalities of how GenAI is being used. In this piece, AgencyIQ explains how things work now and why the FDA thinks its existing frameworks aren’t fit for purpose.

Before we talk about why FDA thinks its frameworks for AI regulation don’t work well, let’s look at how artificial intelligence is currently regulated under existing U.S. federal frameworks

  • At the U.S. federal level, AI is defined as: “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” GenAI is “The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”
  • The definitions above come from Executive Order (EO) 14110, President JOE BIDEN’s Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued on Oct. 30, 2023. That document laid out a high-level roadmap for AI-related policy development in the U.S., directing various federal departments and agencies to develop sector-specific strategies for both using and regulating AI at the federal level.
  • A few toplines from that EO: The National Institute of Standards and Technology (NIST) plays a key role in the federal AI response as directed under the Order, tasked with developing the standards, guidelines and definitions related to AI regulation and use at the federal level. The EO directs NIST to develop the “guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.” These activities then support the work of other agencies and departments under the EO, with NIST’s work helping build a sector-agnostic baseline on AI regulation across the federal government.
  • Over the last year, NIST has been prolific in developing guidance and resources on AI. In May 2024, it published four guidance documents predominantly focused on GenAI. [Read AgencyIQ’s analysis of those guidance documents here.] On July 26, it announced that it was finalizing three of those documents: the GenAI-focused guidance (NIST AI 600-1), a supplement to its Secure Software Development Framework (SSDF) on Reducing Threats to the Data Used to Train AI Systems, and its plan for Global Engagement on AI Standards (NIST AI 100-5). This left only one document from the May batch still in draft format, the document on reducing risk from synthetic content (NIST AI 100-4).
  • The EO also describes a system to regulate what it calls “dual-use foundation models” in the U.S. Under the EO, these are defined as: “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters” such as a model that could bolster access to chemical, biological, radiological or nuclear (CBRN) weapons or permit “the evasion of human control or oversight through means of deception or obfuscation.” These AI models would face a higher level of federal oversight under the EO, with developers required to provide updates and performance information to the Commerce Department.

For life sciences-regulated industries, the FDA has been grappling with how to apply its legal frameworks to AI technologies

  • To date, the FDA has issued more concrete guidance documents on AI as a medical device than it has on AI in drug product development, since it has more extensive regulatory experience with authorizing AI-enabled medical devices. In 2023, the agency issued initial draft guidance on predetermined change control plans (PCCPs) in medical device submissions for AI-enabled products. This mechanism allows sponsors to pre-specify certain adaptations or changes to their products that they plan to enact after the product launches. PCCPs give sponsors flexibility, since they would otherwise need to submit entirely new marketing applications for AI products as they change over time. Notably, a final version of that guidance document is currently pending administrative review. The FDA’s device center is also planning new draft guidance on “lifecycle management” for AI-enabled medical devices, per its 2025 guidance agenda.
  • New guidance on the use of AI in drug development is on the way – but hasn’t been published just yet. In 2023, the FDA issued a discussion paper seeking comment on the role that AI/machine learning (ML) could play in drug development. The paper identified a robust list of use cases for AI/ML across different research stages, from discovery to clinical and non-clinical research. In remarks at a recent workshop, Center for Drug Evaluation and Research (CDER) Director PATRIZIA CAVAZZONI previewed what the new guidance would look like, describing a multi-modal risk-based approach that would account for both the risk level of the context of use and the risk of the model itself. As she put it: “That risk-based approach will be first and foremost centered on the context of use and then, in conjunction with that, will be really anchored by our assessment of the model risk, which will be fundamentally predicated on the model influence and model consequence. So, we will plan to take a sort of multimodal risk-based approach as we think about how we will be reviewing AI and machine learning elements in submissions and anything that comes our way that is part of a program.” [See AgencyIQ’s full analysis of that meeting here.] (A purely illustrative sketch of that two-factor structure appears after this list.)
  • The FDA has several initiatives underway to get a better handle on AI in regulated contexts. These include convening an AI Council within CDER to coordinate that Center’s work on AI policy and to meet requirements under the EO. CDER has also established a new Quantitative Medicine Center of Excellence (QMCoE) to “facilitate and coordinate” on novel QM topics, including AI. The Center also announced a new initiative in June 2024, the Emerging Drug Safety Technology Program (EDSTP), focused specifically on the use of “artificial intelligence (AI) and other emerging technologies in pharmacovigilance (PV).” On CDRH’s side, the agency is continuing to work on its new guidance documents (finalizing the PCCP guidance and issuing a draft guidance on lifecycle management for AI-enabled products). The device center is, as noted above, already well-versed in authorizing new AI products.
  • The DHAC is the newest advisory committee on FDA’s roster. Although staffed by CDRH personnel, the committee’s remit comprises a laundry list of the FDA’s high-priority issues for emerging technologies across product types. The committee will address AI/ML, digital health technologies (DHTs), digital therapeutics, patient-generated health data and real-world evidence (RWE).
  • FDA is convening the DHAC for the first time in November 2024, a year after the new committee was announced. The meeting will focus specifically on GenAI as a medical device. Instead of looking at a particular application for an individual device, the FDA is asking the committee for feedback on its understanding of how the agency would go about regulating GenAI as a medical device, and the potential limitations and challenges it might face.
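To make the two-factor structure Cavazzoni described more concrete, below is a minimal, purely illustrative Python sketch; it is not drawn from any FDA guidance. It assumes hypothetical qualitative levels in which “model risk” is rolled up from model influence and model consequence, and overall review rigor scales with both model risk and context-of-use risk; the categories, rollup rule, and output labels are invented for illustration only.

```python
# Purely illustrative sketch of a two-factor, risk-based framing for AI in drug
# development, loosely patterned on the approach CDER described (context of use
# plus model risk, with model risk reflecting model influence and model
# consequence). All categories, labels, and combination rules are hypothetical.

LEVELS = ["low", "medium", "high"]

def model_risk(influence: str, consequence: str) -> str:
    """Roll model influence and model consequence up into a qualitative model risk."""
    score = LEVELS.index(influence) + LEVELS.index(consequence)  # 0..4
    return LEVELS[min((score + 1) // 2, 2)]  # hypothetical rollup rule

def review_rigor(context_of_use_risk: str, influence: str, consequence: str) -> str:
    """Combine context-of-use risk with model risk to suggest a level of scrutiny."""
    combined = max(LEVELS.index(context_of_use_risk),
                   LEVELS.index(model_risk(influence, consequence)))
    return ["routine documentation",
            "additional validation evidence",
            "extensive validation and lifecycle controls"][combined]

if __name__ == "__main__":
    # Example: an AI model used to help stratify trial participants (hypothetical ratings)
    print(review_rigor(context_of_use_risk="medium",
                       influence="high", consequence="medium"))
```

In this sketch, a high-influence, medium-consequence model used in a medium-risk context lands in the most scrutinized tier; the actual guidance, once published, will define its own categories and criteria.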

Documents from FDA’s upcoming meeting of its Digital Health Advisory Committee (DHAC) offer some new insights into what the FDA sees as its limitations and next steps

  • The FDA published meeting materials for the Nov. 20-21 meeting about a month in advance. This is unusual for advisory committee meeting materials, which the agency typically publishes just two business days ahead of these meetings. The agency issued an executive summary for the meeting that reads like a white paper on GenAI under FDA’s statutory and regulatory frameworks, as well as discussion questions for the Committee.
  • A key top line: FDA doesn’t seem to think that its existing legal and regulatory frameworks are entirely sufficient for GenAI. “The rapid rise of interest in GenAI may present challenges to FDA’s laws and regulations,” the agency acknowledged early in the DHAC documents. The Executive Summary continues: “there are unique characteristics of GenAI that, as part of a product’s design without adequate risk controls, can introduce uncertainty in the product’s output and can make it difficult to determine the bounds of a product’s intended use, and therefore, whether it meets the definition of a device and is the focus of FDA’s device regulatory oversight.”
  • To clarify: the FDA’s legal definition of a medical device, cited above, relies on the product’s intended use, but inherent characteristics of GenAI mean that it’s not entirely clear whether GenAI really fits within the FDA’s requirements related to intended use. Further, it’s not clear whether a product that consists of or uses GenAI could move in and out of the FDA’s device definition over the course of its lifecycle, or how a regulator could accommodate that flux.
  • Software as a medical device (SaMD) definitions under the 21st Century Cures Act present more complexity. In 2016, Congress passed the landmark 21st Century Cures Act, which, among other legal changes, carved certain software functions out of the legal definition of a medical device, taking them out of the FDA’s jurisdiction.
  • The Act created five specific carve-outs from FDA-regulated SaMD: (1) software functions that provide administrative support for a health care facility, such as scheduling software; (2) software intended to maintain or encourage a healthy lifestyle that’s unrelated to the diagnosis, cure, mitigation, prevention, or treatment of a disease or condition (e.g., MyFitnessPal, Apple’s Health app); (3) software intended to serve as electronic patient records within the provision of care, specifically electronic health records (EHRs; this carve-out applies so long as the software does not interpret or analyze those records to aid in “diagnosis, cure, mitigation, prevention or treatment”); (4) software intended to transfer, store, convert or display lab results (again, so long as the product does not interpret or analyze the test or device data); and (5) software that can inform care. This fifth carve-out has four sub-criteria, which collectively define “clinical decision support” technologies. [Read AgencyIQ’s full breakdown of the Cures Act’s SaMD definitions and carve-outs here.]
  • According to the DHAC executive summary, GenAI specifically risks blurring the lines of those carve-outs. The “ability of GenAI to tackle diverse, new, and complex tasks may contribute to uncertainty around the limits of a device’s output,” the agency explains. Because GenAI will, by definition, “generate ‘new information,’ and some implementations may even intentionally leverage this capacity to generate more ‘creative’ responses,” GenAI developed for one of the carved-out uses listed above could end up generating output that meets the device definition. The FDA provides the example of a GenAI product intended to summarize a patient visit that adds a hallucination “providing a new diagnosis that was not raised during the interaction.” At that point, while the GenAI would have been intended simply to summarize notes, its editorial insertion of a diagnosis means that it would technically meet the definition of a diagnostic product – and therefore be subject to FDA’s oversight as a medical device.
  • The unaddressed (and problematic) risk is that the GenAI’s hallucination and diagnosis could occur completely outside of the agency’s view or awareness. (A simplified, hypothetical illustration of this definitional drift appears after this list.)
  • The agency also points to some complexity around “foundation models.” As described above, dual-use foundation models are subject to scrutiny by the Commerce Department under the EO. However, these types of models may also serve as the foundation of a medical device – foundation models “are typically not created for an individual product, nor are they generally intended for use as a device” per the legal definition, says FDA. Instead, the application developers who are developing a device may leverage the foundation model. In this case, the FDA points to its existing policy on off-the-shelf software used in a medical device, noting that “an analogous lack of software lifecycle control over an incorporated foundation model may raise certain challenges” both for the developer in understanding the behavior of their GenAI device and for regulators reviewing the product’s application. Further, “it may be difficult to develop an accurate device description or characterization of the GenAI-enabled device if little is known about the base foundation model.” However, it’s not clear how either the developer or FDA itself would get this information.
  • The meeting documents go into further depth about difficulties adapting the FDA’s consideration of pre-market evidence, risk, and post-market surveillance to GenAI tools. Pre-market evidence relies on “a general understanding of the device and its design” and how those relate to what the device is intended to do. As noted above, the agency has concerns about what “intended use” means for GenAI products, and also about whether “current methodologies for performance evaluation” could help regulators reach a meaningful understanding of how a GenAI product actually performs. Per FDA, “new methodologies may also need to be developed to evaluate the performance” of GenAI products.
  • The evolution of these products post-market will also likely require new methods, the FDA notes, since “it could be particularly challenging” to oversee them within the existing regulatory schema. The FDA’s understanding of the risk of a regulated medical product, or any regulated activity, influences what information and validation processes it will require sponsors to provide. For GenAI, the agency acknowledges its own lack of clarity about how to assess those risks in order to inform regulatory oversight. For example, the agency currently maintains a policy of “enforcement discretion” for certain “low risk” digital health applications that do meet the definition of a medical device, but it’s unclear how the integration of GenAI would adjust that framework; notably, that guidance is due for a revision in 2025.
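As referenced above, the definitional drift the FDA describes can be pictured with a minimal, purely hypothetical Python sketch. It contrasts a classification based on a product’s stated intended use with one based on what its output actually contains; the carve-out categories, keyword cues, and example text below are invented for illustration and do not represent any FDA decision logic.

```python
# Hypothetical illustration of how a GenAI function's *output* could drift across
# the device/non-device line even when its *intended use* (e.g., visit-note
# summarization, a Cures Act carve-out) does not. Keywords and logic are invented.

CARVED_OUT_USES = {"administrative support", "healthy lifestyle",
                   "electronic records", "lab result display"}
DIAGNOSTIC_CUES = ("diagnosis", "diagnosed with", "consistent with", "rule out")

def device_by_intended_use(intended_use: str) -> bool:
    """Classify on the label alone: carved-out uses are treated as non-devices."""
    return intended_use not in CARVED_OUT_USES

def device_by_output(output_text: str) -> bool:
    """Classify on behavior: output that asserts a diagnosis looks device-like."""
    text = output_text.lower()
    return any(cue in text for cue in DIAGNOSTIC_CUES)

if __name__ == "__main__":
    intended_use = "electronic records"   # e.g., summarizing a patient visit
    summary = ("Patient reported fatigue and joint pain. "
               "Findings are consistent with early rheumatoid arthritis.")  # hallucinated finding
    print("Device by intended use?", device_by_intended_use(intended_use))   # False
    print("Device by actual output?", device_by_output(summary))             # True
```

In this toy example, a note-summarization function sits inside a Cures Act carve-out on paper, yet a single hallucinated finding pushes its output into diagnostic territory; the gap between the two classifications is the oversight problem the DHAC materials flag.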

What does this all mean?

  • These concerns are not device-specific: Novel technologies are increasingly “extend[ing] beyond the walls of CDRH,” as FDA Digital Health Center of Excellence (DHCoE) Deputy Director SONJA FULMER put it in February 2024, speaking at the Food and Drug Law Institute (FDLI) digital health conference. With CDER working on guidance and new projects on the use of AI in drug spaces, many of the concerns and legal questions that CDRH has identified are likely to show up there as well. It’s becoming clear that the FDA will have to undertake an agency-wide rethink of its own legal framework and its expectations for data and evidence, both for AI broadly and for GenAI in particular.
  • Is FDA asking for new legal authority? Not right now. The agency “has committed to developing regulatory approaches for these devices using current authorities as well as exploring options that may require new authorities,” according to the DHAC materials. In other words, the FDA is looking at how AI-enabled technologies can fit into its regulatory systems, while also flagging places where existing systems might not work well. Speaking at an Oct. 28, 2024 event, DHCoE Director TROY TAZBAZ further explained: “So we all have statutory authorities, but the question is not whether we have those… the question is, can we apply it to this new paradigm of type of product that is coming to the market?”
  • Even without an adjusted legal paradigm, FDA is already working on AI and GenAI issues. For now, “there’s a lot of frogs that are being sent our way, and we have to kiss all of them and validate whether they’re going to actually turn into [a] prince – and the problem with that is that you can’t do that in a highly risky environment where you’re applying it,” said Tazbaz at the October event. He pointed to the advent of technologies like ChatGPT as the impetus for renewed regulatory scrutiny on the legal frameworks, as widespread interest in these technologies has put development – and therefore FDA’s regulatory stakes – into overdrive. [See AgencyIQ’s 2023 analysis on ChatGPT as a medical device, which previewed many of these issues.]
  • The agency has asked for new authority on digital health processes before, notably under the auspices of the Pre-Certification (Pre-Cert) pilot for SaMD. While that program never made it off the ground, it could inform what, exactly, the FDA will ask for – and how Congress might react to the ask. Former CDRH Director JEFF SHUREN had previously discussed a potential novel medical device market access pathway, the Voluntary Alternative Pathway (VAP). But even a more flexible regulatory pathway wouldn’t address some of the questions raised by the FDA in its DHAC materials, including how risk should be considered and what to do about the nebulous “intended use” of a GenAI whose scope might shift (perhaps invisibly) over time.
  • And a final note: These policies are likely to extend not just beyond the walls of CDRH, but beyond the walls of FDA altogether. As directed in the EO, FDA’s parent Department of Health and Human Services (HHS) is developing its own AI policy strategy; in July, HHS re-organized its AI policy work by elevating the Office of the National Coordinator for Health Information Technology (ONC) into an Assistant Secretary-level office, the Assistant Secretary for Technology Policy (ASTP). ONC-ASTP, notably, is the federal entity that oversees much of the technology carved out of the device definition under the Cures Act, including EHR technologies. With ONC-ASTP tasked with directing the Department-level AI strategy while also serving as FDA’s partner in regulating individual health IT products that operate between the definitions of device and non-device functions under the Cures Act, alignment between these two entities – as well as with NIST at the broader federal level – will be paramount going forward.

To contact the author of this item, please email Laura DiAngelo (ldiangelo@agencyiq.com).
To contact the editor of this item, please email Kari Oakes (koakes@agencyiq.com).
