
Policy Needs for Effective Artificial Intelligence Innovation, Regulation and Adoption


Artificial intelligence (AI) systems and applications are increasingly ubiquitous in 21st century life. AI is now poised to disrupt healthcare, with the potential to improve patient outcomes, reduce costs, and enhance work-life balance for healthcare providers.

Integrating AI into healthcare safely and effectively will need to be a careful process. Silicon Valley’s ethos of “move fast and break things” is unacceptable if it means “move fast and break people.” Policymakers and stakeholders must strike a balance between the essential work of safeguarding patients and ensuring that innovators have access to the tools they need to make products that improve public health.

In recent years, the U.S. Food and Drug Administration (FDA) has indicated that it recognizes the unique, iterative nature of these novel health technologies and plans to focus on developers’ practices moving forward in order to support rapid release of software that falls under its authority. On January 7, FDA’s Center for Devices and Radiological Health (CDRH) released its updated Software Precertification (Pre-Cert) Pilot Program working model 1.0, a test plan, and a regulatory framework. The goal of the Pre-Cert program is to regulate digital health technologies in ways that foster innovation while protecting patient safety.


It is encouraging to see the FDA’s steady work innovating its regulatory approach in this space while communicating its evolving thought process and welcoming feedback from the public. Harnessing the potential of AI in healthcare requires additional regulatory clarity, simplified access to healthcare data with clear privacy protections for patients, and best practices for building effective solutions and demonstrating their value.

What is Artificial Intelligence?

AI is an umbrella term for machines that can perform one or more tasks normally done by humans. When people refer to AI today, they generally mean data-based AI, also known as machine learning. This type of AI analyzes large amounts of data using algorithms that learn how to do tasks without being explicitly programmed. Different types of algorithms suit different types of problems, much as certain statistical methods are more appropriate for certain types of analyses. In contrast, rules-based AI uses previously validated information (e.g., clinical guidelines or other published studies) to set up a flow chart of individual steps that lead to clinical recommendations.
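To make that distinction concrete, here is a minimal Python sketch contrasting the two approaches for a hypothetical follow-up flag. The thresholds, training records, and labels are entirely invented for illustration and are not drawn from any real clinical guideline or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rules-based AI: an explicit, human-authored decision path.
def rules_based_flag(systolic_bp, age):
    """Flag a patient for follow-up using hard-coded (hypothetical) thresholds."""
    if systolic_bp >= 140:                # illustrative cutoff, not a real guideline
        return True
    if age >= 65 and systolic_bp >= 130:  # illustrative cutoff, not a real guideline
        return True
    return False

# Data-based AI (machine learning): the decision rule is learned from labeled examples.
# Toy training data: [systolic_bp, age] paired with a synthetic follow-up label.
X = np.array([[120, 40], [150, 55], [135, 70], [118, 30], [160, 62], [128, 45]])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

patient = np.array([[142, 58]])
print("Rules-based flag:", rules_based_flag(142, 58))
print("Learned follow-up probability:", model.predict_proba(patient)[0, 1])
```

The rules-based version changes only when a human edits it; the learned version changes whenever it is retrained on new data, which is precisely the iterative behavior regulators are grappling with.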

What is the Potential for Artificial Intelligence in Clinical Decision Support?

Clinical decision support provides “clinicians, staff, patients, or other individuals with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care,” and is used to support healthcare providers in diagnosis, treatment decisions, and population health management.

AI may be able to improve current clinical decision support software in important ways. For example, it can reduce the administrative burden on providers by using natural language processing to clean and structure free text clinical notes in electronic health records (EHRs) so clinicians only need to enter data once.
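As a purely illustrative sketch of that idea, the snippet below pulls a few structured fields out of a free-text note using simple pattern matching. The note, field names, and patterns are invented; real clinical NLP pipelines rely on trained language models and clinical vocabularies rather than hand-written rules.

```python
import re

note = "Pt is a 67 yo male with hx of type 2 diabetes. BP 138/86 today. Continue metformin 500 mg BID."

# Hypothetical extraction rules for demonstration only.
structured = {"age": None, "blood_pressure": None, "medications": []}

age_match = re.search(r"(\d{1,3})\s*yo", note)
if age_match:
    structured["age"] = int(age_match.group(1))

bp_match = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", note)
if bp_match:
    structured["blood_pressure"] = {
        "systolic": int(bp_match.group(1)),
        "diastolic": int(bp_match.group(2)),
    }

# Toy medication list; a production system would use a full drug terminology.
for name, dose in re.findall(r"(metformin|lisinopril|atorvastatin)\s+(\d+\s*mg)", note, re.I):
    structured["medications"].append({"name": name.lower(), "dose": dose})

print(structured)
```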

Data-based AI-enabled software can also combine enormous amounts of data from various sources. These sources might include medical images, EHR data, sensor measurements, environmental data, and even data from consumer devices, such as activity trackers, to provide novel insights and predictors of diagnosis and treatment options. AI also could be used to customize the timing and formatting of clinical recommendations, personalizing workflows to individual clinicians.
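A small, hedged sketch of that kind of data fusion, using made-up patient identifiers and column names: records from an EHR extract and a consumer activity tracker are joined into a single feature table that a downstream model could learn from.

```python
import pandas as pd

# Entirely synthetic examples of two data sources keyed by a patient identifier.
ehr = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "hba1c": [6.1, 7.8, 5.6],          # lab value from the EHR
    "dx_hypertension": [1, 1, 0],      # coded diagnosis
})
tracker = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "avg_daily_steps": [8200, 3100, 11000],  # consumer device data
    "avg_sleep_hours": [7.1, 5.4, 8.0],
})

# Join the sources into one feature table an AI model could train on.
features = ehr.merge(tracker, on="patient_id")
print(features)
```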

Clinical decision support software that uses AI to do one or more of these tasks can face a variety of regulatory and legal hurdles. An important delineator in legal and regulatory risk assessments is whether the AI acts independently (i.e., the software makes diagnostic or treatment decisions that are automatically implemented or that the human user is not equipped to evaluate) or whether it augments or supports clinical decision-making, where the software makes recommendations but the final decisions are made by a healthcare professional.

Near-Term Priorities for Policymakers

Regulatory Clarity
The issue of explainability, defined as a human-comprehensible explanation of how software arrives at a specific recommendation, raises one of the most important questions around risk assessment of AI-enabled software. Many software systems, particularly those developed with data-based AI methods, are “black box” systems that do not explain how the input data are analyzed to reach a recommendation. This can be because the algorithm or model is too complex to be understood by humans or because the functionality is considered proprietary.
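One partial approach to explainability is to surface how much each input contributed to a recommendation. The sketch below, using synthetic data and invented feature names, inspects the learned weights of a simple linear model; many more complex models offer no equivalently direct account of their reasoning, which is what makes them “black boxes.”

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: three illustrative inputs to a risk prediction.
feature_names = ["systolic_bp", "hba1c", "age"]
X = np.array([[150, 7.9, 64], [118, 5.4, 35], [142, 8.2, 70],
              [125, 5.9, 41], [160, 9.1, 58], [110, 5.2, 29]])
y = np.array([1, 0, 1, 0, 1, 0])  # made-up outcome labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, the learned weights give a simple, human-readable account
# of how each input pushes the recommendation up or down. Complex models
# generally need dedicated explanation techniques, and some resist explanation.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```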

The 21st Century Cures Act removed certain types of clinical decision support software from FDA authority, with one of the requirements being that the software enables healthcare professionals “to independently review the basis for such recommendations that such software presents so that it is not the intent that such healthcare professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.”

The FDA’s December 2017 draft guidance clarified that software developed with rules-based AI that used publicly available evidence, such as clinical practice guidelines, published literature, and FDA-approved labels, would meet this standard. However, there was less clarity about how this standard may apply to software that uses proprietary algorithms. Even if transparency is desired, not all AI techniques can be explained in comprehensible ways, depending on the volume of data inputs used to reach a recommendation and the machine learning techniques involved.

If the software is determined to be subject to FDA authority because healthcare professionals can’t fully review the basis for its recommendations, would some degree of explainability affect FDA assessments of risk to the patient when providers use these software products? According to the International Medical Device Regulators Forum’s guidance on Software as a Medical Device (SaMD), which was adopted by FDA in 2017, software that acts automatically would be considered higher risk than software that acts as a support or resource for a clinician’s decision-making. However, more guidance is needed within these classifications.

Data Access and Privacy
A critical need for innovators in this space is access to clinical data to develop software with data-based AI techniques. This support may take the form of access to large, representative, and curated high-quality clinical datasets that can be mined with big data AI techniques to develop highly personalized clinical guidelines that can be incorporated into rules-based AI software.

Data-based AI also requires large volumes of clinical data to “train” the software, but this data must consist of uncurated “real-world” data that resembles the quality of data the software will see when deployed (in the same way you wouldn’t want to train an autonomous vehicle only on an empty racetrack when it will be expected to drive down crowded city streets). To encourage innovation in the data-based AI space, improving data standards and increasing the interoperability of data, while upholding patient privacy protections, will be critical. Collecting reliable outcome data will be a particular challenge that matters both for developing software and for evaluating performance over time for regulatory surveillance.
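The sketch below illustrates the point with entirely synthetic numbers: a model trained on a clean, curated cohort is scored both on that cohort and on a noisier cohort meant to stand in for routine practice. Any gap between the two scores is the kind of signal that regulatory surveillance of real-world performance would need to capture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, noise, shift):
    """Synthetic cohort: one lab value and one vital sign drive a binary outcome."""
    lab = rng.normal(6.0 + shift, 1.0, n)
    vital = rng.normal(130 + 5 * shift, 15, n)
    risk = (lab - 6.0) + (vital - 130) / 15 + rng.normal(0, noise, n)
    return np.column_stack([lab, vital]), (risk > 0).astype(int)

# "Curated" development data vs. a messier cohort resembling routine practice.
X_dev, y_dev = make_cohort(400, noise=0.2, shift=0.0)
X_real, y_real = make_cohort(400, noise=1.5, shift=0.8)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# Compare performance on the curated data with performance on the messier cohort.
print("Accuracy on curated data:    ", accuracy_score(y_dev, model.predict(X_dev)))
print("Accuracy on real-world-like: ", accuracy_score(y_real, model.predict(X_real)))
```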

Demonstrating Value
Finally, best practices and case studies are needed to help innovators build effective solutions and demonstrate the value of those solutions. Adoption of AI-enabled clinical decision support software will always be a two-stage process: the provider system decides to adopt the software into its workflow, and individual physicians must trust the software enough to let it augment their clinical decision-making. As such, developers should engage a range of perspectives in the development process, including front-line physicians who bring therapeutic expertise and knowledge of the workflow. Best practices for doing this ethically will be essential.

AI-enabled clinical decision support software must be able to demonstrate its impact on improving provider system efficiency and helping providers meet the key outcome and cost measures tied to their reimbursement from payers. Public and private coverage and reimbursement to provider systems will also drive adoption and increase the ROI for these technologies. A useful first step would be to establish which clinical decision support software features and performance outcomes payers will value most, as well as the types of evidence that will be required to prove performance gains.

Why It Matters

As a far-reaching, rapidly emerging technology, AI-enabled clinical decision support software has the potential to enhance public health and improve outcomes. When clinicians can arrive at a correct diagnosis faster using AI-enabled clinical decision support software, the costs of unnecessary testing and treatment are curbed, and patient quality of life improves because patients are spared the pain and suffering that unnecessary treatment can cause.

In its recent release on the Pre-Cert program, the FDA continues to advance its thinking on risk assessment, algorithm disclosure, clinical validation, and post-market data collection. However, other stakeholders in the system also need to come to consensus on best practices and standards that address the evidence needed for increased adoption of these technologies, support effective assessment of patient risk from these products, and ensure AI systems are ethically trained.

The views and opinions expressed in this blog or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.


Updated April 5, 2019