Artificial intelligence is no longer just something out of science fiction movies. Many industries, including healthcare, are integrating AI into their operations.
AI is helping to change healthcare by supporting data gathering and decision-making. AI can tackle some problems the way a human brain would, but faster and at far greater scale. The possibilities range from patient data and diagnosis to financial decision-making and beyond.
Many healthcare professionals believe that AI is the future of innovation in healthcare, but getting everyone on board with a new tool can be daunting.
When one large health system went through the process of applying AI within its organization, it discovered that in order to integrate this new tool, there had to be trust and the ability to adapt to the culture of the healthcare organization.
The U.S. Department of Veterans Affairs (VA) is another health system developing and using AI. The VA formed the National Artificial Intelligence Institute (NAII), which develops AI through research and development to support veterans and their families. The NAII runs “AI Tech Sprints,” working with outside organizations and companies to develop AI applications that can help support veterans.
HIMSS reached out to the NAII to learn more about its process of applying AI and the benefits it has seen.
Rafael Fricks, PhD, Computer Engineer, NAII: We work with other organizations through AI Tech Sprints to see what they bring to the table and hone it so that it meets veterans’ needs. Some of these applications include a chatbot that helps with patient experience, risk stratification algorithms, programs that help optimize medication recommendations for individual patients, and efficiency improvements that use natural language processing to process text, such as the provider registry.
One area that is near and dear to the Department of Veterans Affairs is suicide prevention. The VA sets goals in this domain, and the organizations involved in the tech sprints will tackle them from different angles—from medical records, socioeconomic indicators, social activity, to even analyzing the sentiment of different conversations if they prove indicative. The goal is to catch potential decline early, well before it’s notable, guide a person toward counseling if needed, and steer them toward a happier and healthier place.
Gil Alterovitz, PhD, FACMI, FAMIA, Director, NAII: We see AI all around us, whether it's driving with a map app or getting social media recommendations. AI is used in a number of different areas of healthcare. For example, we can use it to make it easier to identify which benefits might apply to a veteran and help them access those benefits faster.
During COVID-19, AI helped with risk prediction. We could use it to estimate a person's risk of becoming seriously ill or dying over time.
In imaging, AI reviews the images and highlights areas the model finds suspicious, so the physician can focus on the highlighted regions.
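At its simplest, that highlighting step amounts to flagging regions whose model-assigned suspicion score crosses a threshold. The sketch below is purely illustrative; the scores, threshold, and function name are invented for this example, not the VA's actual tooling.

```python
# Illustrative sketch only: thresholding a model's per-region suspicion
# scores to flag areas for physician review. All values here are invented.

def flag_suspicious_regions(heatmap, threshold=0.8):
    """Return (row, col) coordinates of cells whose score exceeds threshold."""
    return [
        (r, c)
        for r, row in enumerate(heatmap)
        for c, score in enumerate(row)
        if score > threshold
    ]

# A tiny 3x3 "image" of suspicion scores from a hypothetical model.
scores = [
    [0.10, 0.20, 0.15],
    [0.05, 0.92, 0.88],
    [0.12, 0.30, 0.10],
]

print(flag_suspicious_regions(scores))  # → [(1, 1), (1, 2)]
```

In practice the physician, not the threshold, makes the call; the flagged coordinates only direct attention to where the model saw something worth a closer look.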
We are also interested in using AI to find ways to engage veterans who are not currently engaging.
Alterovitz: When you think about AI, it's certainly been evolutionary; it wasn't just one decision. Each application's approval depends on the use case. More recently, we have been actively designing a strategy around the federal government's nine principles for trustworthy AI. It's very important for AI to be trusted when it's adopted. Each use case gets evaluated against those criteria, and applications that don't meet them are not pursued or are retired. That's the framework for how AI is done overall in the VA.
Fricks: We have very extensive written criteria internally that are used to evaluate prototypes, particularly in Tech Sprints. Sprints provide a long window to test performance not just for accuracy, but for how that performance is retained when you switch to a new data set. We look at an application and determine how well it performs, how it preserves data security, and whether the algorithm's result is explainable. We want to test that a solution generalizes: that it maintains accuracy from one data set to another.
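The generalization check described above can be reduced to a simple comparison: measure accuracy on the development data set, measure it again on a new data set, and see how much is retained. The sketch below is a minimal illustration under invented names and thresholds; it is not the VA's actual evaluation criteria.

```python
# Illustrative sketch only: checking how well a model's accuracy is
# retained on a new data set. Names, data, and the retention threshold
# are hypothetical, not the VA's written criteria.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def generalizes(dev_acc, new_acc, min_retention=0.9):
    """True if the new data set retains at least `min_retention`
    of the accuracy observed on the development data set."""
    return new_acc >= min_retention * dev_acc

# Hypothetical predictions vs. ground truth on two data sets.
dev_acc = accuracy([1, 0, 1, 1], [1, 0, 1, 0])   # 3 of 4 correct → 0.75
new_acc = accuracy([1, 0, 0, 1], [1, 0, 1, 1])   # 3 of 4 correct → 0.75

print(generalizes(dev_acc, new_acc))  # → True
```

A real evaluation would use proper metrics for the task (sensitivity, specificity, calibration) rather than raw accuracy, but the retention comparison works the same way.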
We also look at privacy and make sure patient data usage follows security best practices, such as using HIPAA-compliant systems. We can reduce risk by ensuring, for instance, that an algorithm accesses only the information relevant to the decision being made.
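That kind of data minimization can be enforced with a field whitelist: the algorithm receives only the fields relevant to the decision, never the full record. The sketch below is a minimal illustration; the field names and record layout are invented for this example.

```python
# Illustrative sketch only: restricting an algorithm's view of a patient
# record to the fields relevant to one decision. Field names are invented.

RELEVANT_FIELDS = {"age", "current_medications", "allergies"}

def minimize(record, relevant=RELEVANT_FIELDS):
    """Return a copy of the record containing only whitelisted fields."""
    return {k: v for k, v in record.items() if k in relevant}

patient = {
    "name": "Jane Doe",        # identifying data the algorithm never sees
    "ssn": "xxx-xx-xxxx",
    "age": 54,
    "current_medications": ["lisinopril"],
    "allergies": ["penicillin"],
}

print(sorted(minimize(patient)))  # → ['age', 'allergies', 'current_medications']
```

Keeping the whitelist explicit per decision makes the reduced-risk property auditable: anyone reviewing the algorithm can see exactly which fields it could ever read.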
Alterovitz: When you think about implementing AI, it comes down to a number of different factors. I would give a range, depending on what underlying resources are needed. With COVID-19, things were sped up to get online within days to weeks. But things that require new and additional resources can take months to years.
For example, we have an AI-powered clinical trial search program, which matches patients to trials they might qualify for. That program took a few months to create through the AI Tech Sprints, then another two to three months to get access to production-level data, and then more time to scale it on the computing resources. It was less than a year from the idea to using it, and it became one of the first five apps in the VA application store.
It depends on each use case, but we’re trying to make the longer end of things a little shorter.
Fricks: We’ve really been able to accelerate some important steps in vetting new ideas. There’s a wealth of ingenuity in the healthcare industry today. We had 60 different teams with very different capabilities apply to the most recent sprint. The sprint helped narrow it down to 32, but there are still many steps between building great prototypes and implementation.
Alterovitz: It’s really multidisciplinary, involving work across different offices. One of the exciting things is that it brings together people from different backgrounds, but at the same time it’s a challenge to make sure you have all the right people at the table to leverage their capabilities.
Fricks: Some of the important stakeholders are the providers themselves. In vetting new ideas we find subject matter experts within the VA that can speak to patient care, and whether the proposal is a sound idea that they want to use in their practice. Provider buy-in from the beginning makes it much more likely the innovation will be a welcome addition clinically. Sometimes you can go through all this trouble and have something that is not used. Our goal is to prevent that from happening.
Alterovitz: AI raises some issues that are unique to it. How do you deal with the ethical and other questions that can arise? You hear a voice on the phone: is it a person or not? Is that something you have to inform the patient about? Questions like these are examined before an application is deployed.
And with every new application, there is the adoption curve. There are early adopters and once it’s trusted, you get more. We spend time communicating what AI is and what it is not. We’ve started a community online so that people can learn about AI and see the possibilities of what it can do today.
Fricks: It’s definitely a team effort. We lean on other VA personnel to speak to how a solution might affect their practices. We want to make sure we have their buy-in early.
Another challenge is sorting through the overwhelming volume of ideas. During the sprints we meet with teams weekly and set milestones. We also aim to match each team with a subject matter expert that can provide one-on-one feedback. Finally, each prototype receives a full evaluation by a panel of four experts who provide an independent perspective on the idea. To work with 32 teams we brought on many clinician volunteers to consult. There are too many good ideas out there for any single person to find all of them.
Fricks: It helps to get involved early in the development phase—working toward a mutually agreed direction rather than tailoring after the fact. For rapid prototyping, it’s helpful to have an idea of the data you have on hand, or what data is publicly available. Ask in advance about your organization’s policies on privacy and security. As you move forward, if everyone agrees to the same standards and best practices you are less likely to have a bumpy ride down the road.
Alterovitz: Thinking and designing from the beginning with an eye toward trustworthiness and scaling is really helpful. We’re at a point in time where AI is new, and whenever technology is new, it needs to be trusted before it’s adopted. It needs to be done in a way that promotes adoption from the beginning. If you do it later, it becomes much harder.
Fricks: To add to that, it’s important that researchers publish their methods and there is continuing education on these tools for providers. One of the things the NAII is working on is a workforce certification program, which will be shared through the AI@VA community. This will help us teach clinicians the different considerations for AI and give them a foundation in AI techniques. When a new process is deployed, we also launch an education campaign to explain why, how, and when to use it.
The views and opinions expressed in this content or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.