From the Field

Shifting the Paradigm: The Impact of AI at the Point of Care in Cancer Treatment

June 2024

J Clin Pathways. 2024;10(3):23-25. doi:10.25270/jcp.2024.05.03

Artificial intelligence (AI) is one of the most impactful technologies of the modern era, bringing with it the potential for a massive paradigm shift in the way the health care system interacts with data. The use cases for AI-based tools are nearly limitless, from streamlining revenue cycle management and reducing documentation burden to predicting rising clinical risks and directly assisting with diagnostic activities.

AI is already being deployed in many of these areas. But as with most major technical advancements, it has been following the predictable pattern of the Gartner Hype Cycle,1 which includes an early period of rapidly inflated expectations that can quickly lead to disappointment if the technology fails to fully deliver on its initial promises. Most experts would currently place AI at the peak of inflated expectations, a point in the hype cycle marked by early successes alongside initial concerns and failures. While we have seen a tremendous amount of innovation in this space and significant capital investment, we are also starting to see a significant increase in concerns about the technology and how it will be used in health care. Organizations such as the Coalition for Health AI (CHAI)2 are emerging to address these issues and have begun to outline guidelines and guardrails to help AI in health care avoid the “trough of disillusionment.”1

Understanding this trajectory for emerging technology is crucial in health care, where the stakes of a misstep are incredibly high. This is especially important in complex clinical areas, like oncology, where scientific progress has been rapidly outpacing the human capacity to keep up with the latest breakthroughs and clinicians have been eager to adopt technologies that help them make the best possible decisions with their patients.

Developing a clear sense of what AI really is and how it can assist oncologists with decision-making at the point of care is essential for maximizing the utility of present-day technology while laying the groundwork for the incredible possibilities of the future.

Real-World Use Cases at the Point of Care for Cancer Treatment

Oncologists are leveraging AI tools in a variety of ways throughout the diagnostic, treatment, and follow-up processes, including identifying suspicious findings on imaging, developing risk scores to assist with treatment decisions, selecting precise therapies based on lab and pathology findings, and processing patient-generated data to detect potential adverse events.

In addition to assisting with purely clinical tasks, AI is also having a major impact on how oncologists engage with the enormous volumes of administrative, financial, and academic data required to make the best possible decision for each individual at each point in time.

For example, Flatiron Assist (FA), our clinical decision support platform that surfaces National Comprehensive Cancer Network (NCCN) guidelines and site-preferred pathways, leverages machine learning (ML) tools to support regimen mapping from current regimen libraries, saving the administrative team up to half an hour each time they need to map a regimen into the platform.

More specifically, given a regimen authored by an OncoEMR practice, FA is able to suggest the most relevant NCCN pathways to map to by (1) constructing a feature set covering all practice-authored and NCCN regimens, leveraging both in-house ML architectures and OpenAI’s GPT-3.5 Turbo, and (2) using that feature set to rank order NCCN regimens for any given practice-authored regimen. Using AI to take a first pass at the matching process, while giving human experts the opportunity to review the accuracy of the result, can reduce cognitive burden and free up resources for other essential clinical activities. To date, FA has supported the mapping of 791 regimens using ML, resulting in a total time savings of 395.5 hours, equivalent to nearly 10 weeks of full-time work (40 hours per week).
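To make the two-step approach concrete, the sketch below rank orders a library of NCCN regimens against a practice-authored regimen. It is a simplified illustration of the general technique rather than FA’s actual implementation: where the real system derives richer features with in-house ML models and GPT-3.5 Turbo, here a plain set of normalized drug names stands in as the feature set, and Jaccard overlap stands in as the ranking score.

# Illustrative sketch of rank-ordering NCCN regimens for a practice-authored
# regimen. All names and the scoring logic are simplified assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Regimen:
    name: str
    drugs: frozenset  # normalized drug names; a stand-in for the real feature set

def similarity(a: Regimen, b: Regimen) -> float:
    """Jaccard overlap of drug sets -- a placeholder for the real feature match."""
    if not a.drugs or not b.drugs:
        return 0.0
    return len(a.drugs & b.drugs) / len(a.drugs | b.drugs)

def rank_nccn_candidates(practice: Regimen, nccn_library: list) -> list:
    """Order NCCN regimens by similarity so a human expert can review the top suggestions."""
    scored = [(r.name, similarity(practice, r)) for r in nccn_library]
    return sorted(scored, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    practice = Regimen("FOLFOX (practice-authored)",
                       frozenset({"oxaliplatin", "leucovorin", "fluorouracil"}))
    nccn_library = [
        Regimen("mFOLFOX6", frozenset({"oxaliplatin", "leucovorin", "fluorouracil"})),
        Regimen("FOLFIRI", frozenset({"irinotecan", "leucovorin", "fluorouracil"})),
        Regimen("CAPOX", frozenset({"oxaliplatin", "capecitabine"})),
    ]
    for name, score in rank_nccn_candidates(practice, nccn_library):
        print(f"{name}: {score:.2f}")

The key design point is that the model only proposes an ordering; the mapping itself is still confirmed by a human reviewer, which is what keeps the time savings from coming at the cost of accuracy.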

The Regulatory Landscape and Its Role in Protecting Patients

As with many new and disruptive technologies, federal regulators have wasted no time in trying to develop plans to appropriately regulate these tools. For example, the recently finalized HTI-1 rule from the Office of the National Coordinator for Health Information Technology and Department of Health and Human Services directly impacts certified health care information technology vendors and aims to regulate what they describe as “predictive decision support intervention (DSI)” tools.3 These are tools that support decision-making by learning or deriving relationships to produce an output. This regulation focuses on transparency around source attributions for the tools being built and also allows for a closed-loop communication system whereby users of the DSI can provide feedback to developers.
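As an illustration of what this transparency might look like in practice, a vendor could surface source-attribute metadata for a predictive DSI along the lines of the sketch below. The field names and values here are our own illustrative assumptions, not the rule’s enumerated attribute list.

# Illustrative sketch of source-attribute metadata a predictive DSI vendor
# might expose to users; field names are hypothetical, not HTI-1's exact list.
dsi_source_attributes = {
    "developer": "Example Vendor, Inc.",
    "intended_use": "Suggest candidate NCCN pathway mappings for expert review",
    "out_of_scope_uses": ["Autonomous treatment selection"],
    "training_data": "De-identified practice regimen records",
    "bias_evaluation": "Output quality compared across patient subgroups",
    "validation": "Mappings reviewed against a curated reference standard",
    "update_cadence": "Refreshed as NCCN guidelines change",
    "feedback_channel": "In-app form routed to the development team",  # closed loop
}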

However, the majority of regulatory efforts at this time are falling to the US Food and Drug Administration (FDA), which is responsible for regulating solutions that fall under the category of Software as a Medical Device (SaMD).4 Currently, the SaMD regulatory pathway is intended for software products that are not directly associated with a medical device and instead are used independently and specifically for medical purposes. While the original definitions did not include references to AI, recent publications from the FDA5 discuss the coordinated and more comprehensive approach between the various centers of the FDA to develop a regulatory framework for AI. The main areas of focus include fostering collaboration to safeguard public health; advancing regulatory approaches that still foster innovation; promoting standards, guidelines, and best practices; and supporting research to evaluate and monitor AI performance.

Other industry stakeholders are also publishing guidance on this topic in the hopes of driving federal agencies to create meaningful regulations. In the US, entities like CHAI6 have published implementation and user guides for AI in health care, building on work from the National Institute of Standards and Technology (NIST), which created a robust risk-management framework for this topic.7 The NIST framework seeks to balance innovation and risk tolerance against tangible risks to public health.

Evaluating AI Models for Today and Tomorrow in Oncology Care

AI will continue to evolve rapidly and push the boundaries of what’s possible with the assistance of sophisticated digital tools. Careful evaluation of AI tools and the companies behind them will prevent oncology practices from getting caught up in the potential hype and ensure that new models provide a tangible return on investment for internal users and patients alike.

AI-specific quality frameworks are now emerging, based on traditional methodologies, to assist with evaluating these developing technologies. Since this is a new area of exploration for health care, the National Academy of Medicine recently published a draft Code of Conduct framework8 that outlines the importance of themes like safety, equity, effectiveness, accessibility, and transparency.

While these principles are crucial, it is important for health care leaders to be equipped with a more specific framework for assessing these tools. To supplement these efforts, we have developed a series of prompts for stakeholders to use as they evaluate a potential new solution to add to their technology suite; one way to consolidate the prompts into a working checklist is sketched after the final prompt below.

What Is the Use Case?

Be clear about who is meant to use this product and in what situation. Make sure to consider how a new tool will affect the existing workflow and downstream activities that may be impacted by altering the decision-making process. By understanding the proposed workflow and the risks associated with it, health care leaders can take a risk-based approach to assessing the tool and its functionality.

How Was the Model Data Validated and How Will It Be Monitored?

Ask questions about where the data is coming from, what data quality standards are in place, what methods are being used to identify potential variances or biases in the results, and how developers are monitoring quality and safety concerns. Understand whether the output of the model was validated against a high-quality reference standard.

The potential for AI to exacerbate existing inequities in health care is a real risk that has been demonstrated in the literature. For example, one recent study showed that an algorithm used widely in the US health care system to guide health decisions potentially furthered existing inequities.9 The algorithm estimated population-level risk and consistently assigned Black patients a lower risk level than their actual health status warranted, leading to an under-allocation of resources for those patients. It is important to ask potential vendors how they assess, measure, and address potential bias in their AI systems in order to ensure issues like this do not persist and propagate through the system.
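One practical way to probe for this failure mode is to compare the model’s mean predicted risk against a trusted measure of actual need within each subgroup. The sketch below is a minimal illustration, assuming predicted scores and ground-truth need labels are available; a persistent gap for one group signals the kind of miscalibration the cited study found.

# Minimal sketch: mean predicted risk vs. mean actual need, by subgroup.
# Systematic under-prediction for one group shows up as a gap in its row.
from collections import defaultdict

def subgroup_calibration(records):
    """records: iterable of (group, predicted_risk, actual_need) tuples."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # group -> [pred_sum, need_sum, n]
    for group, pred, need in records:
        entry = sums[group]
        entry[0] += pred
        entry[1] += need
        entry[2] += 1
    return {g: (p / n, a / n) for g, (p, a, n) in sums.items()}

if __name__ == "__main__":
    # Hypothetical data: group B has the same actual need but lower scores.
    data = [("A", 0.62, 0.60), ("A", 0.58, 0.61),
            ("B", 0.41, 0.60), ("B", 0.44, 0.62)]
    for group, (mean_pred, mean_need) in subgroup_calibration(data).items():
        print(f"group {group}: predicted {mean_pred:.2f}, actual need {mean_need:.2f}")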

How Is the Model “Refreshed”?

Ask about how often the dataset is updated and augmented to ensure timeliness, accuracy, and completeness, especially in use cases where clinical knowledge changes quickly or new regulations and guidelines are published on a regular basis.

What Is the Implementation Plan for the Tool?

Be certain to proactively collaborate with all relevant stakeholders, including end users who may require targeted education and training. Work closely with the developer and with end users to ensure appropriate use of the tool and create a nonpunitive feedback mechanism to raise any issues or concerns.
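Bringing the four prompts together, the sketch below shows one way a practice might encode them as a reusable evaluation rubric. It is our own minimal illustration rather than a published standard; the fields and the gating rule are assumptions each team should adapt to its own risk tolerance.

# Minimal sketch of the four evaluation prompts as a structured rubric;
# the field names and the pilot gate are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class VendorEvaluation:
    vendor: str
    use_case: str = ""             # Who uses it, where in the workflow?
    validation_summary: str = ""   # Data sources, reference standard, bias checks
    refresh_cadence: str = ""      # How often data and models are updated
    implementation_plan: str = ""  # Training, rollout, nonpunitive feedback loop
    open_concerns: list = field(default_factory=list)

    def ready_for_pilot(self) -> bool:
        """Deliberately strict gate: every prompt answered, no open concerns."""
        answered = all([self.use_case, self.validation_summary,
                        self.refresh_cadence, self.implementation_plan])
        return answered and not self.open_concerns

if __name__ == "__main__":
    evaluation = VendorEvaluation(
        vendor="Example AI Co.",
        use_case="Drafts NCCN pathway suggestions for pharmacist review",
        validation_summary="Validated against a curated reference standard",
        refresh_cadence="Updated with each NCCN guideline release",
        implementation_plan="Onboarding training plus an in-app feedback form",
    )
    print("Ready for pilot:", evaluation.ready_for_pilot())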

Conclusion

As AI continues to change the paradigm of clinical care, oncologists can maintain their well-founded optimism about how AI can augment point-of-care activities by asking the right questions and making informed decisions about how to leverage AI to enhance their processes. By using this knowledge, oncology leaders can contribute to the health care system’s ongoing experience with AI while seeing measurable benefits for their own clinics, providers, and patients.

Author Information

Authors: 

 Stephen Speicher, MD, MS; Will Shapiro; Taylor Dias-Foundas

Affiliations: 

Flatiron Health, New York, NY

Address correspondence to: 

Stephen Speicher, MD, MS

Flatiron Health

233 Spring St

New York, NY 10013

Email: stephen.speicher@flatiron.com

Disclosures: 

S.S., W.S., and T.D.-F. reported being employees of and owning stock in Flatiron Health.

References

1. King S, Judhi P. Assessing generative A.I. through the lens of the 2023 Gartner Hype Cycle for emerging technologies: a collaborative autoethnography. Front Educ. Published online December 1, 2023. doi:10.3389/feduc.2023.1300391

2. Coalition for Health AI. Accessed May 6, 2024. https://www.coalitionforhealthai.org/

3. Department of Health and Human Services. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing. Federal Register. January 9, 2024. Accessed May 6, 2024. https://www.federalregister.gov/documents/2024/01/09/2023-28857/health-data-technology-and-interoperability-certification-program-updates-algorithm-transparency-and

4. US Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. April 2, 2019. Accessed May 6, 2024. https://www.fda.gov/media/122535/download?attachment

5. US Food and Drug Administration. Artificial intelligence & medical products: how CBER, CDER, CDRH, and OCP are working together. Updated March 15, 2024. Accessed May 6, 2024. https://www.fda.gov/media/177030/download?attachment

6. Coalition for Health AI. Blueprint for trustworthy AI implementation guidance and assurance for health. April 4, 2023. Accessed May 6, 2024. https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf

7. National Institute of Standards and Technology. Artificial intelligence risk management framework (AI RMF 1.0). January 2023. Accessed May 6, 2024. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

8. Adams L, Fontaine E, Lin S, et al. Artificial intelligence in health, health care, and biomedical science: an AI code of conduct principles and commitments discussion draft. National Academy of Medicine. April 8, 2024. Accessed May 6, 2024. https://nam.edu/artificial-intelligence-in-health-health-care-and-biomedical-science-an-ai-code-of-conduct-principles-and-commitments-discussion-draft/

9. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
