Transcript: Artificial Intelligence-Driven Clinical Decision Support for Cancer Care: Where Are We Now?
Gordon Kuntz: Welcome to Oncology Innovations, a Journal of Clinical Pathways podcast focused on candid discussions with innovators aiming to advance quality and value in the cancer care ecosystem.
I'm your host, Gordon Kuntz. I'm a consultant with almost 20 years' experience in oncology clinical pathways and the business of oncology. I've worked with oncology practices, pharma, payers, GPOs, and pathway developers: basically every aspect of the oncology ecosystem. And I'm really excited about today's podcast.
We're joined today by Carole Tremonti, VP of clinical strategy with Project Ronin. Project Ronin is an AI-driven EMR and decision support platform for oncology. I've had the pleasure of knowing Carole for several years, having met a few years ago when she was head of product development for Dana-Farber Cancer Institute's pathways program, where she led their focus on measurement of pathways use and outcomes. Carole is also a frequent speaker at the Oncology Clinical Pathways Congress. Welcome, Carole.
Carole Tremonti: Thank you. I'm happy to be here today, Gordon.
Gordon Kuntz: Great. Today we're going to be exploring decision support and artificial intelligence. I am so interested in this topic. Of course, IBM Watson launched an AI-driven decision support tool in conjunction with Memorial Sloan Kettering with great fanfare a few years back, at least on the front end. So I'm very interested in hearing your perspective on these technologies and the use cases now.
Let's start off with some definitions.
Carole Tremonti: Sure.
Gordon Kuntz: How would you define clinical decision support, and then how do you define artificial intelligence in your world?
Carole Tremonti: Yep, absolutely. They go together very much and very appropriately. I like the CDC's definition of clinical decision support: clinical decision support systems, or CDSS as we call them, are essentially computer-based programs that analyze data within an electronic health record to provide prompts, reminders, and suggestions to aid in the delivery of health care. So that's great. How do you do that? That's where artificial intelligence comes in. AI in health care is really an umbrella term that describes the application of machine learning algorithms and other cognitive technologies in medical settings. To put it very simply, AI is when computers and other machines mimic how our brains think, and are actually capable of learning, thinking, and making decisions or taking action. And they do this by learning patterns repeatedly, through analyzing massive amounts of data.
Gordon Kuntz: Excellent. So how do you see clinical decision support being used in clinical care? And before you answer, we obviously have had pathways around for 15-20 years, how do you see that happening and how do you see that evolving?
Carole Tremonti: Yeah, I think it's critical, honestly. I think clinical decision support is critical in clinical care for a number of reasons. The first is that data is everywhere. In making decisions about everything now, we're really flooded with information, and never more so than in health care. And why that's critical is that you and I can take about five facts into our brains to make a decision. After that, poof, we sort of explode. It's overload. But take cancer, for example. Making an accurate diagnosis of a cancer condition requires assimilating over 100 facts. We can't do it. We really cognitively can't do it. So that's the first place where clinical decision support is helpful, especially, again, thinking about cancer. Sixty-five percent of cancer cases are diagnosed and treated in the community setting, and a community oncologist sees multiple types of cancer, so it's very difficult for any individual, even a specialist, to keep up with the information, and harder still for a generalist to know what they need to know about any given cancer condition. Take that to cardiology, pulmonology, diabetes, mental health, and the problem compounds. So there is ever more opportunity for decision support, especially as the rate of publication keeps increasing substantially.
Gordon Kuntz: Right. In your experience, what are the barriers to adoption of clinical decision support? I know you've been involved with Dana-Farber previously, and certainly saw them implement, at the time, via pathways, and do a lot of work on that. So what do you see as the barriers?
Carole Tremonti: There are so many. I think the first is fear. We jump to thinking about the Terminator, or The Matrix, machines taking over the world and doing everything for us. So specifically in medicine, which is an art, there's absolutely a fear of machines. But machines can only take us so far. That leads to the second barrier, which is a lack of real understanding. In no way are clinical decision support tools ever meant to replace the intelligence and understanding and vast knowledge of an individual clinician. They're meant to augment it, to bring things together. So a lack of education and a lack of understanding of how they work and what they take into account to help drive or steer someone in one direction or another, I think, really is at the front end of it. And those of us who work in this space, or started this space, have been fighting an uphill battle the entire way. When we started it at Dana-Farber, we probably spent eight years in this dialogue of, "You're not telling me what to do." And I said, "We know. We aren't telling you what to do. We don't want to tell you what to do."
Gordon Kuntz: That's right.
Carole Tremonti: But our egos, and the stories we tell ourselves, are there, and they're real and valid, and you have to address and acknowledge them, because it's an emotional conversation. While you may not have intended to say something, the person's reaction is valid. So we do have to pay attention to that, be aware of people's fears, and then work with them in applying decision support tools.
Gordon Kuntz: Yeah. No, that's absolutely true. It's funny, because the comments I've heard, exactly what you said, "You're taking away my decision-making ability," "It's all about the art of medicine," those sorts of things, come from people who don't use pathways. I've never heard somebody who actually uses pathways on an active basis say, "Yeah, I have to do it, but it does these things." It's always the people whose institutions haven't stepped into it, or who have chosen not to fully engage within their institution. I think once they can cross that threshold, it does open up the opportunities for them.
Carole Tremonti: Absolutely. And again, back to individuals who may have used decision support tools and those who haven't: think about how we're trained, and the fact that in medicine there are few absolute facts. COVID taught us that; we were learning how to treat and manage the disease as we went. To that point, roughly 50% of our knowledge is based on scientific fact, 25% is based on where we were trained, and the other 25% is based on our experience treating patients. So in and of itself, our muscle memory is flawed to a degree, and could certainly use some support through other mechanisms. And they're not clinical decision tools. They're clinical decision support tools. They're there to help support your clinical decision making, not to make your decisions for you.
Gordon Kuntz: That's a very important distinction. So what do you see as the incentives that clinicians have to actually embrace this? There are lots of barriers, but what are the incentives that might get them to step over that threshold?
Carole Tremonti: Yeah. There are many, and there are a lot of really ripe opportunities. The first thing many people go to is that you have to incentivize clinicians, and incentivize typically means financially, but throwing money at a problem never actually solves it; it often contributes to it. Imagine if people paid me to do every little thing; then I wouldn't do anything unless I was paid to do it. So I think money is an option. It does help, especially if there's a workload barrier where you have to remember to use something; it can help in the initial rollout. But really, the incentive is that people are coming around to understanding and using technology. Technology hasn't really been adopted in health care; we're about 20 years behind the application of technology in other spaces. We all know that if we think about something, it suddenly shows up in our Amazon or Google results on the side.
I would also say innovation: the frustration that asks, "How can we do things differently and better?" That comes because so much is coming at us. Our world moves so quickly. There are questions that need to be answered very quickly by individuals with limited access to support, and most of all, without time. And time is so precious. That is a space in which I've observed people saying, "Can you help me do this more quickly? Can you help me decrease the amount of time I need to go find information to help me make a decision?"
Those are spaces in which decision support tools are very effective. The final one is the ultimate pillar: a higher quality of care. Actually being able to measure it, to prove it, to show that with these decisions, these outcomes happened; with this group of medications or these interventions, this happened. So let's make an informed decision and let data help drive our decision making.
Gordon Kuntz: Right. It seems, again, that physicians, being people, respond to the same incentives everybody does, but they also have the very strong incentive that got them into medicine in the first place, which is taking care of patients. Like you said at the end, there's the ability to identify, in many clinical decision support systems, a clinical trial that might be applicable for a patient, or a treatment they aren't very familiar with, or to treat a patient whose disease they aren't very familiar with. And there's the ability to keep working with a patient they've built a relationship with in their practice, because clinical decision support extends and amplifies their knowledge at every level without them having to go do the primary research themselves. To me, that's one of the ultimate incentives physicians really have.
Carole Tremonti: I agree with you entirely. It's also the biggest opportunity that this space has. I think a mistake that we've made that has not helped is we've said, "these are clinical decision support tools." We haven't said, "Wouldn't it be great if you walked in and had all the information in front of you that you needed to decide a treatment for a patient?" How we've introduced and how we've tried to have this conversation with clinicians, I think we've gone about it the wrong way. We haven't invited them to the table.
Gordon Kuntz: Yeah. Good point. What do you see as the role of clinic leadership in the decision to implement, and frankly, in the ongoing onboarding that occurs?
Carole Tremonti: Yeah.
Gordon Kuntz: I mean, okay, so we bought the system. Great. There it sits. Nobody's using it. Now what? How do you see clinical leadership being an integral part of that?
Carole Tremonti: It's pivotal. It's everything. If you don't have leadership at the very top of the organization believing in what you're doing and endorsing it, then it doesn't work, because otherwise you're trying to use favors or relationships to get people on board. You need your chief medical officer. You need your CEO to say, "We are doing this, and this is why," and then to also say, "We're not making decisions for you at all; we're trying to help streamline your work." You have to have that. You then need your chief medical officers to support the different areas in which you're applying decision support, to employ champions everywhere you are to come along with you, and to engage super users. As you're implementing these tools, bring these people with you to the decision-making table, about "I want it to be red, not blue." The more people are involved in the process, understand it, are educated about it, and can speak to others about it, the better it works. But that begins at the top.
Gordon Kuntz: Yeah. Yeah. That's good. What do you see that drives clinicians away from using clinical decision support tools?
Carole Tremonti: I would say the first thing is a concern for safety, a concern for patient safety and for an error to be made. It's like if you haven't baked the cake, you might be skeptical about it because you don't know what's in it. Something to that degree. I think that is a very big thing. And that comes from two places: a lack of trust, just a lack of inherent trust, be it of the system or of technology, and that comes from a lack of understanding of how something works. I think that is another error in many decision support tools: we just say, "This is a decision support tool," but there's not really a transparent understanding or educational component of, this is how it works, this is what's behind what you see on your screen.
That's actually critical in my current role at my company, Project Ronin. We believe transparency in how AI and machine learning work is critical to the success of the application. The FDA says the same about clinical decision support tools: if you're going to use them, you need to be really transparent about how they work so that people can understand them and question them. That's how things get better. Question it.
Gordon Kuntz: Yeah. Yeah. Especially, it's one thing to say there's a group of your peers who are making these decisions collaboratively, it's another to say, we have this technology that you may not understand that well, that's actually accumulating this knowledge and presenting it. I think it's seen as more of a black box when it's something like AI that may not be well understood across the physician population at this point.
Carole Tremonti: I agree. Though in contrast, I also think the same argument could be made about a decision made by a group of your peers. Not everyone is like... "Well, who's the peer? What organization did they come from? Where did this..." And again, going back to "you can only manage what you can measure," if you come at it from a perspective of data: it came from this publication, not from this person. It came from this publication. It came because 6,000 patients with the same condition had the same outcome. If you make it not about an individual, but instead about data and effect, which is the same thing a clinical trial rests on, then you're adding credibility to your decision support tools.
Gordon Kuntz: Right. Right. What's missing from these clinical decision support tools? For example, an oncology treatment pathway platform.
Carole Tremonti: Yeah. We started pathways and I know you're an old soul in the business.
Gordon Kuntz: Oh, thanks.
Carole Tremonti: I just said, just like I am. Those of us who've been fighting for decision support in cancer pathways for 15, 20 years, we haven't really evolved too much in how we think about this. Right?
Gordon Kuntz: No we have not.
Carole Tremonti: We haven't changed. We got it done, but we haven't changed it. I think what's missing now is huge; there's nothing but opportunity. There's patient context. Sure, we can say what's going to make you live the longest, what's going to have the least symptom burden, all things being equal, and the third pillar, what might be most cost-effective. But what we don't know are the orthogonal factors that may influence a decision or may influence a patient's life, and that may push a treatment in one direction or another.
Do they have to work full time, so can they come in for an infusion three days a week? Do they have a car? Social determinants of health, and other components like them. Or, are you 70 versus 40? Are you male or female? Little things like that. Do you exercise a lot or not? Do you have diabetes or not? There are a lot of other variables that aren't coming in, and they are huge opportunities for us, not only to support decision making, but to understand populations.
Then we can really better understand the impact of treatment on populations, because right now we're making decisions based on a clinical trial. Well, we all know clinical trial data is old. That clinical trial could be 5, 10, 15, 20 years old. And we're different now. Some of us are wider. Some of us are more active. Some of us are more sedentary because we've been working from home for three years. Lives change pretty rapidly, so the humans receiving the treatment today aren't the same humans who were on the clinical trial.
Gordon Kuntz: Well, and in addition, clinical trial subjects are fundamentally different, not just from a demographic standpoint; they're often treated differently than patients in a real-world setting. It might be an academic setting: you're getting tons of support, you've got constant monitoring. Whereas in more of a community setting, patients unfortunately do miss appointments, or have things going on in their lives outside, or miss doses of an oral medication. Yeah, that'll get you excluded from a trial. Those things happen in the real world, so the trial doesn't reflect the nature of actual cancer care.
Carole Tremonti: Yeah. And those things are real. I mean, if you look at the ASCO/ONS Chemotherapy Administration Safety Standards, one of the criteria for oral chemotherapy is basically, will the person take the drug? And how do you measure that? Where is that measured? And how is a clinician positioned to judge that? I don't know, Gordon, you look like you might take your pills every day. Sure, let's try it out. But things like that are important, and they need to come into our decisions.
Gordon Kuntz: Do you think that decision support tools powered by AI have the ability to do that? Because as I've talked to clinicians about some of those exact issues, the sentiment seems to be that there comes a point where you just need to let the physician decide, and they'll take all of that into account. But even if they can, there's that whole issue of cognitive burden and just being able to. So is that one of the ways you would see CDS evolving, and is that something AI can support?
Carole Tremonti: Yeah. It's critical, and it's why the time is right to bring AI and machine learning algorithms into this. In a medical record, 80% of the information in our EMRs is unstructured. Despite all the discrete fields that are available, 80% of it is unstructured. And there are a lot of different people who document, a lot of different personas and roles who document about patients, so there's a lot of really rich information in there. AI can be used to pull this information out and help use it to support our decision making. Doing that requires a lot of work and very sophisticated application of natural language processing techniques, and creating really robust ontologies and data models. But in doing so, you can apply machine learning algorithms to identify patients who may have trouble swallowing pills, because it might have been documented in a speech pathologist's note, or a speech therapist's note, or a social worker's note that a clinician might not have ever read or seen.
I mean, our EMRs actually put on blinders: depending on your role, you can only view certain parts of the record. And to your point, it's cognitive overload; there's a lot of information. So if you could take machine learning algorithms, look at entire populations, and literally find information that you can add to the decision-making pot, that's extremely powerful. You'd be able to look at trends over time, little subtleties or changes in, say, a patient's mood. In provider notes, but also in patient documentation and nursing documentation, you can see mood and sentiment changes, so you can see depression coming, you can see anxiety coming. All of that is possible with machine learning.
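To make that concrete, here is a minimal, hypothetical sketch of the idea Carole describes: scanning free-text notes from any role for a signal such as trouble swallowing pills. The patient records, signal names, and keyword patterns are all illustrative assumptions, and a production system would use real natural language processing with negation handling and clinical ontologies rather than simple pattern matching.

```python
# Illustrative sketch only (not Project Ronin's pipeline): scan free-text notes
# from any documenting role for signals a treating clinician may never have seen.
import re
from collections import defaultdict

# Hypothetical note records: (patient_id, author_role, note_text)
notes = [
    ("pt-001", "speech_pathologist", "Patient reports difficulty swallowing pills; prefers liquids."),
    ("pt-001", "oncologist", "Plan: start oral therapy next cycle."),
    ("pt-002", "social_worker", "No transportation; relies on neighbor for rides."),
]

# Simple surface patterns standing in for a trained extraction model.
SIGNALS = {
    "dysphagia_risk": re.compile(r"(difficulty|trouble)\s+swallowing", re.IGNORECASE),
    "transport_barrier": re.compile(r"no transportation|relies on .* for rides", re.IGNORECASE),
}

def extract_signals(note_records):
    """Return {patient_id: set of signal names found anywhere in that patient's notes}."""
    found = defaultdict(set)
    for patient_id, role, text in note_records:
        for signal, pattern in SIGNALS.items():
            if pattern.search(text):
                found[patient_id].add(signal)
    return found

signals = extract_signals(notes)
# pt-001 is flagged for dysphagia_risk even though the flag came from a
# speech pathologist's note the oncologist may never have opened.
print(dict(signals))
```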
Gordon Kuntz: And how much can you bring in things like patient-reported outcomes? Because the clinical team only knows directly about what happens when the patient is sitting in front of them, but obviously several days or more pass between those visits. How do you gather that in an effective way and integrate that information?
Carole Tremonti: Yeah, it's the missing piece, and it's actually the piece we bring to the table at Project Ronin. It's why I joined Project Ronin, and what I think is the most important thing. So 99% of what happens to a patient happens when they're not in front of their physician. Our approach to health care is very episodic, but it's that space in between where people get into trouble that takes them to the emergency department or into a hospitalization. So we need to know what's happening to them at home. But patient-reported outcomes, as they're traditionally measured, are very standardized. They're very dry. They're not conditional, "if this, then that." So the call is to use sophisticated tools. We all have a phone. We have a really sophisticated application where we prompt people during their treatment, three times a week or however often we want: how are you feeling? What symptoms are you experiencing?
And then the key to engaging a patient is asking them these questions, like the standardized PRO-CTCAE criteria, but in a subjective yet very patient-friendly way: how are you feeling today? But then you ask the additional questions that clinicians would ask all the time, because you need to translate the subjective into the objective. "Oh, this is happening to me very frequently." Is that four times? Is that five times? You ask in a way in which we can measure it, because if you can measure it, then that orthogonal picture of what's happening to a patient becomes part of the decision support matrix. We're bringing in what's happening to a human, we're seeing it, and we're catching signals. We weight signals by the severity of what a patient reports. So we're also finding a person who's in trouble before they're in trouble. Then we're watching trends. Then we're watching ED rates.
I think patient-reported outcomes are the key. All a patient wants is some semblance of control, and they want to participate. So give them a seat at the table. Don't give them a seat at the table when they're in the waiting room and you ask them to fill out 20 forms that you don't read, don't look at, and don't incorporate into your decision making. Ask them when they're at home and thinking a million questions.
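As an illustration of the subjective-to-objective translation Carole describes, here is a small sketch, loosely in the spirit of PRO-CTCAE frequency and severity items rather than the real instrument, that converts a patient's words into numeric scores and flags a worsening trend between check-ins. The scoring maps and threshold are assumptions, not Project Ronin's actual logic.

```python
# Illustrative sketch: map subjective PRO answers to numbers and flag worsening trends.
FREQUENCY_SCORES = {"never": 0, "rarely": 1, "occasionally": 2, "frequently": 3, "almost constantly": 4}
SEVERITY_SCORES = {"none": 0, "mild": 1, "moderate": 2, "severe": 3, "very severe": 4}

def score_report(report):
    """Convert one symptom report {'symptom', 'frequency', 'severity'} into a single number."""
    return FREQUENCY_SCORES[report["frequency"]] + SEVERITY_SCORES[report["severity"]]

def flag_worsening(history, threshold=2):
    """Flag a symptom whose combined score rose by >= threshold between consecutive check-ins."""
    alerts = []
    for prev, curr in zip(history, history[1:]):
        delta = score_report(curr) - score_report(prev)
        if delta >= threshold:
            alerts.append((curr["symptom"], delta))
    return alerts

# Two hypothetical at-home check-ins for the same patient.
nausea_history = [
    {"symptom": "nausea", "frequency": "rarely", "severity": "mild"},
    {"symptom": "nausea", "frequency": "frequently", "severity": "moderate"},
]
print(flag_worsening(nausea_history))  # [('nausea', 3)] -> a trend worth a nurse's attention
```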
Gordon Kuntz: Right. Right. But it also sounds like the machine learning and AI tools can help to minimize alert fatigue, because all this data is firing in at clinicians. That's one of the challenges I've heard over the years with things like patient-reported outcomes. "We can do a dashboard and it can show you when your patient is a three or above," and it's like, oh my God. I hear from the clinicians, "If every one of my patients is a three or above, what am I going to do about it? I can't deal with a thousand red blinking lights." I think that's actually a really powerful part of it, isn't it? You can really isolate the patients who need a different level of service than they're getting: they need a physician to intervene, or a nurse, or a social worker, to help keep them on their path.
Carole Tremonti: It's finding the signal in what is otherwise a lot of data noise, and machine learning algorithms do this really, really effectively, because they can look at an individual, and they can look at groups of individuals as well, and see the difference between something that's mild and something that's actually really important. Because it isn't just one thing. It isn't just, "I reported a pain level of four," or a six, let's call it something higher. It's also, "Hey, that person's activity level has been dropping over the past two weeks."
Gordon Kuntz: Right.
Carole Tremonti: "Hey, their heart rate has been climbing which we can see on their Fitbit or their Apple Watch or their Oura Ring." it's bringing in the multiple variables, which a human cognitively, subtle nuances wouldn't pick up on, but an algorithm would. And then back to the trust and alert fatigue, when then you give that signal to a clinician, you have to show them what are the variables that came into this "ding," this like, "Hey, let me get your attention"? And these are the things that are all in the medical record and these are things that the patient's self-reporting, so that you can also trust. Then you also give the clinician opportunity to go, "I don't agree with that." "I don't agree with that." Because then the machine learns.
"Why don't you agree with it? Okay. Tell us why." And that's the human in the loop. You can't take the human out of the loop, even in a machine learning model, a machine is only as good as a machine. You need to understand how a human is cognitively taking in the information. And then critically, and I think this is something that is really missed in a lot of the conceptual frameworks of machine learning. Data today is only as good as data today. An algorithm is only as good as the way it's created today. It's got to learn. You have to constantly... We call it like sending our algorithms to medical school. You have to constantly feed it with updated data, updated source data, updated standard data, updated patient data, because patients look different. That part is actually really also critically important for AI being successful, is that these models can't be stagnant. They can never be stagnant. They always have to be evolving.
Gordon Kuntz: Yeah. So I know this is a really dated reference since it's 2022, but we've all seen 2001: A Space Odyssey, which seems really weird, because that was 21 years ago and we got HAL. So what is the risk of applying AI to clinical decision support tools? There's got to be some concern on people's part.
Carole Tremonti: Yeah. I think there's a tremendous amount of concern, going back to the very beginning: what if it's wrong? Especially because with AI, for lack of a better term, we can look prospectively, we can look into the future. If you say this might happen to a person in the next 30 days, well, what if you're wrong? What if you miss something? That is the fear and that is the worry, and it will happen. There will be an error, but like anything, just as with human error in health care delivery, we will see it and we will address it. I do think another risk is that we can get complacent. We as humans may become apathetic and say, "Oh, it's going to help me. It's going to know the right thing to do all the time." So there's definitely a significant responsibility in the application of AI.
Gordon Kuntz: It almost seems like, and I hate to say this, but it's almost better if there are small, non-life-threatening errors, obviously, because it helps to remind those using the systems that they still have a role. It's like a self-driving car: you can't sleep while the self-driving car is driving as if it were a train. That's not good. In the same way, the clinician can't be asleep at the wheel. They need to always be thinking critically, I would think, and understanding, as you said: where did this come from? Why am I getting this information? Is it something I think is credible and can rely on? Because, as you said before, it's not making the decision; it's supporting the decision. Which means the clinician can always say, "You know what? I understand what I'm seeing here, and I'm choosing not to do this because of X," which then goes into the feedback loop so that it does better, especially for that physician, the next time around.
Carole Tremonti: Yeah. You've got it; you hit it right on. As I said in the beginning part of our conversation, the clinician has to be part of the process. They are part of the process. They are absolutely part of how it learns: understanding it, making sure it is right and it is working. That's how we work with our cars. If something is not working, we get a little signal, but it's up to us to take it to the mechanic. It's a similar concept here. And the more we validate for ourselves, the more we self-validate and keep exercising our own minds, the better the entire process will be, and the more accurate it will be. And something else you made a comment on: sometimes small safety errors are helpful.
I worked a long time in quality and safety and really studied high reliability and highly reliable organizations, and organizations like that are obsessed with failure. That is a fundamental principle that must be applied to the use of AI in any domain, yes, health care of course, but in any domain: what's going to go wrong?
This might be the greatest thing in the world, but we should always be asking ourselves, and going into it thinking about the absolute worst, thinking everything through. I always call it covering your blind side: what's the thing that's going to take you down that you never saw coming? We should all think about that, because this is a very powerful thing, and everything powerful can be wonderful, but there can also be harm.
Gordon Kuntz: Yeah, absolutely. Well, Carole, thank you so much for your time. This has been wonderful. I've learned a lot, as always. And this wraps up our episode of Oncology Innovations. Thank you so much for joining us today, and as always, thank you to the Journal of Clinical Pathways for producing this episode. Please download, rate, review, and subscribe to the podcast. For more episodes, you can visit www.journalofclinicalpathways.com, or you can find us on Apple Podcasts, Google Podcasts, and Spotify. Also, be sure to share Oncology Innovations with a friend or colleague. Let me know your reactions to episodes, questions, or recommended topics for future episodes. You can find me on LinkedIn, or you can send an email to gordon@gordonkuntz.com to request specific topics and guests. Thanks again, Carole.
Carole Tremonti: Thank you.
Gordon Kuntz: See you next time.