
Applying Artificial Intelligence to Clinical Practice: Is Explainability Necessary?

Interview With Mintu Turakhia, MD, MAS

In this onsite interview at the Western Atrial Fibrillation (AFib) Symposium, we talk with Dr Mintu Turakhia, a cardiac electrophysiologist and professor of medicine at Stanford Medicine.

What can you tell us about your presentation at Western AFib? 

The topic was explainability in artificial intelligence (AI) and whether it is really necessary in clinical medicine. The premise is that much of the AI we apply in our field is based on pattern recognition: looking at an electrocardiogram (ECG) for arrhythmia-specific diagnoses, or finding more nuanced things that humans generally cannot see directly, such as risk of heart failure or risk of cardiomyopathy. When you do that with modern AI algorithms, you create black boxes, so you do not know what is driving the output. Therefore, the purpose of the talk was to discuss the pros and cons of, and approaches to, making that black box a little more transparent and explainable.

What are some of the take-home messages you would like viewers to leave with? 

Explainable AI is a very important part of artificial intelligence in healthcare, but I do not think it is necessary all the time. There are places where AI can help a physician improve workflow, speed up many tasks, and arrive at prediagnoses, such as reading ECGs and classifying rhythms pending physician review and approval. You do not need explainability there; it does its job. In fact, much of the AI out there can outperform the average physician in many, many studies at this point. Where you need explainability is when the output is highly consequential to a patient: when it says that a skin lesion is cancerous, that you have stage IV cancer, or that your 30-day, 60-day, or 10-year mortality is X. Now you are starting to make very consequential decisions, and you are going to need to know why, because a patient is going to want to know how you arrived at that decision. That is where explainability becomes much more important. And if and when we move to fully autonomous AI, where a clinician is out of the loop, that is going to need explainability as well.

What is your favorite part about Western AFib?

This is a fantastic meeting, because it is in a great location with all your friends, colleagues, and so many experts, so you cannot learn more in less time anywhere else. It is a group of colleagues who are all invested in patient outcomes, and the work here ranges from recapping the most recent literature from the past year or so and updates in professional society guidelines all the way to cutting-edge, hypothesis-generating experiments. You need that entire spectrum of evidence to keep all of us current in delivering care.
