Artificial Intelligence in Medicine, Cardiology, and Cardiac Electrophysiology

Podcast Discussion With Lior Jankelson, MD, PhD, and Sumit Chopra

Podcast discussion edited by Jodie Elrod

In this episode of The EP Edit, we are highlighting a discussion on artificial intelligence (AI) in medicine, cardiology, and cardiac electrophysiology (EP). Lior Jankelson, MD, PhD, is the director of the Inherited Arrhythmia Program, principal investigator (PI) of the Computational Cardiology Research Lab, and associate professor of medicine at NYU Langone Health. Sumit Chopra is an associate professor in the Department of Radiology at the NYU Grossman School of Medicine, and director of machine learning research in the Department of Radiology at NYU Langone Health.

This episode is also available on Spotify and Apple Podcasts!

Transcript

Lior Jankelson: Hi, I am Lior Jankelson, director of the Inherited Arrhythmia Program at NYU. I am also the PI of the Computational Cardiology Lab and an associate professor of medicine. It is a great pleasure to have this conversation with my friend and colleague, Sumit Chopra.

Sumit Chopra: Thanks, Lior. My name is Sumit Chopra and I am an associate professor at the Courant Institute of Mathematical Sciences at NYU. I also hold a joint position as an associate professor in the Department of Radiology at NYU Langone Health, where I am also the director of machine learning research. My primary background and formal training have been in the areas of AI, machine learning, and deep learning. More recently, about 5 years ago, I started an AI for health care company, which involved building machine learning models for reading radiology images, primarily x-rays, with the goal of serving as a spell checker for radiologists, helping them either improve their efficiency or their accuracy in reading. The company has taken off in terms of using this technology in real clinical settings. Last year, I joined NYU in my current role doing research in AI and health care.

Jankelson: That is terrific. We are happy to have people like you who have a deep academic background, but also commercial and business experience as an entrepreneur. This is a field that clearly lends itself to such transitions.

So, I think many people are familiar, or think they are familiar, with the concepts of AI and deep learning, and sometimes these terms collide and cause confusion. Can you briefly explain to our audience what AI and deep learning are, and the distinction between the two?

Chopra: Absolutely. In general, AI is a term that is used quite loosely by many folks. From a technical standpoint, AI once had a very specific definition, which is basically good old-fashioned AI, where people hand coded rules to try to explain real-world phenomena. More recently, AI has become almost an umbrella term for various things, such as any form of data analysis that extracts patterns from data, and primarily machine learning, which forms the core of what we now call AI.

So, the terms machine learning and AI are often used interchangeably. They refer to our ability to teach computers how to learn patterns in their environment, so that they can use those patterns to take certain actions, either on their own or through humans. To give you an example, suppose you want to build a machine or system that looks at an image of an x-ray. So you have an input image of an x-ray, and you want to build a system that tells you whether there is a fracture in the wrist, for example. To build such a system, it is important to remember that the system is actually just looking at an image, which is a 2-dimensional matrix of numbers. It has no notion of what a hand, finger, wrist, or even a fracture looks like. One way to do it is to hand code the rules into the system: tell the system that a hand has 5 fingers, and each finger has a certain shape, which looks like a sequence of lines forming narrow towers, for example. You might also have specific rules or definitions for what a fracture looks like, such as a clear line with a break. How thick should the break be? How dark should the break be? And so on. That is the hand-coded way of doing things. One might hope to succeed in building such a system that can look at any image out there and identify whether there is a fracture. However, the trouble is that there are so many ways in which these images are taken, and so many different ways these fractures happen or could potentially happen, that it is almost impossible to hand code rules for everything.
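
As an aside for readers, here is a toy sketch of the hand-coded, rule-based approach Chopra describes. The rule, the threshold, and the NumPy-based detector are purely illustrative assumptions meant to show why such rules are brittle, not an actual fracture detector.

```python
import numpy as np

def looks_like_fracture(image: np.ndarray) -> bool:
    """Hand-coded rule: call it a fracture if some row shows a sharp intensity break."""
    row_means = image.mean(axis=1)          # average brightness of each row
    jumps = np.abs(np.diff(row_means))      # how abruptly brightness changes row to row
    return bool(jumps.max() > 0.5)          # hand-tuned threshold: breaks down on new
                                            # scanners, angles, and anatomy

print(looks_like_fracture(np.random.rand(64, 64)))  # almost always False on noise
```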

So that is where modern AI comes into the picture, where everything is data driven. Instead of hand coding the rules into the system, you have a general purpose algorithm. By algorithm, I mean a general purpose model or machine with knobs that one can tweak, which we call the parameters of your model. Then, you show this machine a large collection of images, such as x-rays of wrists, some of which have fractures and others that do not. The goal of AI or machine learning is to adjust these knobs in such a way that if the machine sees an image with a fracture, which you know because you have access to the labels or the corresponding radiology reports, the machine provides the correct answer. You keep showing the machine this sequence of images and adjusting the knobs so that each time you provide an image, it gives you the correct answer.
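
To make the "knobs" concrete, here is a minimal sketch in PyTorch. The architecture, sizes, and synthetic stand-in data are illustrative assumptions, not a clinical model: the model's parameters are the knobs, and gradient descent nudges them until the labeled images produce the correct answer.

```python
import torch
import torch.nn as nn

# A tiny image classifier; its parameters are the "knobs."
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                        # 2 outputs: no fracture / fracture
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "x-rays" (64x64 grayscale) and labels standing in for report-derived labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

for step in range(100):                     # show the images, adjust the knobs
    logits = model(images)
    loss = loss_fn(logits, labels)          # penalty when the answer is wrong
    optimizer.zero_grad()
    loss.backward()                         # compute how each knob should move
    optimizer.step()                        # nudge every knob in that direction
```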

Jankelson: That immediately makes me think about some problems that we have with current AI. What you are describing is what is generally known as a supervised learning approach. To implement that with a high degree of success, reliability, and performance, you would need a large number of examples to show the machine, as you just described.

In cardiology, much like other medical fields, there are some common diagnoses, such as atrial fibrillation (AF). We have millions of tracings of AF and millions of tracings of normal sinus rhythm, and we can imagine a situation where we teach the machine to distinguish between the two. But what happens with rare diagnoses? Does that happen in the imaging world—situations in which you do not have enough examples?

Chopra: Yes, absolutely. That is a great question. What I just described is exactly supervised learning, and it works under the assumption that you have a sufficient amount of what we call training data available for the machine to become powerful enough to distinguish between different or new scenarios.

In scenarios in which you do not have sufficient training data, you have to be a little more creative in how you train such machines. You cannot show this machine a single example of what a rare condition might look like and expect that every time it sees that condition, it will give you the correct answer. One of the creative things that one could do in this scenario is data augmentation. Essentially, starting from a small collection of rare examples, you tweak those examples to simulate what they might look like if collected in a slightly different manner or from a different patient, for instance. You add those simulated variants to your training examples to enrich the training set, and hope that the model will be able to learn from it.
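
A minimal sketch of the data augmentation idea, assuming image tensors of shape (channels, height, width); the specific perturbations and their magnitudes are illustrative choices. Each transform simulates how the same rare finding might look if acquired slightly differently.

```python
import torch

def augment(image: torch.Tensor) -> torch.Tensor:
    out = image
    if torch.rand(1) < 0.5:
        out = torch.flip(out, dims=[-1])            # mirror the image
    shift = int(torch.randint(-4, 5, (1,)))
    out = torch.roll(out, shifts=shift, dims=-1)    # small positional shift
    out = out + 0.05 * torch.randn_like(out)        # mild acquisition noise
    out = out * (0.9 + 0.2 * torch.rand(1))         # brightness/contrast jitter
    return out

# One rare example can seed many training variants:
rare_case = torch.randn(1, 64, 64)
training_variants = [augment(rare_case) for _ in range(16)]
```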

But if you do not have any examples at all, then obviously you cannot do much. There are a few creative solutions at the margins for handling these rare diseases, but it is still an open area of research that people are exploring, and this situation arises in general as well: there are rare conditions for which you simply do not have enough training examples. Another example of a creative solution is self-supervised learning, which means that instead of expecting clean answers for all possible cases, you train the machine in a manner that does not require clean labels or clean answers, or does not require any answers at all.

Jankelson: Yes, it seems like other fields, primarily language, speech recognition and translation, and also image recognition, have had success in exploring new methods for AI, including what we call diffusion models. Can you tell us a little bit about what is going on in the outside world, outside of medicine?

Chopra: With regard to what?

Jankelson: With regard to diffusion models, for instance, and other self-supervised approaches?

Chopra: Yes, so diffusion models and self-supervised approaches are somewhat different things. Diffusion models are basically a class of generative models. In AI, there are 2 major categories of models. One is what we just discussed, what we call a discriminative model, which is when you are given an input such as an x-ray image and you want a yes/no answer.

The second is what we call a generative model, where you are actually modeling the distribution of your data or synthesizing data per se. An example of a generative model in computer vision would be the following: suppose you have an image of scenery in wintertime, and you want to convert that scenery into a summer scene, where there are leaves on the trees. There is no notion of a yes/no answer; you are generating the whole thing. Diffusion models are a class of generative models that have become quite popular. Basically, you are trying to generate new examples from the data distribution in a more stable manner, so that the generated images are more realistic.
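
For the technically curious, here is a deliberately simplified toy of the core diffusion idea: corrupt data with noise, then train a network to predict (and thus remove) that noise. Real diffusion models use noise schedules and many timesteps; this single-step version, with made-up 2-dimensional data, is only a sketch of the principle.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

data = torch.randn(512, 2) * 0.5 + 2.0      # stand-in "real" samples

for step in range(500):
    noise = torch.randn_like(data)
    noisy = data + noise                    # forward process: corrupt the data
    loss = ((denoiser(noisy) - noise) ** 2).mean()  # learn to predict the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Crude generation: start from pure noise and subtract the predicted noise.
with torch.no_grad():
    x = torch.randn(16, 2)
    samples = x - denoiser(x)
```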

Self-supervised learning answers the question that you asked: what if we do not have enough labels associated with our data to train our model? Self-supervised learning is a mechanism for leveraging large quantities of raw, unlabeled data to learn a model in a way that is useful for an eventual task. In other words, the model infers the labels automatically. It is inspired by how human babies learn. Yes, there is some supervision involved when you show a baby an image of a cat and tell them it is a cat. But much of the time, the baby is simply observing the world and, based on that, inferring what things might be and starting to form associations in their mind. That is the motivation behind self-supervised learning.
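
A minimal sketch of self-supervised learning via a masked-reconstruction pretext task, in which no human labels are involved and the "answer" is the original signal itself; the architecture, masking ratio, and synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU())   # learns representations
decoder = nn.Sequential(nn.Linear(64, 256))              # reconstructs the input
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

unlabeled = torch.randn(128, 256)   # raw, unlabeled 1-D signals (e.g., ECG segments)

for step in range(200):
    mask = (torch.rand_like(unlabeled) > 0.3).float()   # hide 30% of each signal
    recon = decoder(encoder(unlabeled * mask))
    loss = ((recon - unlabeled) ** 2).mean()            # supervision from the data itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After pretraining, `encoder` can be reused for a downstream task
# that has only a handful of labels.
```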

Jankelson: Right. So, there are many examples in cardiology where we do not have enough samples of certain phenomena and would need a new type of algorithm. A classic one is predicting events such as sudden cardiac death (SCD), because in the general population, SCD is rare overall. It is catastrophic and we try to prevent it, but it is not a common event.

For things like that, I think there is a large unmet need for some general intuition that can look over all the data, all the signal we can get, primarily electrocardiograms (ECGs), but maybe other types of data as well, and come up with some ability to predict, risk stratify, flag, or actively detect common features in the substrate. I am curious if you see problems like that in the imaging space.

Chopra: Yes, absolutely. I am glad you raised that point, because this is something I personally am quite interested in as a research question. It is also true in imaging, and I can give an example, but it is something that I believe you and I have also discussed in the past. What happens right now as the standard of care is that if a person has an ECG performed, you as a clinician are only looking at that ECG as a current snapshot. Based on that, you make a decision, probably also referring to the clinical notes and past history to get an understanding. And it is not just you—humans in general do not have the capacity to remember the patterns of a person's ECG over time.

So this is one area where I feel AI is ideally suited. If we look at this sequence of ECGs as a collective, almost as a longitudinal time series for that patient, there has to be some signal in the pattern of the ECGs that can act as a harbinger—a sign that this person might have some medical event sometime in the future, or that will at least provide some sort of risk score. That is an area I am quite interested in, and I feel it is the future of how AI will be applied within cardiology. Similar examples exist in standard medical imaging, such as for pancreatic cancer. When looking at pancreatic cancer on magnetic resonance imaging (MRI), if a human is able to visually diagnose it, then yes, there is a cancer in the pancreas, and it is already too late. So the question is, from the sequence of MRI scans that a person has undergone because of their high-risk category, which in turn was identified by their genetic footprint, can you infer whether the chances of that person having cancer are higher?
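
A minimal sketch of the longitudinal idea Chopra describes, assuming each ECG has already been reduced to a small feature vector and a recurrent network maps the serial sequence to a risk score; the feature size, model choice, and data are illustrative assumptions, not a validated risk model.

```python
import torch
import torch.nn as nn

class ECGRiskModel(nn.Module):
    def __init__(self, features_per_ecg=12, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(features_per_ecg, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, ecg_sequence):                      # (batch, num_ecgs, features)
        _, (last_hidden, _) = self.rnn(ecg_sequence)
        return torch.sigmoid(self.head(last_hidden[-1]))  # risk score in [0, 1]

model = ECGRiskModel()
# One patient, five serial ECGs, each reduced to 12 summary features:
patient_history = torch.randn(1, 5, 12)
risk = model(patient_history)
print(f"estimated risk score: {risk.item():.3f}")
```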

Jankelson: Can you find a point for intervention that is prior to an inflection point in the risk or the actual phenotype? It is an interesting idea, and I think there is some evidence for it, because if you look at ECGs as a subject for AI, there is a phenomenon that has been reported repeatedly: examples that were apparently misclassified—for instance, ECGs that were predicted to represent a disease but were found to be false-positive—turned out, when you project over time and wait a few years, to come from individuals who actually developed the condition. So, there are some features in the ECG snapshot that capture something from the past and probably something of the trajectory into the future. I agree it is an interesting way of looking at that.

Chopra: Yes.

Jankelson: Also, I want to take advantage of your experience on the commercial side of AI, so I want to shift gears and ask you, first of all, do physicians like AI or not?

Chopra: That is an interesting question. My answer would be that opinions have been shifting. Initially, about 4-6 years ago, when applications of AI in health care came to the forefront, there was a huge pushback from the clinical community, primarily on the radiology side, because radiology was one of the first applications where people tried to show how AI could potentially be useful. I feel that is partly the fault of AI scientists—when they started building and showcasing these models, they made sweeping statements such as, "in the next 5 years, all the radiologists will be out of a job because of AI," which was outrageous and completely false.

Now that I have a window to both sides of AI, I know how AI works and its strengths and weaknesses, and I also understand the challenges on the clinical side. Building an AI model or an AI-based clinical device that is used in clinical practice on a day-to-day basis is difficult. Building a machine learning model that is trained on some data within that device is a really small part of it. Making sure the device is clinically useful is the hard part.

Jankelson: Let me dive deeper into that, because I think it is interesting to our audience. For example, let's say we develop a machine that can flag x-rays with fractures and post an alert in the electronic health record saying the AI thinks there is a fracture. What is the problem with that? Why would someone not want to use it, and what is the proper way to use it?

Chopra: There are a couple of points I want to make here. In radiology, the most common-sense way of thinking about how AI can be used is what we call computer-assisted diagnostics, or a CAD device, in which a radiologist opens up an image on the computer and a small widget within the software activates the AI algorithm. The AI algorithm shows whether there is anything wrong with the image, and if there is, where it is wrong. The trouble is that radiologists are extremely fast; they do not spend a lot of time looking at the image and coming up with a diagnosis. If they now have to look at the system, spend extra effort engaging the AI model and its output, and then reconcile their own read with the AI model's read to come up with an answer, multiple things happen. It makes them slower, and to an extent, it makes them biased. For instance, if the radiologist says it is not a fracture and the AI says there is in fact a fracture, then there is some friction between the two reads. If it slows them down, the chances are that they will not want to use it, because they are already so busy reading 100-200 cases per day. Adding on more work makes them more fatigued.

Jankelson: That totally makes sense. So then, what is the right way to use it?

Chopra: We do not know—that is precisely the reason there is such a discrepancy between the research being published on AI in radiology and the models actually being deployed in radiology practice. There are other use cases that can potentially make life more efficient: for instance, using AI to automatically personalize hanging protocols, or in any form of workflow optimization. Even in diagnosis, instead of using these models as a CAD device, you can use them as an independent second or third read. They then run at the back end, as opposed to in front of the radiologist, with the goal of making sure that no false-negatives or false-positives slip through the system.

A popular example is in breast imaging. If a radiologist sees something suspicious during a screening exam, they call the patient and have them come back for a diagnostic scan. Now, imagine that you are the patient receiving a call from the radiologist saying there is something wrong with your imaging exam. You will worry that you have cancer, right? However, 80% of those recalls turn out to be normal. So it is a lot of trauma for the patient and a huge cost for the health care system. An AI model running in the background might recommend that another radiologist reread that case to make sure there is nothing wrong with the patient. That is an example of a system where you can be creative in how you use AI within health care, as opposed to just putting an AI model in front of you.
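
A minimal sketch of this back-end "second read" workflow: the model never confronts the radiologist directly, it only routes recalled screening cases that it scores as near-certainly normal to another radiologist for a reread. The case structure, score source, and threshold here are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class ScreeningCase:
    case_id: str
    recalled_by_radiologist: bool
    model_suspicion_score: float   # 0.0 = clearly normal, 1.0 = highly suspicious

def cases_to_reread(cases: list[ScreeningCase], threshold: float = 0.1) -> list[str]:
    """Flag recalled cases that the model considers near-certainly normal."""
    return [
        c.case_id
        for c in cases
        if c.recalled_by_radiologist and c.model_suspicion_score < threshold
    ]

worklist = [
    ScreeningCase("A1", recalled_by_radiologist=True, model_suspicion_score=0.03),
    ScreeningCase("A2", recalled_by_radiologist=True, model_suspicion_score=0.62),
    ScreeningCase("A3", recalled_by_radiologist=False, model_suspicion_score=0.01),
]
print(cases_to_reread(worklist))   # ['A1'] -> route to a second radiologist
```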

Jankelson: That is an interesting concept. So instead of making the determination itself, the AI would act as a sort of moderator.

Chopra: Exactly. So going back to your question, when you discuss use cases like this, then clinicians can start to think more positively about AI as a tool or technology that can help them improve and be supportive, as opposed to replacing them. Obviously, replacing them is not happening.

Jankelson: Yes, that is interesting. So another way to frame it is that the lower-hanging fruit may be areas where there is high variability between human reads, whether in imaging or, in this case, in cardiology. I can think of ECG readings, but also echocardiography or MRI, where we know that different readers would assign different diagnoses or characteristics to the same study.

Chopra: So, I had a question in that regard. From an ECG standpoint, do you think there is enough variability?

Jankelson: Yes. In the ECG space, there are a large number of unmet needs, but I will mention just some of them. One is that the automatic decision-tree diagnosis the machine spits out is often wrong. For instance, for a diagnosis of myocardial infarction (MI), a high fraction of ECGs get flagged because of some changes in their signal that may correspond to a heart attack or MI, when that is clearly not the case. There are several examples like that. Humans can identify these very easily, so taking that human intuition and implementing it in an automated, AI-based system is probably a pretty tangible thing, and I think there are good examples of that.

Another interesting point is that the ECG is sort of a spectral phenomenon, meaning that for most situations there is no binary answer; rather, there is a spectrum. Where an ECG falls on that spectrum depends on other independent variables such as gender, race, age, body habitus, or other comorbidities. All these things can change the ECG in a way that makes the same ECG normal in one situation and abnormal in a different situation. Traditional ECG machines do not capture that spectral nature, so I think there is good potential for AI to capture the many features included in the ECG. When you combine those features with the context, I think you can get much better precision in reading the ECG.
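
A minimal sketch of conditioning an ECG read on covariates, so that the same ECG features can score differently in different contexts; the feature names, sizes, and model are illustrative assumptions.

```python
import torch
import torch.nn as nn

ecg_features = torch.randn(1, 8)            # e.g., intervals, axis, amplitudes
covariates = torch.tensor([[0.63, 1.0]])    # e.g., normalized age and sex

# The classifier sees the ECG features and the patient context together.
model = nn.Sequential(nn.Linear(8 + 2, 32), nn.ReLU(), nn.Linear(32, 1))
abnormality = torch.sigmoid(model(torch.cat([ecg_features, covariates], dim=1)))
print(f"context-conditioned abnormality score: {abnormality.item():.3f}")
```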

Chopra: I see.

Jankelson: Sumit, thank you very much for joining me today in this conversation. I feel like we could talk a lot more about that and maybe we should do that. It has been a pleasure! I am looking forward to hearing more from you and talking more with you.

Chopra: Absolutely, thank you for the invitation. It was great to have a chat with you and I am happy to chat more. I see a lot of good things happening, especially in cardiology in the future. Thanks for the interview!

© 2023 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of EP Lab Digest or HMP Global, their employees, and affiliates. 
