8.3 / IAGS 2019
Session 8: Emerging Therapies Session 1
Advanced Multimodality Fusion Imaging
Problem Presenter: Anthony Medigo, MD
Current status and statement of the problem
Over the last decade, the complexity of interventional cardiac and vascular cases has steadily increased. The need for better diagnostic imaging has grown in parallel, driven by new approaches to treating structural heart disease, coronary chronic total occlusions, abdominal aortic aneurysms with fenestrated grafts for EVAR, and critical limb ischemia. These new and improved imaging tests are essential for diagnosis, but they can also be highly valuable for therapy planning and correlative image guidance. Intraprocedural fusion of different modalities (CT, MR, PET, US) could improve the precision of device placement and reduce radiation exposure, contrast volume, and procedure time. Even with the amount of imaging currently being performed, many challenges remain that inhibit adoption of these image-fusion techniques.
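To make the fusion step concrete, the sketch below rigidly registers a preprocedural CT volume to a second-modality volume using mutual information, so that the two datasets can be overlaid for guidance. It is a minimal example built on the open-source SimpleITK library; the file names and parameter values are illustrative assumptions, not part of any specific vendor workflow described here.

```python
import SimpleITK as sitk

# Illustrative inputs (assumed file names): a preprocedural CT and a second-modality volume.
fixed = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("intraop_mr.nii.gz", sitk.sitkFloat32)

# Start from a geometry-centered rigid (6 degree-of-freedom) transform.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Mutual information tolerates the differing intensity characteristics of CT vs MR/PET/US.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

# Estimate the transform, then resample the moving volume into the fixed volume's frame
# so the two modalities can be displayed as a single overlay.
transform = reg.Execute(fixed, moving)
fused_overlay = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```

In clinical use, this rigid step would typically be only a starting point; deformable refinement and manual verification address the motion and positioning issues discussed below.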
Gaps in knowledge
There are both technical and clinical challenges that impact the adoption of image-fusion guidance, but these can be overcome. Technically, some of the tools are not yet robust. Image artifacts, anatomical variability, inconsistencies between modalities, and changes over time all pose challenges. Accuracy may be sacrificed because of patient positioning, instrumentation of vessels, and motion artifact of certain organs. Additionally, a fair amount of manual manipulation of the image data is still needed, and cross-vendor compatibility remains an impediment. Clinically, there can be discouraging moments when first adopting image-fusion techniques. Initially, the process can be time consuming during a case, as it requires a dedicated, skilled technologist and/or a trained physician. Training opportunities can be limited, and procedural support for these advanced applications has to be considered. To further complicate the issue, access to these image-fusion applications can be difficult because of a disconnect between the purchasing cycle of hardware and the speed at which software applications are developed. If the applications were not available at the time of purchase, it often becomes financially challenging for institutions to find additional unbudgeted capital to purchase these upgrades.
Therefore, advanced and efficient training, improved automation, and new economic models will be needed to increase adoption of fusion imaging for therapeutic purposes. Lastly, instead of having to fuse the complete multimodality datasets, it would be optimal to isolate the best parts of the imaging dataset of one modality, combine them with the best of another, and produce a completely new way of seeing the data; in essence, to have 1 + 1 = 3.
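One simple way to picture the "1 + 1 = 3" idea is a composite built from two already co-registered volumes, where each modality contributes what it captures best (for example, CT for calcium and bone, MR for soft tissue). The sketch below is a hypothetical, minimal illustration with NumPy arrays; a real composite would involve far more sophisticated, spatially varying feature selection.

```python
import numpy as np

def normalize(volume: np.ndarray) -> np.ndarray:
    """Scale a volume to [0, 1] so different modalities are numerically comparable."""
    v = volume.astype(np.float32)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def composite(ct: np.ndarray, mr: np.ndarray, ct_weight: float = 0.6) -> np.ndarray:
    """Alpha-blend two co-registered volumes of identical shape into one composite.

    ct_weight is an illustrative tuning knob (an assumption of this sketch):
    raising it emphasizes the CT contribution over the MR soft-tissue contrast.
    """
    assert ct.shape == mr.shape, "volumes must already be registered to a common grid"
    return ct_weight * normalize(ct) + (1.0 - ct_weight) * normalize(mr)
```

In practice the weighting could vary per region (for example, driven by organ segmentation masks), which comes closer to genuinely keeping the best parts of each dataset rather than a uniform blend.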
Possible solutions and future directions
The next generation of fusion imaging will be based largely on artificial intelligence, well beyond what is performed now. An automated, plug-and-play program will be necessary, bridging the gap between diagnosis, planning, and guidance with a workflow that "just works." The moment the patient receives a diagnostic imaging test, the data will be sent to a PACS system, automatically segmented per organ, automated measurements will be provided, and the data will then appear in the procedure room. These data will be fused and registered without any manipulation or human interaction from the moment of image acquisition.(1) This will be further enhanced by 3D body-surface recognition cameras within the diagnostic and procedural rooms. Once the patient's body surface, or avatar, is determined in relation to their organs from imaging scans, the ability to locate, register, and guide intervention in an augmented and virtual way becomes possible.(2) This will open up a completely innovative and predictive way to treat different diseases and provide true precision medicine. We are still in the early stages of what is possible for fusion imaging and therapy guidance. As technology and deep-learning programs develop, physician satisfaction and patient safety will ultimately benefit.
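To make the envisioned hands-off workflow concrete, the skeleton below strings the described steps together: a study arrives, organs are segmented, measurements are computed, and a registered fusion plan is handed to the procedure room. Every function and class name here is hypothetical and the bodies are placeholders; this is only a sketch of the kind of automation the text anticipates, not an existing product or API.

```python
from dataclasses import dataclass, field

@dataclass
class FusionPlan:
    """Everything the procedure room would receive with no manual steps."""
    segmentations: dict = field(default_factory=dict)  # organ name -> labeled mask
    measurements: dict = field(default_factory=dict)   # e.g., annulus diameter, vessel lengths
    registration: object = None                        # transform into the table/fluoro frame

# Placeholder stages (hypothetical); in a real system each would wrap a trained
# model or a vendor service rather than returning empty results.
def segment_organs(volume):        # hypothetical per-organ deep-learning segmentation
    return {}

def auto_measure(segmentations):   # hypothetical automated anatomic measurements
    return {}

def register_to_room(volume):      # hypothetical AI-driven registration (cf. reference 1)
    return None

def on_study_received(volume):
    """Hypothetical PACS callback: runs end to end with no human interaction."""
    seg = segment_organs(volume)
    plan = FusionPlan(seg, auto_measure(seg), register_to_room(volume))
    return plan  # in the envisioned workflow, this is pushed to the procedure room
```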
References
1. Liao, et al. An artificial agent for robust image registration. Thirty-First AAAI Conference on Artificial Intelligence (oral presentation); February 4, 2017; San Francisco, CA.
2. Singh V, Ma K, Tamersoy B, Chang YJ, Wimmer A, O'Donnell T, Chen T. DARWIN: deformable patient avatar representation with deep image network. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham; September 10, 2017:497-504.