MIMRTL Presents at SIIM 2024!

The MIMRTL team, part of the Department of Radiology at the University of Wisconsin School of Medicine and Public Health, showcased their latest research at the 2024 Society for Imaging Informatics in Medicine (SIIM) annual meeting, held in Washington, DC, June 27-29. The team presented their work through both poster and oral presentations.

A poster presentation titled “Evaluation of GPT-4 Vision Model for Zero-Shot Body Region Identification for Radiography Studies” was led by Xue Li, a PhD candidate, with contributions from collaborators at Flywheel (Mehul Sampat, PhD; Nicolas Pannetier, PhD; and Travis Richardson) and from the University of Wisconsin (John Garrett, PhD; Richard Bruce, MD; and Alan McMillan, PhD). The poster described how the GPT-4 Vision model can identify the body region shown in a radiograph without any training on task-specific datasets, demonstrating promising zero-shot performance. This approach could help automate radiographic analysis pipelines while greatly reducing the need for large annotated datasets.
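For readers curious about the mechanics, a zero-shot body-region query to a vision-language model can be as simple as sending an image along with a constrained prompt. The sketch below uses the OpenAI Python client; the model name, label set, and prompt wording are illustrative assumptions, not the exact protocol used in the poster.

```python
# Illustrative sketch only -- not the study's actual prompt, labels, or pipeline.
import base64
from openai import OpenAI

# Hypothetical label set for illustration.
BODY_REGIONS = ["head", "chest", "abdomen", "pelvis", "upper extremity", "lower extremity"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_body_region(image_path: str) -> str:
    """Ask a vision-capable GPT model to assign a radiograph to one body region, zero-shot."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Which body region does this radiograph show? "
                         f"Answer with exactly one of: {', '.join(BODY_REGIONS)}."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        max_tokens=10,
    )
    return response.choices[0].message.content.strip().lower()

print(classify_body_region("example_radiograph.png"))  # e.g. "chest"
```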

In an oral presentation titled “Enhancing Medical Imaging Interpretation Through Visual Grounding in Language-Vision Models,” Xue Li, Richard Bruce, MD, John Garrett, PhD, Tyler Bradshaw, PhD, and Alan McMillan, PhD, explored how integrating visual grounding into language-vision models can improve the interpretation of medical images. The team showed that grounded prompts help these models contextualize visual data, markedly improving their ability to identify anatomical regions. Continued development of multi-modal language models is poised to change how medical professionals interact with and interpret imaging data.
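Visual grounding is often achieved by asking the model to tie its answer to image coordinates. The hypothetical sketch below illustrates one such grounded prompt, requesting both a region name and a normalized bounding box; the JSON schema and wording are assumptions for illustration, not the method presented in the talk.

```python
# Hypothetical grounded-prompt sketch -- not the presenters' actual method.
import base64
import json
from openai import OpenAI

client = OpenAI()

def grounded_region_query(image_path: str, target: str) -> dict:
    """Ask the model to name the target structure AND localize it with a bounding box."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    prompt = (
        f"Locate the {target} in this image. Respond only with JSON of the form "
        '{"region": "<name>", "bbox": [x_min, y_min, x_max, y_max]}, '
        "with coordinates given as fractions of image width and height."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)

print(grounded_region_query("example_ct_slice.png", "liver"))
```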

Another oral presentation, “Using Self-Supervised Anatomical Region Identification for Deep Learning Whole Body CT Segmentation,” was delivered by Michael Fei, a medical student at Creighton University School of Medicine, and Alan McMillan, PhD. This research developed a self-supervised approach for identifying anatomical regions in whole-body CT scans, enabling more accurate, automated segmentation. By first identifying body regions and then applying smaller, anatomically focused deep learning models, the technique improved the precision of anatomical delineation and delivered better segmentation results.
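To make the “smaller, anatomically focused models” idea concrete, the sketch below shows one plausible two-stage arrangement: a lightweight classifier identifies the body region of each CT sub-volume, which is then routed to a region-specific segmentation network. The architectures, region labels, and chunking strategy are placeholders, not the implementation presented at the meeting.

```python
# Illustrative two-stage sketch: region identification, then region-specific segmentation.
# Architectures, labels, and chunking below are placeholders, not the presented implementation.
import torch
import torch.nn as nn

REGIONS = ["head_neck", "thorax", "abdomen", "pelvis", "legs"]  # assumed region labels

class RegionClassifier(nn.Module):
    """Tiny stand-in for a region-identification network (e.g., trained self-supervised)."""
    def __init__(self, n_regions: int = len(REGIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, n_regions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def segment_whole_body(volume: torch.Tensor,
                       classifier: nn.Module,
                       segmenters: dict,
                       chunk_depth: int = 64) -> list:
    """Split a (1, 1, D, H, W) CT volume into axial chunks, identify each chunk's region,
    and run the matching region-specific segmentation model on it."""
    outputs = []
    for start in range(0, volume.shape[2], chunk_depth):
        chunk = volume[:, :, start:start + chunk_depth]
        region = REGIONS[classifier(chunk).argmax(dim=1).item()]
        outputs.append(segmenters[region](chunk))  # smaller, anatomy-focused model
    return outputs

# Example wiring with identity networks standing in for real segmentation models:
segmenters = {region: nn.Identity() for region in REGIONS}
masks = segment_whole_body(torch.randn(1, 1, 256, 128, 128), RegionClassifier(), segmenters)
```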

Congratulations to MIMRTL team members and collaborators on these contributions!