MIMRTL @ SIIM25

(Portland, Oregon) The Molecular Imaging / Magnetic Resonance Technology Lab (MIMRTL) at the University of Wisconsin-Madison made significant contributions to the Society for Imaging Informatics in Medicine (SIIM) 2025 conference, showcasing cutting-edge research that leveraged the power of foundation model embeddings for diverse medical imaging applications. Our team was excited to share insights and advancements that promise to shape the future of diagnostic accuracy and workflow efficiency in radiology.

Leadership and Engagement: Dr. Alan McMillan at SIIM 2025
We were proud to highlight the substantial involvement of Dr. Alan McMillan, Professor in the Department of Radiology and Director of MIMRTL, in this year’s conference. Dr. McMillan served as an Organizing Committee Member for SIIM 2025 and held the role of Co-chair for the Scientific Program Committee, playing a pivotal part in curating the high-quality scientific content presented.

Furthermore, Dr. McMillan guided discussions on cutting-edge research as the Moderator for Session #2015: Breakthroughs in Disease Detection: From Imaging Biomarkers to Automated Diagnostics Scientific Research Abstracts. This session, which took place on Wednesday, May 21, 2025, in Room A105, explored innovative approaches that are transforming how diseases are identified and managed through imaging.

Empowering Imaging Informaticists: A Deep Dive into Data Pipelines

MIMRTL was also committed to advancing practical knowledge within the imaging informatics community. Dr. McMillan and our team were pleased to offer Learning Lab LL2006: How to Set Up Your Data Pipeline, an interactive, in-person session held on Wednesday, May 21, 2025, from 10:00 AM to 12:00 PM PT.

In today’s data-centric healthcare landscape, a robust and scalable data pipeline is paramount for optimizing imaging informatics workflows. This two-hour learning lab was designed to equip healthcare imaging informaticists with the essential knowledge and practical skills to design, implement, and manage data pipelines tailored to their specific organizational needs. Participants gained hands-on experience through worked examples covering critical areas such as DICOM anonymization and the deployment of AI inference for chest radiographs. We were pleased with the high engagement and skill development fostered in this crucial domain.
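To give a flavor of the anonymization step covered in the lab, here is a minimal sketch of the core idea: stripping protected health information (PHI) tags from an imaging dataset before it enters a pipeline. In practice a library such as pydicom would operate on real DICOM datasets; here a plain dictionary stands in so the example is dependency-free, and the tag list is an illustrative assumption rather than the exact checklist used in the session.

```python
# Simplified sketch of DICOM anonymization logic. A plain dict stands in
# for a pydicom Dataset; the PHI tag list below is illustrative, not the
# complete set a production pipeline would handle.

PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "OtherPatientIDs"}

def anonymize(ds: dict, replacement: str = "ANONYMIZED") -> dict:
    """Return a copy of the dataset with PHI tags replaced."""
    out = dict(ds)
    for tag in PHI_TAGS:
        if tag in out:
            out[tag] = replacement
    return out

scan = {"PatientName": "DOE^JANE", "PatientID": "12345", "Modality": "CR"}
clean = anonymize(scan)
print(clean["PatientName"])  # prints "ANONYMIZED"
```

A real deployment would also need to handle private tags, burned-in pixel annotations, and UID remapping, which is exactly the kind of nuance the learning lab explored.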

Leveraging Embeddings for Enhanced Classification

A core focus of MIMRTL’s recent work, spearheaded by Xue Li, a talented PhD candidate in the Department of Electrical and Computer Engineering under Dr. McMillan’s supervision, involved harnessing foundation model embeddings for complex medical imaging tasks. Ms. Li presented two significant oral presentations detailing these innovative approaches.

Accurate and Efficient Glioma Classification with Volume Embeddings

On Wednesday, May 21, 2025, during Session #2015, Xue Li presented “Leveraging Volume Embeddings for Accurate and Efficient Glioma Classification.” Foundation models have demonstrated remarkable success in generating embeddings for various downstream tasks, particularly with 2D images. This research extended this paradigm to 3D datasets, exploring multiple strategies to construct volume embeddings from brain MRI scans for the classification of low-grade glioma (LGG) versus high-grade glioma (HGG).

The study hypothesized that volume embeddings could offer an effective and computationally efficient method for this critical differentiation. Utilizing the BraTS 2020 dataset, which included 369 subjects with four MRI series (T1, T2, T1CE, and FLAIR), slice-level embeddings were first extracted using the MedImageInsight foundation model. These were then aggregated into volume embeddings via mean, max, and median pooling methods. A Multi-Layer Perceptron (MLP) classifier was trained on these embeddings.
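The aggregation step described above can be sketched in a few lines: slice-level embeddings (one vector per axial slice) are pooled along the slice axis into a single volume embedding, which then feeds the MLP classifier. The slice count and embedding dimension below are assumptions for illustration; the actual MedImageInsight embedding size may differ.

```python
import numpy as np

# Sketch of slice-to-volume embedding aggregation. Random vectors stand
# in for real MedImageInsight slice embeddings; 155 slices and a
# 1024-dim embedding are assumed sizes, not the study's exact values.
rng = np.random.default_rng(0)
n_slices, dim = 155, 1024
slice_embeddings = rng.standard_normal((n_slices, dim))

volume_mean = slice_embeddings.mean(axis=0)          # mean pooling
volume_max = slice_embeddings.max(axis=0)            # max pooling
volume_median = np.median(slice_embeddings, axis=0)  # median pooling

# The best-reported configuration combined two MRI series; concatenating
# their pooled embeddings (here the same array as a placeholder for the
# FLAIR series) doubles the feature length fed to the MLP.
combined = np.concatenate([volume_max, volume_max])  # shape (2 * dim,)
```

Each pooled vector has shape `(dim,)`, so the classifier input stays compact regardless of how many slices the volume contains, which is where the computational efficiency comes from.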

The results were highly promising, demonstrating that volume embeddings can accurately distinguish LGG from HGG, with all performance metrics, including accuracy, precision, recall, and F1-score, surpassing 91%. Notably, the highest accuracy of 96.8% (and an F1-score of 96.8%) was achieved using max pooling on T1CE+FLAIR embeddings. While state-of-the-art models trained directly on images achieved slightly higher results (ResNet-50 at 98.9% accuracy), the embedding-based model, trailing by only 2.1 percentage points, underscored the exciting promise of embedding-based methods as an efficient and powerful alternative for complex classification tasks in neuro-oncology. This work highlighted a significant step towards more streamlined and accessible AI tools for glioma grading.

Advancing Radiography Classification with Foundation Model Embeddings and Adapters

Continuing this innovative trajectory, Xue Li delivered a second oral presentation on Thursday, May 22, 2025, in Session #3028: Charting New Frontiers: Innovative Technologies in Imaging Informatics Scientific Research Abstracts. This presentation, titled “Leveraging Foundation Model Embeddings in Adapter Training for Radiography Classification,” explored the utility of embeddings from both general-purpose and medical domain-specific foundation models for training lightweight adapter models. The focus was on the multi-class classification of radiography images, specifically for assessing tube placement, a common and critical task impacting diagnostic accuracy and patient safety.

The research hypothesized that foundation model embeddings could effectively train these adapter models. The study utilized 8,842 radiographs labeled across seven categories. Embeddings were extracted using six different foundation models: DenseNet121, BiomedCLIP, Med-Flamingo, MedImageInsight, Rad-DINO, and CXR-Foundation. Various traditional machine learning models, including K-Nearest Neighbors (KNN), logistic regression (LR), Support Vector Machines (SVM), random forest (RF), and Multi-Layer Perceptron (MLP), were then trained as adapters.
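The adapter idea amounts to training a lightweight classical classifier on frozen foundation-model embeddings and scoring it with a mean AUC across classes. The sketch below uses synthetic data in place of the real radiograph embeddings; the feature dimension, sample count, and SVM hyperparameters are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hedged sketch of an SVM adapter on frozen embeddings. Synthetic,
# weakly class-separated vectors stand in for real MedImageInsight
# embeddings; seven classes mirror the tube-placement categories.
rng = np.random.default_rng(0)
n, dim, n_classes = 700, 256, 7
X = rng.standard_normal((n, dim))
y = rng.integers(0, n_classes, n)
X += y[:, None] * 0.5  # inject class structure so the demo is non-trivial

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

adapter = SVC(probability=True).fit(X_tr, y_tr)   # the "adapter" model
probs = adapter.predict_proba(X_te)
mauc = roc_auc_score(y_te, probs, multi_class="ovr")  # mean one-vs-rest AUC
print(f"mAUC: {mauc:.3f}")
```

Because the embeddings are precomputed and low-dimensional, fitting an adapter like this takes seconds, which is the computational efficiency the abstract highlights.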

The findings revealed that embeddings from models like MedImageInsight can train highly effective and lightweight adapter models. MedImageInsight, when paired with an SVM adapter, achieved the highest mean area under the curve (mAUC) of 93.85%. Other models like Rad-DINO and CXR-Foundation also demonstrated strong performance. Crucially, the training and inference times for most adapters were within seconds, highlighting the computational efficiency of this approach. This research paves the way for the practical deployment of accurate and efficient AI tools in routine radiography workflows, potentially enhancing diagnostic speed and precision.