Professor McMillan presents at ML4MI retreat

At the UW-Madison Machine Learning for Medical Imaging (ML4MI) retreat, themed “AI in Imaging and Medicine: Breaking Silos, Building Bridges,” Professor McMillan gave a presentation on the relevance and impact of foundation models in medical imaging AI. The retreat was co-organized by Associate Professor Pallavi Tiwari, PhD, from the Department of Radiology, and Professor Kyle Cranmer, Director of the Data Science Institute.

Professor McMillan’s presentation, titled “Why you should care about foundation models in medical imaging AI,” emphasized the importance of foundation models in addressing current challenges in medical imaging. These models, characterized by their training on vast and diverse datasets, often employ self-supervised learning and are versatile enough for in-context learning or as a basis for transfer learning. Not a new concept, but rather a maturing one, foundation models are becoming increasingly pivotal in the AI landscape.

Foundation models are important because they can directly address challenges in medical imaging AI:

  1. Overcoming Data Limitations: The scarcity of healthcare data for training robust AI models is a significant hurdle. Foundation models assist in overcoming this by using pre-training and transfer learning.
  2. Navigating Data Silos: Patient data, often dispersed across various sources, poses integration challenges. Foundation models enable the aggregation of, and learning from, these heterogeneous data modalities.
  3. Resource Efficiency: Many healthcare systems face constraints in computational resources and AI expertise. Foundation models help reduce the costs associated with training AI models.
  4. Enhancing Reproducibility: The large and diverse datasets used in foundation models, along with well-documented training procedures, contribute to more reproducible research outcomes. This also aids in standardization across healthcare settings.
  5. Potential for Multimodal Models: Foundation models open avenues for developing Generalist Medical AI (GMAI) models, capable of handling diverse tasks with minimal labeled data, and offering high-impact applications across various medical domains.
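The transfer-learning pattern behind the first point (a frozen, broadly pre-trained backbone reused as a feature extractor, with only a small task-specific head trained on scarce labels) can be sketched as follows. This is an illustrative toy, not a model from the talk: the random-projection "backbone" and synthetic dataset stand in for a real pre-trained network and clinical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed projection whose
# weights are never updated during fine-tuning. In practice these weights
# would come from large-scale self-supervised pre-training.
W_frozen = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    """Frozen backbone: maps raw inputs to reusable features."""
    return np.tanh(x @ W_frozen)

# A deliberately tiny labeled dataset, mimicking the data-scarce setting
# that foundation models are meant to help with.
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

# Only the small task-specific head (logistic regression) is trained.
w_head = np.zeros(16)
b_head = 0.0
lr = 0.5

feats = extract_features(X)  # computed once; the backbone stays fixed
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y                       # gradient of log loss w.r.t. logits
    w_head -= lr * feats.T @ grad / len(y)
    b_head -= lr * grad.mean()

pred = (feats @ w_head + b_head) > 0
acc = (pred == y.astype(bool)).mean()
```

Because the backbone is frozen, the only parameters fitted to the small labeled set are the 17 in the head, which is what makes the approach viable when labeled medical data is scarce.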

The presentation also addressed the ethical aspects and challenges in deploying foundation models:

  • Bias and Fairness: The risk of perpetuating biases in models trained on imperfect real-world data, leading to potential discrimination.
  • Explainability and Accountability: Challenges in interpreting the complex decisions of foundation models and determining accountability in cases of medical errors.
  • Adaptability: The need for foundation models to adapt to new data or populations and their applicability to specific problems.
  • Ownership and Cost: Concerns about who controls access to these models and the substantial investment required for their development.

Foundation models will play an important role in medical imaging AI, serving as baseline applications and as potential building blocks in multi-agent ensemble models. The talk concluded with a call to action that included the need for increased collaboration, participation in open-source initiatives, maintaining scientific rigor, and advocating for fairness and explainability in these technologies.

More information about ML4MI can be found here:

More information about the UW-Madison Data Science Institute can be found here: