MIMRTL Presents at CMIMI 2023

The MIMRTL team presented research at the 2023 Conference on Machine Intelligence in Medical Imaging (CMIMI), held on October 1-2, 2023, at Johns Hopkins University in Baltimore, Maryland. This event was organized by the Society for Imaging Informatics in Medicine (SIIM), a professional scientific society for imaging informatics.

Hybrid Model for Synthetic CT Generation from TOF NAC PET Scans

Professor Alan McMillan presented a hybrid model for generating synthetic CT (sCT) images from time-of-flight non-attenuation-corrected (TOF NAC) PET scans. The model integrates the SwinUNETR and UNET architectures, each excelling in a different anatomical domain: SwinUNETR for bone and UNET for soft tissue. Study coauthors included MIMRTL alum Iman Z. Estakhraji, PhD, and collaborators Tyler Bradshaw, PhD, and Ali Pirasteh, MD.
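To make the two-branch idea concrete, here is a minimal sketch using the SwinUNETR and UNet implementations from MONAI. The fusion rule below, a bone mask thresholded from the SwinUNETR output, is an illustrative assumption; the abstract does not specify how the presented model actually combines the branches:

```python
import torch
from monai.networks.nets import SwinUNETR, UNet

# Transformer-based branch (per the abstract, stronger on bone).
# Note: newer MONAI releases infer img_size and may warn that it is deprecated.
swin = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=1)

# Classic 3D U-Net branch (per the abstract, stronger on soft tissue).
unet = UNet(spatial_dims=3, in_channels=1, out_channels=1,
            channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
            num_res_units=2)

def hybrid_sct(pet: torch.Tensor, bone_hu: float = 150.0) -> torch.Tensor:
    """Fuse the branches: take bone-like voxels (hypothetical HU threshold)
    from SwinUNETR and everything else from the U-Net."""
    sct_swin = swin(pet)
    sct_unet = unet(pet)
    bone_mask = (sct_swin > bone_hu).float()  # assumed fusion rule, not from the abstract
    return bone_mask * sct_swin + (1.0 - bone_mask) * sct_unet

# Example: one TOF NAC PET patch shaped (batch, channel, D, H, W).
pet_patch = torch.randn(1, 1, 96, 96, 96)
sct = hybrid_sct(pet_patch)
```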

The aim was to improve accuracy in clinical applications such as radiation therapy planning and anatomical mapping. The hybrid model achieved less than 5% error and Dice scores above 0.9, indicating close agreement with ground-truth CT scans. This marks a significant step for medical imaging: the ability to synthesize accurate sCT images could reduce the need for additional CT scans, streamlining clinical workflows.
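For reference, the two reported metrics can be computed as sketched below. The abstract does not specify its exact error definition, so the percent error here assumes mean absolute error normalized by the reference image's dynamic range:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient over binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def pct_error(pred_hu: np.ndarray, true_hu: np.ndarray) -> float:
    """Mean absolute error as a percentage of the reference dynamic range
    (an assumed definition; the abstract's exact metric may differ)."""
    dyn_range = true_hu.max() - true_hu.min()
    return 100.0 * np.abs(pred_hu - true_hu).mean() / dyn_range
```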

Leveraging 3D Segmentation Datasets for Efficient 2D Body Region Classification

Xue Li, MS, from MIMRTL, presented another notable abstract on efficiently adapting comprehensive 3D whole-body segmentation datasets for 2D body-region classification. This approach addresses the need for rapid, resource-efficient identification of body regions in medical images, which is crucial for quality assurance and for selecting appropriate downstream algorithms. Co-authors included Professor McMillan; Flywheel collaborators Nicolas Pannetier, PhD, Mehul Sampat, PhD, and Travis Richardson; and UW collaborators Richard Bruce, MD, and John W. Garrett, PhD.
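The core idea is that a labeled 3D segmentation volume already implies a body-region label for every 2D slice. Here is a minimal sketch of that conversion; the label-to-region mapping is hypothetical, and the actual dataset's label scheme may differ:

```python
import numpy as np

# Hypothetical mapping from 3D segmentation label IDs to coarse body regions;
# the dataset used in the study likely defines many more organ labels.
REGION_OF_LABEL = {1: "head", 2: "chest", 3: "abdomen", 4: "pelvis", 5: "legs"}

def slice_region_labels(seg_volume: np.ndarray) -> list[str]:
    """Assign each axial slice the region of its most frequent foreground
    label (assumes an integer-labeled segmentation volume, slices on axis 0)."""
    labels = []
    for z in range(seg_volume.shape[0]):
        fg = seg_volume[z][seg_volume[z] > 0]
        if fg.size == 0:
            labels.append("background")
            continue
        majority = np.bincount(fg.astype(np.int64)).argmax()
        labels.append(REGION_OF_LABEL.get(int(majority), "unknown"))
    return labels
```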

The study utilized a whole-body CT segmentation dataset to train a 3-layer U-Net model. The model demonstrated robust performance, with high accuracy and F1 scores, while running on minimal computational resources. This work shows how 3D segmentation datasets can be distilled into efficient 2D models, significantly increasing throughput on low-resource hardware and making medical imaging workflows more efficient and accessible.
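As a rough illustration of how compact such a model can be, the sketch below pairs a three-level 2D U-Net from MONAI with a global-pooling classification head. The exact architecture, number of regions, and head design are assumptions for illustration, not details from the abstract:

```python
import torch
from monai.networks.nets import UNet

# Compact 2D network in the spirit of the "3-layer U-Net" described in the
# abstract (three resolution levels here; the exact architecture is assumed).
net = UNet(spatial_dims=2, in_channels=1, out_channels=16,
           channels=(16, 32, 64), strides=(2, 2), num_res_units=1)

# Hypothetical classification head: pool the feature map into 5 region logits.
head = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1),
                           torch.nn.Flatten(),
                           torch.nn.Linear(16, 5))

def classify_slice(img2d: torch.Tensor) -> torch.Tensor:
    """Predict a body-region label for one 2D slice (batch, channel, H, W)."""
    return head(net(img2d))

logits = classify_slice(torch.randn(1, 1, 256, 256))
```

A network this small runs comfortably on CPU-only hardware, which is consistent with the study's emphasis on throughput in low-resource settings.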