Weijie Chen Presents AI-Powered Synthetic CT Generation at ISBI 2025

MIMRTL PhD candidate Weijie Chen recently presented his research on applying deep learning for improved attenuation correction in Positron Emission Tomography (PET) imaging. His presentation took place at the 2025 IEEE 22nd International Symposium on Biomedical Imaging (ISBI) in Houston, TX.

While PET is a powerful diagnostic tool, its quantitative accuracy depends heavily on correcting for photon attenuation, the process by which photons are absorbed or scattered as they travel through the body. Traditionally, CT scans provide the tissue density information needed to create these crucial attenuation correction (AC) maps. However, combined PET/MR scanners lack an integrated CT component, presenting a challenge for accurate AC. Addressing this challenge, Weijie’s research, conducted with MIMRTL team members James Wang and Alan McMillan, PhD, introduces a deep learning method that generates high-quality synthetic CT (SCT) images using only Time-of-Flight (TOF) non-attenuation-corrected (NAC) PET data. This approach offers the potential to obtain the essential AC information without a separate CT examination.
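To make the role of an AC map concrete, here is a minimal illustrative sketch (not the authors' method) of how attenuation correction works for a single PET line of response: the correction factor is the reciprocal of the photon-pair survival probability given by the Beer-Lambert law, using linear attenuation coefficients that a CT or synthetic CT would supply. The function name, sampling scheme, and the 0.096/cm soft-tissue coefficient at 511 keV are illustrative assumptions.

```python
import math

def attenuation_correction_factor(mu_values, step_cm):
    """Attenuation correction factor for one line of response (LOR).

    Both annihilation photons must escape the body, so the survival
    probability along a LOR follows Beer-Lambert:
        survival = exp(-line_integral of mu)
    and the correction factor is its reciprocal.

    mu_values: linear attenuation coefficients (1/cm) sampled along the
               LOR, e.g. derived from a (synthetic) CT mapped to 511 keV.
    step_cm:   sampling interval along the LOR, in cm.
    """
    line_integral = sum(mu_values) * step_cm  # approximate integral of mu dl
    return math.exp(line_integral)            # 1 / survival probability

# Illustrative example: 20 cm of soft tissue (mu ~ 0.096 /cm at 511 keV)
factor = attenuation_correction_factor([0.096] * 20, step_cm=1.0)
print(round(factor, 2))  # -> 6.82: counts on this LOR are scaled ~7x
```

Even this toy case shows why an accurate mu-map matters: through 20 cm of tissue, only about one photon pair in seven survives unattenuated, so errors in the map propagate directly into the corrected PET values.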

This work leverages the effectiveness of pre-trained deep learning models. The team found that models initially trained on large, diverse datasets of natural images (such as OpenImages) and subsequently fine-tuned with medical data performed considerably better than models trained exclusively on smaller, specialized medical image sets. Using PET and CT data from 35 patient studies acquired at UW-Madison, they successfully fine-tuned a pre-trained model for the specific task of SCT generation from PET. The results demonstrated strong quantitative accuracy, with the best model configuration achieving a low Mean Absolute Error (MAE) of 74.49 Hounsfield Units (HU) and a high Peak Signal-to-Noise Ratio (PSNR) of 28.66 dB within the body region. Furthermore, the synthetically generated CTs displayed notably improved visualization of both bone and soft tissue structures compared to models developed without pre-training. These outcomes demonstrate the advantages of leveraging large, pre-trained models for complex medical image translation tasks.
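For readers less familiar with these metrics, the sketch below shows how MAE and PSNR are typically computed between a synthetic and a reference CT. This is a generic illustration, not the paper's evaluation code; in particular, the `data_range` used for PSNR is an assumed intensity range, since the exact normalization is not stated here, and the HU values are made up.

```python
import math

def mae(pred, ref):
    """Mean absolute error between equal-length intensity lists (e.g. HU)."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 20 * math.log10(data_range) - 10 * math.log10(mse)

# Toy example with hypothetical HU values (reference vs. synthetic CT)
ref = [0.0, 100.0, 300.0, 1000.0]
synth = [10.0, 90.0, 320.0, 980.0]
print(mae(synth, ref))                    # -> 15.0 HU
print(round(psnr(synth, ref, 2000.0), 1))  # -> 42.0 dB for this tiny example
```

Lower MAE means the synthetic HU values sit closer to the reference on average, while higher PSNR means less squared error relative to the image's dynamic range, which is why the reported 74.49 HU MAE and 28.66 dB PSNR together indicate strong agreement.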

By enabling the generation of accurate SCTs directly from PET data, the method paves the way for improved attenuation correction without requiring a CT scan. This could lead to more streamlined clinical workflows and potentially reduce patient radiation exposure by eliminating the need for CT scans performed solely for AC purposes. In particular, this type of approach is likely to be useful for PET/MRI studies where CT is not available.

Congratulations to Weijie on his successful ISBI presentation and this impactful research!

Read the full paper on arXiv: https://arxiv.org/pdf/2504.07450