Main Session
Sep 28
PQA 01 - Radiation and Cancer Physics, Sarcoma and Cutaneous Tumors

2112 - Comparative Evaluation of Deep Learning Models for Pancreatic Cancer Tumor Segmentation and Motion Tracking in 4D CT Images: Toward Enhanced Radiotherapy Precision

02:30pm - 04:00pm PT
Hall F
Screen: 26
POSTER

Presenter(s)

Zhe Ji, MD - Peking University Third Hospital, Beijing, Beijing

Z. Ji1, Y. Jiang1, H. Sun1, B. Qiu1, Y. Chen Sr2, and J. Wang1; 1Department of Radiation Oncology, Peking University Third Hospital, Beijing, China, 2Peking University Third Hospital, Beijing, China

Purpose/Objective(s): Precise tumor motion tracking presents unique challenges in pancreatic cancer radiotherapy due to the complex interplay of respiratory movements and gastrointestinal motility. The pancreas experiences significant displacement from both diaphragmatic motion during breathing cycles and irregular peristaltic movements of adjacent digestive organs, making accurate dose delivery particularly challenging. This study evaluates five state-of-the-art deep learning models for automatic segmentation to enhance real-time tumor motion tracking, specifically focusing on their capability to handle these complex, multisource motion patterns.

Materials/Methods: Five deep learning models—DLU-Net, U-Net++, DeepLabV3+, Spiral-ResUNet, and ConDSeg—were compared for their performance in medical image segmentation. The models were trained and validated on a dataset of 4D CT images from 50 pancreatic cancer patients, with tumor regions annotated across multiple respiratory phases. Segmentation accuracy and motion tracking capabilities were assessed using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and center-of-mass error (COME). Computational efficiency was also evaluated to ensure suitability for real-time applications.
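The three evaluation metrics named above can be sketched in plain NumPy; the function names, the brute-force Hausdorff computation, and the toy 2D masks are illustrative assumptions, not the study's actual implementation (which would operate on 3D masks with CT voxel spacing):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def com_error(a, b, spacing=(1.0, 1.0)):
    """Center-of-mass error (COME): distance between mask centroids
    in physical units, given voxel spacing along each axis."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
    return np.linalg.norm(ca - cb)

def hausdorff(a, b, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance (HD) between foreground voxels,
    computed brute-force over all point pairs (fine for small masks)."""
    pa = np.array(np.nonzero(a)).T * np.asarray(spacing)
    pb = np.array(np.nonzero(b)).T * np.asarray(spacing)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: a 4x4 ground-truth square and a prediction shifted down 1 voxel.
gt = np.zeros((10, 10), bool); gt[2:6, 2:6] = True
pred = np.zeros((10, 10), bool); pred[3:7, 2:6] = True
```

With this 1-voxel shift, the overlap is 12 of 16 voxels in each mask, giving DSC = 0.75, COME = 1.0, and HD = 1.0 (in voxel units); in the study these quantities would be reported in mm via the CT voxel spacing.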

Results: Among the five models, ConDSeg emerged as the best-performing model, achieving a DSC of 0.76 ± 0.02, an HD of 3.0 ± 0.7 mm, and a COME of 1.7 ± 0.4 mm. U-Net achieved a DSC of 0.74 ± 0.03, an HD of 3.2 ± 0.8 mm, and a COME of 1.8 ± 0.5 mm. U-Net++ demonstrated a DSC of 0.69 ± 0.02, with similar HD and COME metrics. DeepLabV3+ excelled in multi-scale feature extraction, achieving a DSC of 0.71 ± 0.04, while Spiral-ResUNet showed competitive performance with a DSC of 0.73 ± 0.03. All models exhibited consistent performance across respiratory phases, with an average tracking error of less than 2 mm, within clinically acceptable limits. Computational efficiency was robust: U-Net and U-Net++ processed frames in 0.8 seconds per frame, while DeepLabV3+, Spiral-ResUNet, and ConDSeg averaged 1.0 seconds per frame, still suitable for real-time applications.

Conclusion: This study demonstrates that deep learning models, particularly ConDSeg, significantly enhance real-time tumor motion tracking for pancreatic cancer radiotherapy. ConDSeg outperformed the other models in segmentation accuracy and motion tracking, achieving the highest DSC and lowest tracking errors. All models exhibited consistent performance across respiratory phases and demonstrated computational efficiency suitable for real-time applications. Future work will focus on clinical validation and integration into radiotherapy workflows.