Main Session
Sep 28
PQA 01 - Radiation and Cancer Physics, Sarcoma and Cutaneous Tumors

2306 - Prior Knowledge-Guided Deep Learning Segmentation Algorithm for Online Adaptive Radiotherapy in Nasopharyngeal Carcinoma

02:30pm - 04:00pm PT
Hall F
Screen: 29
POSTER

Presenter(s)

Guan-Qun Zhou, MD, PhD - Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, China

G. Q. Zhou1, L. Jia2, H. Li2, Y. Liu2, R. Guo1, X. Yu3, X. Yang1, G. Y. Wang1, and Y. Sun4; 1Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China, 2Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China, 3Sun Yat-sen Memorial Hospital, Guangzhou, China, 4State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, China

Purpose/Objective(s): The clinical implementation of online adaptive radiotherapy (ART) for nasopharyngeal carcinoma (NPC) remains challenging owing to strict time constraints on contour modification. To address this limitation, this study proposes a prior knowledge-guided deep learning segmentation algorithm tailored for NPC online ART.

Materials/Methods: The algorithm integrates anatomical prior knowledge into a U-Net-based network, a design that enhances segmentation accuracy while ensuring computational efficiency. Different prior knowledge was introduced into the network for organs-at-risk (OARs) and for the different targets. A total of 159 patients were collected for network training; each patient had an original planning computed tomography (CT) scan, an ART CT, and the corresponding contours. The training, validation, and test sets comprised 95, 24, and 40 patients, respectively. For OARs, we used a multi-class U-Net (MCU-Net) to reduce operating time: contouring groups were established based on the anatomical relationships between OARs, allowing the model to generate multiple OARs in a single prediction. For retropharyngeal lymph nodes (GTVp) and cervical lymph nodes (GTVn), we used a prior contouring information-guided U-Net (PPIU-Net). The GTVp and GTVn contours from the original plan were copied onto the ART CT after rigid registration, providing positional information for the network; PPIU-Net therefore had two input channels, one for the CT and one for the positional information. For the high-risk clinical target volume (CTV1) and low-risk clinical target volume (CTV2), we used a prior anatomical knowledge-guided U-Net (PAKU-Net), designed to predict CTV1 and CTV2 under the guidance of GTVp and other prior information, including the spatial relationships between the targets and OARs. Details are shown in Table 1. Inference time and the Dice similarity coefficient (DSC) were used for evaluation, with nnU-Net as the baseline for comparison.
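The two-channel input scheme described for PPIU-Net can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the prior contour has already been rigidly registered to the ART CT, and the function name, normalization choice, and array shapes are hypothetical.

```python
import numpy as np

def build_ppiu_input(art_ct, prior_contour_mask):
    """Stack the ART CT and the rigidly registered prior contour
    (copied from the original plan) into a two-channel network input.
    Shapes: two (D, H, W) volumes -> one (2, D, H, W) array."""
    assert art_ct.shape == prior_contour_mask.shape
    # Channel 0: z-score normalized CT intensities (a common, assumed choice).
    ct_norm = (art_ct - art_ct.mean()) / (art_ct.std() + 1e-8)
    # Channel 1: binary prior contour mask providing positional information.
    return np.stack([ct_norm, prior_contour_mask.astype(np.float32)], axis=0)

# Hypothetical ART CT volume and registered prior GTV contour mask
ct = np.random.randn(32, 64, 64).astype(np.float32)
mask = np.zeros_like(ct)
mask[10:20, 20:40, 20:40] = 1.0
x = build_ppiu_input(ct, mask)  # shape (2, 32, 64, 64)
```

The same stacking idea extends to PAKU-Net, where the predicted GTVp would occupy the second channel instead of a copied contour.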

Results: Inference time was shortened from 2 minutes 45 seconds with nnU-Net to approximately 1 minute 30 seconds. For OARs, the DSC improved slightly from 0.90 with nnU-Net to 0.91 with MCU-Net. For targets, the DSC of GTVp, CTV1, CTV2, and GTVn improved to 0.92, 0.91, 0.92, and 0.79 with PPIU-Net and PAKU-Net, compared with 0.75, 0.86, 0.89, and 0.72 for nnU-Net.
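The DSC used throughout these comparisons is 2|A∩B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal implementation (a hypothetical helper for illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy masks: 8 voxels each, overlapping in 4 voxels
a = np.zeros((4, 4), dtype=int); a[:2] = 1
b = np.zeros((4, 4), dtype=int); b[1:3] = 1
print(dice(a, b))  # → 0.5
```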

Conclusion: This comprehensive solution enhances both automation and precision in target/OAR delineation, effectively bridging the technical gap in NPC online ART implementation. The proposed framework demonstrates strong potential for facilitating clinical adoption of real-time adaptive radiotherapy in nasopharyngeal carcinoma management.

Abstract 2306 - Table 1: The details of networks

Network  | Region of interest | Input channel                            | Output channel
nnU-Net  | OAR or target      | CT                                       | Single
MCU-Net  | OARs               | CT                                       | Multi
PPIU-Net | GTVp/GTVn          | CT and contour copied from original plan | Multi
PAKU-Net | CTV1/CTV2          | CT and GTVp                              | Multi