Main Session
Sep 28
PQA 01 - Radiation and Cancer Physics, Sarcoma and Cutaneous Tumors

2120 - Leader-Guided Deep Learning Framework for Adaptive Target Segmentation in Radiotherapy

02:30pm - 04:00pm PT
Hall F
Screen: 26
POSTER

Presenter(s)

Qingying Wang, MS - University of Texas Southwestern Medical Center, Dallas, TX

M. Kazemimoghadam1, Q. Wang1, M. Chen1, X. Gu2, and W. Lu1; 1Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 2Stanford University Department of Radiation Oncology, Palo Alto, CA

Purpose/Objective(s): This study introduces a novel template-guided deep learning framework for primary gross tumor volume (GTVp) segmentation, addressing the challenges posed by diverse tumor types in radiotherapy. The model employs a "Follow-the-Leader" learning approach, leveraging template guidance to improve segmentation accuracy and adaptability across anatomical and tumor variations, yielding a single universal model capable of handling a wide range of tumor presentations across clinical scenarios.

Materials/Methods: The model, based on a 3D nnUNet architecture, incorporates three input channels (primary CT image, template CT image, and template mask), enabling a "Follow-the-Leader" learning approach that adapts segmentation to the selected template (the leader). The framework was validated on the RADCURE dataset of 3346 head and neck (H&N) cancer patients, spanning 91 distinct categories (Table 1) derived from combinations of patient sex, tumor laterality, main site, subsite, and tumor stage. A representative template was selected from each category, and the model was trained on primary CT images paired with the corresponding templates. Up to 10 cases per category were allocated for training and validation, with additional cases reserved for testing.
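For concreteness, the three-channel input can be assembled by stacking co-registered volumes channels-first, as 3D nnUNet expects. The sketch below is illustrative only; the function and variable names are hypothetical and not taken from the authors' code.

import numpy as np

def build_leader_input(primary_ct, template_ct, template_mask):
    """Stack the primary CT, the template (leader) CT, and the template
    mask into the 3-channel, channels-first input described above."""
    # Assumes all three volumes are co-registered, identically sized
    # (D, H, W) numpy arrays; names are hypothetical.
    assert primary_ct.shape == template_ct.shape == template_mask.shape
    return np.stack([primary_ct, template_ct, template_mask], axis=0)

# Hypothetical usage: x = build_leader_input(ct, tpl_ct, tpl_mask)  # (3, D, H, W)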

Results: Compared with a conventional 3D nnUNet model for CT-based GTVp segmentation, the proposed approach significantly improved the Dice Similarity Coefficient (DSC), Average Surface Distance (ASD), and 95th Percentile Hausdorff Distance (HD95) across all H&N tumor sites. For larynx (269 cases), T3 DSC increased from 0.24 to 0.61, while ASD decreased from 9.00 mm to 2.42 mm and HD95 from 20.47 mm to 6.80 mm. For hypopharynx (14 cases), T2 DSC improved from 0.34 to 0.58, ASD from 9.31 mm to 2.60 mm, and HD95 from 14.42 mm to 8.12 mm. In oropharynx (261 cases), the model outperformed the baseline for lower-stage tumors (T1 and T2) across all categories: for T1, DSC improved from 0.61 to 0.75, ASD from 3.15 mm to 2.50 mm, and HD95 from 10.15 mm to 6.85 mm; for T2, DSC improved from 0.65 to 0.78, ASD from 3.42 mm to 2.64 mm, and HD95 from 9.74 mm to 7.20 mm.
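The reported metrics follow standard definitions. Below is a minimal sketch of one common way to compute them, assuming boolean numpy masks, non-empty volumes, and known voxel spacing; the symmetric ASD/HD95 variants shown are conventional, but the authors' exact implementation is not specified.

import numpy as np
from scipy import ndimage

def _surface_distances(a, b, spacing):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a_surf = a ^ ndimage.binary_erosion(a)  # boundary voxels of a
    b_surf = b ^ ndimage.binary_erosion(b)  # boundary voxels of b
    # Euclidean distance map to b's surface, sampled at a's surface voxels.
    dt_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dt_b[a_surf]

def dsc(a, b):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Average Surface Distance, in the units of `spacing` (e.g., mm)."""
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return d.mean()

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile (symmetric) Hausdorff Distance."""
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return np.percentile(d, 95)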

Conclusion: The novel template-guided deep learning framework effectively adapts to diverse tumor types by tailoring segmentation to selected templates, enhancing delineation accuracy, as demonstrated with a comprehensive H&N cancer dataset.

Abstract 2120 - Table 1: Components of H&N dataset categories used for template-guided segmentation

Gender: Male (M), Female (F)
Tumor laterality: Center (C), Left (L), Right (R)
Tumor main site: Larynx, Oropharynx, Hypopharynx, Nasopharynx, Esophagus
Tumor subsite:
  Larynx: Glottis, Supraglottis
  Oropharynx: Base of tongue, Tonsillar fossa, Tonsil, Soft palate, Lateral wall
  Hypopharynx: Pyriform sinus
  Nasopharynx: Posterior wall, Superior wall
  Esophagus: Cervical esophagus
Tumor stage: T1 (T1a, T1b), T2, T3, T4 (T4a, T4b)
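To illustrate how categories of this kind can be enumerated, the sketch below nests subsites under their main site rather than crossing all components in a full Cartesian product, which is consistent with the 91 categories being fewer than the full cross product. The key format and function names are hypothetical; the authors' exact enumeration, and which combinations actually occur in RADCURE, are not specified here.

SUBSITES = {
    "Larynx": ["Glottis", "Supraglottis"],
    "Oropharynx": ["Base of tongue", "Tonsillar fossa", "Tonsil",
                   "Soft palate", "Lateral wall"],
    "Hypopharynx": ["Pyriform sinus"],
    "Nasopharynx": ["Posterior wall", "Superior wall"],
    "Esophagus": ["Cervical esophagus"],
}

def category_key(sex, laterality, site, subsite, stage):
    """Build a category identifier such as 'M|L|Oropharynx|Tonsil|T2'.
    Only combinations actually present in the dataset would be kept,
    so the category count (91) is smaller than the full cross product."""
    assert subsite in SUBSITES[site]
    return "|".join([sex, laterality, site, subsite, stage])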