A tandem reinforcement learning framework for localized prostate cancer treatment planning and machine parameter optimization.
Volumetric modulated arc therapy (VMAT) machine parameter optimization (MPO) is a complex, high-dimensional problem typically solved with inverse planning approaches that are both time-consuming and computationally expensive. While machine learning techniques have been explored to automate this process, they often supplement rather than replace conventional optimizers and are fundamentally limited by the quality and diversity of their training data. Reinforcement learning (RL) offers a promising alternative: by finding optimal strategies through trial and error to maximize a tailored reward function, it can potentially discover novel solutions rather than merely mimicking features present in existing plans.
The purpose of this study was to develop and validate a deep reinforcement learning-based VMAT MPO algorithm capable of automatically generating clinically comparable treatment plans for prostate cancer that meet machine constraints, entirely independent of a commercial treatment planning system (TPS) optimizer.
A dataset comprising 100 prostate cancer patients planned using the criteria of the PACE-B SBRT arm served as the basis for network training, with a 70-10-20 training/validation/testing split. An RL framework using the Proximal Policy Optimization (PPO) algorithm was developed to train two tandem convolutional neural networks that sequentially optimize multi-leaf collimator (MLC) positions and monitor units (MUs), taking the current dose, contoured structure masks, and current machine parameters as inputs. Training was designed to predict MLC positions and MUs that maximize a dose-volume histogram (DVH)-based reward function tailored to prioritize meeting clinical objectives. The fully trained networks were executed on a test set of 20 patients, and the resulting plans were compared to reference plans optimized with a commercial TPS.
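For illustration, a minimal sketch of how such a tandem policy setup might look is given below (PyTorch). This is not the authors' implementation: the channel counts, grid size, leaf-pair count, network depth, prescription dose, and reward weights are illustrative assumptions; only the overall structure (two CNN policies acting sequentially on a dose/mask/machine-parameter state, guided by a DVH-based reward) follows the description in this abstract.

```python
# Illustrative sketch only; sizes, weights, and names are assumptions, not study values.
import torch
import torch.nn as nn

N_LEAF_PAIRS = 60      # assumed MLC leaf-pair count (machine dependent)
STATE_CHANNELS = 6     # e.g., current dose + PTV/bladder/rectum masks + MLC/MU maps
GRID = 64              # assumed in-plane resolution of the state tensors


class PolicyCNN(nn.Module):
    """CNN backbone producing a Gaussian policy over a continuous action, plus a critic head for PPO."""

    def __init__(self, action_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(STATE_CHANNELS, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * (GRID // 4) ** 2
        self.mean_head = nn.Linear(feat, action_dim)    # action mean
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.value_head = nn.Linear(feat, 1)            # state-value estimate for PPO

    def forward(self, state):
        h = self.backbone(state)
        dist = torch.distributions.Normal(self.mean_head(h), self.log_std.exp())
        return dist, self.value_head(h)


# Tandem networks: one proposes MLC leaf positions, the other the MU weight,
# applied sequentially at each control point.
mlc_policy = PolicyCNN(action_dim=2 * N_LEAF_PAIRS)    # left/right position per leaf pair
mu_policy = PolicyCNN(action_dim=1)                     # monitor units for the control point


def dvh_reward(dose, ptv_mask, oar_masks, rx_dose=36.25):
    """Illustrative DVH-based reward: reward PTV coverage, penalize mean OAR dose.

    The study's reward is tailored to the PACE-B clinical objectives; the terms
    and the 0.1 weight here are placeholders.
    """
    ptv_dose = dose[ptv_mask > 0]
    coverage = (ptv_dose >= 0.95 * rx_dose).float().mean()        # approximate PTV V95%
    oar_penalty = sum(dose[m > 0].mean() / rx_dose for m in oar_masks)
    return coverage - 0.1 * oar_penalty


# One tandem action step on a dummy state (batch of 1):
state = torch.zeros(1, STATE_CHANNELS, GRID, GRID)
mlc_dist, _ = mlc_policy(state)
mlc_action = mlc_dist.sample()      # proposed leaf positions
mu_dist, _ = mu_policy(state)       # in practice the state would first be updated with the new MLC shape
mu_action = mu_dist.sample()        # proposed MU
```

In a full PPO loop, the sampled actions would update the machine parameters, the dose would be recomputed, and the DVH-based reward would drive the clipped policy-gradient update of both networks.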
The RL algorithm generated plans in an average of 6.3 ± 4.7 s. Compared to the reference plans, the RL-generated plans demonstrated improved sparing for both the bladder and rectum across their respective dosimetric endpoints. When normalized to 95% coverage, the RL-generated plans showed a statistically significant increase in the PTV $D_{2\%}$, while achieving a significantly reduced $D_{\mathrm{mean}}$ for the rectum. All RL plans satisfied all clinical objectives used to optimize the reference plans.
We successfully developed and validated a deep RL framework for VMAT MPO. The algorithm rapidly generates VMAT prostate cancer treatment plans that meet clinical constraints and are dosimetrically comparable to manually optimized plans without the use of a commercial TPS optimizer. This work demonstrates the feasibility of RL as a tool to fully automate the VMAT planning process, offering the potential to decrease planning times while maintaining plan quality.