Results 1 - 20 of 34
1.
Med Phys ; 51(6): 4271-4282, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38507259

ABSTRACT

BACKGROUND: In radiotherapy, real-time tumor tracking can verify tumor position during beam delivery, guide the radiation beam to target the tumor, and reduce the chance of a geometric miss. Markerless kV x-ray image-based tumor tracking is challenging due to the low tumor visibility caused by tumor-obscuring structures. Developing a new method to enhance tumor visibility for real-time tumor tracking is essential. PURPOSE: To introduce a novel method for markerless kV image-based tracking of lung tumors via deep learning-based target decomposition. METHODS: We utilized a conditional Generative Adversarial Network (cGAN), known as Pix2Pix, to build a patient-specific model and generate the synthetic decomposed target image (sDTI) to enhance tumor visibility on the real-time kV projection images acquired by the onboard kV imager mounted on modern linear accelerators. We used 4DCT simulation images to generate the digitally reconstructed radiograph (DRR) and DTI image pairs for model training. We augmented the training dataset by randomly shifting the 4DCT in the superior-inferior, anterior-posterior, and left-right directions during the DRR and DTI generation process. We performed real-time 2D tumor tracking via template matching between the DTI generated from the CT simulation and the sDTI generated from the real-time kV projection images. We validated the proposed method using nine patients' datasets with implanted beacons near the tumor. RESULTS: The sDTI effectively improved the image contrast around the lung tumors on the kV projection images for the nine patients. With the beacon motion as ground truth, the tracking errors were on average 0.8 ± 0.7 mm in the superior-inferior (SI) direction and 0.9 ± 0.8 mm in the in-plane left-right (IPLR) direction. The percentage of successful tracking, defined as a tracking error less than 2 mm in the SI direction, was 92.2% on the 4312 tested images. The patient-specific model took approximately 12 h to train.
During testing, it took approximately 35 ms to generate one sDTI and 13 ms to perform the tumor tracking using template matching. CONCLUSIONS: Our method offers a potential solution for near real-time markerless lung tumor tracking, achieving high accuracy at a high tracking rate. Further development of 3D lung tumor tracking is warranted.
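As an illustration of the template-matching step above, a plain-numpy normalized cross-correlation search might look like the following. This is a hypothetical sketch (exhaustive 2D search over small arrays), not the authors' implementation:

```python
# Illustrative sketch of 2D template matching for tumor tracking, assuming
# the template (DTI patch) and search image (sDTI) are 2D numpy arrays.
import numpy as np

def track_template(search_img, template):
    """Return the (row, col) position and score of the best
    normalized-cross-correlation match of `template` in `search_img`."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    H, W = search_img.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            patch = search_img[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch or flat template: undefined correlation
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A production tracker would compute this correlation in the Fourier domain for speed; the brute-force loop above is only meant to make the matching criterion explicit.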


Subjects
Deep Learning; Four-Dimensional Computed Tomography; Image Processing, Computer-Assisted; Lung Neoplasms; Radiotherapy, Image-Guided; Lung Neoplasms/radiotherapy; Lung Neoplasms/diagnostic imaging; Humans; Radiotherapy, Image-Guided/methods; Image Processing, Computer-Assisted/methods; Four-Dimensional Computed Tomography/methods
2.
Phys Med Biol ; 69(4)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38241714

ABSTRACT

Objective. We report on paraspinal motion and the clinical implementation of our proprietary software that leverages Varian's intrafraction motion review (IMR) capability for quantitative tracking of the spine during paraspinal SBRT. The work is based on our prior development and analysis on phantoms. Approach. To address complexities in patient anatomy, digitally reconstructed radiographs (DRRs) that highlight only the spine or hardware were constructed as tracking references. Moreover, a high-pass filter and a first-pass coarse search were implemented to enhance registration accuracy and stability. For evaluation, 84 paraspinal SBRT patients with sites spanning the entire vertebral column were enrolled, with prescriptions ranging from 24 to 40 Gy in one to five fractions. Treatments were planned and delivered with 9 IMRT beams roughly equally distributed posteriorly. IMR was triggered every 200 or 500 MU for each beam. During treatment, the software grabbed the IMR image, registered it with the corresponding DRR, and displayed the motion result in near real-time in auto-pilot mode. Four independent experts completed offline manual registrations as ground truth for tracking accuracy evaluation. Main results. Our software detected ≥1.5 mm and ≥2 mm motions in 17.1% and 6.6% of 1371 patient images, respectively, in either the lateral or longitudinal direction. In the validation set of 637 patient images, 91.9% of the tracking errors compared to manual registration fell within ±0.5 mm in either direction. Given a motion threshold of 2 mm, the software achieved 98.7% specificity and 93.9% sensitivity in deciding whether to interrupt treatment for patient re-setup. Significance. Significant intrafractional motion exists in certain paraspinal SBRT patients, supporting the need for quantitative motion monitoring during treatment. Our improved software achieves high motion tracking accuracy clinically and provides reliable guidance for treatment intervention.
It offers a practical solution to ensure accurate delivery of paraspinal SBRT on a conventional Linac platform.
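The high-pass filtering plus coarse-then-fine translation search described above could be sketched roughly as follows. The box-blur high-pass, wrap-around boundary handling, and search ranges are illustrative assumptions, not the software's actual design:

```python
# Sketch of a two-pass (coarse, then fine) translation search between a DRR
# and an intrafraction image, after high-pass filtering both.
import numpy as np

def high_pass(img, k=3):
    # Subtract a box-blurred copy to suppress low-frequency anatomy.
    # Wrap-around padding keeps the filter shift-invariant in this toy setup.
    pad = np.pad(img, k, mode='wrap')
    blur = np.zeros_like(img)
    H, W = img.shape
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            blur += pad[k + dr:k + dr + H, k + dc:k + dc + W]
    return img - blur / (2 * k + 1) ** 2

def register_translation(fixed, moving, search=6, coarse_step=2):
    """Two-pass search for the (dy, dx) shift minimizing mean squared error."""
    f, m = high_pass(fixed), high_pass(moving)

    def cost(dy, dx):
        shifted = np.roll(np.roll(m, dy, axis=0), dx, axis=1)
        return float(((f - shifted) ** 2).mean())

    # First pass: coarse grid over the full search range.
    coarse = [(dy, dx) for dy in range(-search, search + 1, coarse_step)
              for dx in range(-search, search + 1, coarse_step)]
    dy0, dx0 = min(coarse, key=lambda s: cost(*s))
    # Second pass: refine at single-pixel resolution around the coarse optimum.
    fine = [(dy, dx) for dy in range(dy0 - coarse_step, dy0 + coarse_step + 1)
            for dx in range(dx0 - coarse_step, dx0 + coarse_step + 1)]
    return min(fine, key=lambda s: cost(*s))
```

The two-pass structure is the point here: the coarse grid keeps the search cheap, and the fine pass recovers the remaining sub-grid offset.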


Subjects
Radiosurgery; Humans; Radiosurgery/methods; Software; Motion (Physics); Radiotherapy Planning, Computer-Assisted
3.
Med Phys ; 51(3): 1974-1984, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37708440

ABSTRACT

BACKGROUND: An automated, accurate, and efficient lung four-dimensional computed tomography (4DCT) image registration method is clinically important to quantify respiratory motion for optimal motion management. PURPOSE: The purpose of this work is to develop a weakly supervised deep learning method for 4DCT lung deformable image registration (DIR). METHODS: The landmark-driven cycle network is proposed as a deep learning platform that performs DIR of individual phase datasets in a simulation 4DCT. This proposed network comprises a generator and a discriminator. The generator accepts moving and target CTs as input and outputs the deformation vector fields (DVFs) to match the two CTs. It is optimized during both forward and backward paths to enhance the bi-directionality of DVF generation. Further, the landmarks are used to weakly supervise the generator network. Landmark-driven loss is used to guide the generator's training. The discriminator then judges the realism of the deformed CT to provide extra DVF regularization. RESULTS: We performed four-fold cross-validation on 10 4DCT datasets from the public DIR-Lab dataset and a hold-out test on our clinic dataset, which included 50 4DCT datasets. The DIR-Lab dataset was used to evaluate the performance of the proposed method against other methods in the literature by calculating the DIR-Lab Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE. Bi-directional and landmark-driven losses were shown to be effective for obtaining high registration accuracy. The TRE for the DIR-Lab datasets was 1.20 ± 0.72 mm (mean ± SD), and the mean absolute error (MAE) and structural similarity index (SSIM) for our datasets were 32.1 ± 11.6 HU and 0.979 ± 0.011, respectively.
CONCLUSION: The landmark-driven cycle network has been validated and tested for automatic deformable image registration of patients' lung 4DCTs with results comparable to or better than competing methods.
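The landmark-based TRE reported above can be illustrated with a short sketch. The nearest-neighbor DVF lookup and array layout here are assumptions for illustration, not the authors' code:

```python
# Hypothetical sketch of target registration error (TRE): warp each moving
# landmark by the DVF sampled at its voxel position, then measure the
# distance to the corresponding fixed landmark in millimeters.
import numpy as np

def target_registration_error(fixed_pts, moving_pts, dvf, spacing):
    """fixed_pts/moving_pts: (N, 3) voxel coordinates; dvf: (D, H, W, 3)
    voxel displacements mapping moving -> fixed; spacing: (3,) mm/voxel."""
    errs = []
    for f, m in zip(fixed_pts, moving_pts):
        idx = tuple(np.round(m).astype(int))  # nearest-neighbor DVF lookup
        warped = m + dvf[idx]
        errs.append(np.linalg.norm((warped - f) * spacing))
    return float(np.mean(errs)), float(np.std(errs))
```

A real evaluation would interpolate the DVF at sub-voxel landmark positions; nearest-neighbor sampling keeps the metric's definition visible.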


Subjects
Four-Dimensional Computed Tomography; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Computer Simulation; Motion (Physics); Algorithms
4.
Int J Radiat Oncol Biol Phys ; 119(1): 261-280, 2024 May 01.
Article in English | MEDLINE | ID: mdl-37972715

ABSTRACT

Deep learning neural networks (DLNNs) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based models for auto-segmentation have shown high accuracy in early studies conducted in research settings and controlled environments (single institutions). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Contouring nomenclature and guideline standardization has been a main task undertaken by NRG Oncology. For clinical trial participants, AI auto-segmentation holds the potential to reduce interobserver variations, nomenclature non-compliance, and contouring guideline deviations. Meanwhile, trial reviewers could use AI tools to verify the contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of these commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made on implementing these commercial AI models, along with precautions regarding their challenges and limitations.


Subjects
Deep Learning; Radiotherapy (Specialty); Humans; Artificial Intelligence; Neural Networks, Computer; Benchmarking; Radiotherapy Planning, Computer-Assisted
5.
Phys Med Biol ; 68(23)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37972414

ABSTRACT

The hippocampus plays a crucial role in memory and cognition. Because of the associated toxicity from whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on an accurate segmentation of the small and complexly shaped hippocampus. To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1-weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a cascaded model strategy. The proposed model consists of two major parts: (1) a localization model is used to detect the volume-of-interest (VOI) of the hippocampus; (2) an end-to-end morphological vision transformer network (Franchi et al 2020 Pattern Recognit. 102 107246; Ranem et al 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW) pp 3710-3719) is used to perform substructure segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators. The integration of these morphological operators into the vision transformer increases the accuracy and the ability to separate the hippocampus into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. In five-fold cross-validation, the Dice similarity coefficients were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively.
The mean surface distances (MSDs) were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm for the hippocampus proper and parts of the subiculum, respectively. The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MR images. It may facilitate the current clinical workflow and reduce the physicians' effort.
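The Dice similarity coefficient used above to score each substructure can be sketched in a few lines, assuming segmentations are binary numpy masks:

```python
# Minimal sketch of the Dice similarity coefficient between two binary masks.
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 1.0 means perfect overlap, 0.0 none."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

The metric is twice the intersection over the sum of the two volumes, so it penalizes both over- and under-segmentation symmetrically.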


Subjects
Hippocampus; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Hippocampus/diagnostic imaging; Artificial Intelligence; Image Processing, Computer-Assisted/methods
6.
ArXiv ; 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37396614

ABSTRACT

Background: The hippocampus plays a crucial role in memory and cognition. Because of the associated toxicity from whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on an accurate segmentation of the small and complexly shaped hippocampus. Purpose: To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1-weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a mutually enhanced strategy. Methods: The proposed model consists of two major parts: (1) a localization model is used to detect the volume-of-interest (VOI) of the hippocampus; (2) an end-to-end morphological vision transformer network is used to perform substructure segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators. The integration of these morphological operators into the vision transformer increases the accuracy and the ability to separate the hippocampus into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. The segmentations were evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), volume difference (VD), and center-of-mass distance (COMD); (2) volumetric Pearson correlation analysis.
Results: In five-fold cross-validation, the DSCs were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively. The MSDs were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm for the hippocampus proper and parts of the subiculum, respectively. Conclusions: The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MR images. It may facilitate the current clinical workflow and reduce the physicians' effort.
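One of the surface metrics listed above, the 95th-percentile Hausdorff distance (HD95), might be computed along these lines. The erosion-based surface extraction is an illustrative choice, not necessarily the authors':

```python
# Sketch of HD95 between two binary masks: take each mask's surface as the
# mask voxels minus their erosion, then use the 95th percentile of
# surface-to-surface nearest distances (symmetrized with max).
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    def surface(mask):
        mask = mask.astype(bool)
        return np.argwhere(mask & ~binary_erosion(mask)) * np.asarray(spacing)
    sa, sb = surface(a), surface(b)
    d = cdist(sa, sb)  # pairwise distances between the two surfaces (mm)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Using the 95th percentile instead of the maximum makes the metric robust to a few stray surface voxels, which is why it is preferred over the plain Hausdorff distance for segmentation evaluation.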

7.
Med Phys ; 50(12): 7791-7805, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37399367

ABSTRACT

BACKGROUND: Intrafraction motion monitoring in external beam radiation therapy (EBRT) is usually accomplished by establishing a correlation between the tumor and surrogates such as an external infrared reflector, implanted fiducial markers, or the patient's skin surface. These techniques either have an unstable surrogate-tumor correlation or are invasive. Markerless real-time onboard imaging is a noninvasive alternative that directly images the target motion. However, the low target visibility due to overlapping tissues along the X-ray projection path makes tumor tracking challenging. PURPOSE: To enhance the target visibility in projection images, a patient-specific model was trained to synthesize the Target Specific Digitally Reconstructed Radiograph (TS-DRR). METHODS: Patient-specific models were built using a conditional Generative Adversarial Network (cGAN) to map the onboard projection images to TS-DRRs. The standard Pix2Pix network was adopted as our cGAN model. We synthesized the TS-DRR based on the onboard projection images in phantom and patient studies for spine tumors and lung tumors. Using previously acquired CT images, we generated DRRs and their corresponding TS-DRRs to train the network. For data augmentation, random translations were applied to the CT volume when generating the training images. For the spine, separate models were trained for an anthropomorphic phantom and a patient treated with paraspinal stereotactic body radiation therapy (SBRT). For the lung, separate models were trained for a phantom with a spherical tumor insert and a patient treated with free-breathing SBRT. The models were tested using intrafraction motion review (IMR) images for the spine and CBCT projection images for the lung. The performance of the models was validated using phantom studies with known couch shifts for the spine and known tumor deformation for the lung.
RESULTS: Both the patient and phantom studies showed that the proposed method can effectively enhance the target visibility of the projection images by mapping them into the synthetic TS-DRR (sTS-DRR). For the spine phantom with known shifts of 1 mm, 2 mm, 3 mm, and 4 mm, the absolute mean errors for tumor tracking were 0.11 ± 0.05 mm in the x direction and 0.25 ± 0.08 mm in the y direction. For the lung phantom with known tumor motion of 1.8 mm, 5.8 mm, and 9 mm superiorly, the absolute mean errors for the registration between the sTS-DRR and ground truth were 0.1 ± 0.3 mm in both the x and y directions. Compared to the projection images, the sTS-DRR increased the image correlation with the ground truth by around 83% and the structural similarity index measure with the ground truth by around 75% for the lung phantom. CONCLUSIONS: The sTS-DRR can greatly enhance the target visibility in the onboard projection images for both spine and lung tumors. The proposed method could be used to improve markerless tumor tracking accuracy for EBRT.
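The random-translation augmentation described above can be sketched as follows. The wrap-around shift and parameter names are illustrative assumptions; a real DRR pipeline would re-project the shifted CT rather than roll voxels:

```python
# Toy sketch of random-translation data augmentation: each training sample is
# a copy of the CT volume shifted by a random 3D offset, paired with the
# shift that was applied (so the corresponding DRR/TS-DRR can be regenerated).
import numpy as np

def augment_translations(volume, n_samples, max_shift=5, seed=0):
    """Yield (shifted_volume, (dz, dy, dx)) pairs with integer voxel shifts
    drawn uniformly from [-max_shift, max_shift] along each axis."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        shift = rng.integers(-max_shift, max_shift + 1, size=3)
        shifted = np.roll(volume, shift, axis=(0, 1, 2))
        yield shifted, tuple(int(s) for s in shift)
```

Seeding the generator makes the augmented dataset reproducible across training runs.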


Subjects
Cone-Beam Computed Tomography; Lung Neoplasms; Humans; Cone-Beam Computed Tomography/methods; Motion (Physics); Lung; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Radiography; Phantoms, Imaging
8.
Med Phys ; 50(11): 6978-6989, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37211898

ABSTRACT

BACKGROUND: Independent auditing is a necessary component of a comprehensive quality assurance (QA) program and can also be utilized for continuous quality improvement (QI) in various radiotherapy processes. Two senior physicists at our institution have been performing a time-intensive manual audit of cross-campus treatment plans annually, with the aim of further standardizing our planning procedures, updating policies and guidelines, and providing training opportunities for all staff members. PURPOSE: A knowledge-based automated anomaly-detection algorithm was developed to provide decision support and strengthen our manual retrospective plan auditing process. This standardized, and improved the efficiency of, the assessment of our external beam radiotherapy (EBRT) treatment planning across all eight campuses of our institution. METHODS: A total of 843 external beam radiotherapy plans for 721 lung patients from January 2020 to March 2021 were automatically acquired from our clinical treatment planning and management systems. From each plan, 44 parameters were automatically extracted and pre-processed. A knowledge-based anomaly-detection algorithm, namely "isolation forest" (iForest), was then applied to the plan dataset. An anomaly score was determined for each plan using a recursive partitioning mechanism. The top 20 plans ranked with the highest anomaly scores for each treatment technique (2D/3D/IMRT/VMAT/SBRT), including auto-populated parameters, were used to guide the manual auditing process and validated by two plan auditors. RESULTS: The two auditors verified that 75.6% of the plans with the highest iForest anomaly scores have similar concerning qualities that may lead to actionable recommendations for our planning procedures and staff training materials. The time to audit a chart was approximately 20.8 min on average when done manually and 14.0 min when done with iForest guidance. Approximately 6.8 min were saved per chart with the iForest method.
For our typical internal audit review of 250 charts annually, the total time savings are approximately 30 h per year. CONCLUSION: iForest effectively detects anomalous plans and strengthens our cross-campus manual plan auditing procedure by adding decision support and further improving standardization. Because it is automated, this method is efficient and will be used to establish a standard plan auditing procedure, which could occur more frequently.
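The iForest scoring and top-ranking step described above can be sketched with scikit-learn's IsolationForest. Reducing each plan to a plain numeric feature matrix is an illustrative simplification (the study extracts 44 parameters per plan; three synthetic features are used here):

```python
# Sketch of knowledge-based anomaly ranking with an isolation forest:
# plans with the lowest scores are the most anomalous and are surfaced
# first to guide the manual audit.
import numpy as np
from sklearn.ensemble import IsolationForest

def rank_plans_by_anomaly(plan_features, n_top=20, seed=0):
    """plan_features: (n_plans, n_params) array. Returns the indices of the
    n_top most anomalous plans, most anomalous first."""
    forest = IsolationForest(random_state=seed).fit(plan_features)
    scores = forest.score_samples(plan_features)  # lower = more anomalous
    order = np.argsort(scores)                    # ascending score
    return order[:n_top]
```

The recursive partitioning happens inside the forest: anomalous points are isolated in fewer random splits, which is what the score reflects.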


Subjects
Radiotherapy (Specialty); Radiotherapy, Intensity-Modulated; Humans; Radiotherapy Planning, Computer-Assisted/methods; Retrospective Studies; Automation; Lung; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Dosage
9.
Phys Med Biol ; 68(9)2023 04 13.
Article in English | MEDLINE | ID: mdl-36958049

ABSTRACT

Objective. CBCTs in image-guided radiotherapy provide crucial anatomy information for patient setup and plan evaluation. Longitudinal CBCT image registration could quantify the inter-fractional anatomic changes, e.g. tumor shrinkage, and daily OAR variation throughout the course of treatment. The purpose of this study is to propose an unsupervised deep learning-based CBCT-CBCT deformable image registration which enables quantitative anatomic variation analysis. Approach. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) to predict the coarse- and fine-scale motions, respectively. The network was trained by minimizing the image similarity loss and the deformation vector field (DVF) regularization loss without the supervision of ground truth DVFs. During the inference stage, patches of local DVF were predicted by the trained LocalGAN and fused to form a whole-image DVF. The local whole-image DVF was subsequently combined with the GlobalGAN-generated DVF to obtain the final DVF. The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a hold-out test. Main results. Qualitatively, the registration results show good alignment between the deformed CBCT images and the target CBCT image. Quantitatively, the average target registration error calculated on the fiducial markers and manually identified landmarks was 1.91 ± 1.18 mm. The average mean absolute error and normalized cross-correlation between the deformed CBCT and target CBCT were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively. Significance.
In summary, an unsupervised deep learning-based CBCT-CBCT registration method is proposed, and its feasibility and performance in fractionated image-guided radiotherapy are investigated. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate the analysis and prediction of inter-fractional anatomic changes.
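The combination of the coarse global field and the fine local field, followed by resampling the moving image, might look roughly like this 2D sketch. Additive fusion of the two fields and bilinear resampling are assumptions for illustration, not the paper's exact scheme:

```python
# Sketch of applying a composed DVF: sum the coarse (global) and fine
# (local) displacement fields, then resample the moving image at the
# displaced coordinates with scipy's map_coordinates.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(moving, dvf_global, dvf_local, order=1):
    """moving: (H, W) image; dvf_*: (H, W, 2) voxel displacements (dy, dx).
    Returns the moving image warped by the composed field."""
    dvf = dvf_global + dvf_local  # simple additive fusion (assumption)
    H, W = moving.shape
    gy, gx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    coords = np.stack([gy + dvf[..., 0], gx + dvf[..., 1]])
    return map_coordinates(moving, coords, order=order, mode='nearest')
```

With `order=1` the resampling is bilinear; `mode='nearest'` clamps coordinates that the field pushes outside the image.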


Subjects
Deep Learning; Neoplasms; Radiotherapy, Image-Guided; Spiral Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography/methods; Radiotherapy Planning, Computer-Assisted
10.
IEEE Trans Radiat Plasma Med Sci ; 6(2): 158-181, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35992632

ABSTRACT

Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of its various aspects.

11.
Med Phys ; 49(12): 7545-7554, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35869866

ABSTRACT

PURPOSE: Quality assurance (QA) CT scans are usually acquired during cancer radiotherapy to assess for any anatomical changes, which may cause an unacceptable dose deviation and therefore warrant a replan. Accurate and rapid deformable image registration (DIR) is needed to support contour propagation from the planning CT (pCT) to the QA CT to facilitate dose volume histogram (DVH) review. Further, the generated deformation maps are used to track the anatomical variations throughout the treatment course and calculate the corresponding accumulated dose from one or more treatment plans. METHODS: In this study, we aim to develop a deep learning (DL)-based method for automatic deformable registration to align the pCT and the QA CT. Our proposed method, named the dual-feasible framework, was implemented by a mutual network that functions as both a forward module and a backward module. The mutual network was trained to predict two deformation vector fields (DVFs) simultaneously, which were then used to register the pCT and QA CT in both directions. A novel dual-feasible loss was proposed to train the mutual network. The dual-feasible framework was able to provide additional DVF regularization during network training, which preserves the topology and reduces folding problems. We conducted experiments on 65 head-and-neck cancer patients (228 CTs in total), each with 1 pCT and 2-6 QA CTs. For evaluation, we calculated the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and target registration error (TRE) between the deformed and target images, and the Jacobian determinant of the predicted DVFs. RESULTS: Within the body contour, the mean MAE, PSNR, SSIM, and TRE were 122.7 HU, 21.8 dB, 0.62, and 4.1 mm before registration and 40.6 HU, 30.8 dB, 0.94, and 2.0 mm after registration using the proposed method. These results demonstrate the feasibility and efficacy of our proposed method for pCT and QA CT DIR.
CONCLUSION: In summary, we proposed a DL-based method for automatic DIR to match the pCT to the QA CT. Such DIR method would not only benefit current workflow of evaluating DVHs on QA CTs but may also facilitate studies of treatment response assessment and radiomics that depend heavily on the accurate localization of tissues across longitudinal images.
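The folding check via the Jacobian determinant mentioned above can be sketched in 2D (pure numpy, an illustrative reduction of the 3D case): a determinant ≤ 0 flags pixels where the predicted DVF is not locally invertible.

```python
# Sketch of the Jacobian-determinant folding check for a 2D displacement
# field u = (u_y, u_x): det(I + grad(u)) < 0 means the transform folds
# space locally.
import numpy as np

def jacobian_determinant_2d(dvf):
    """dvf: (H, W, 2) displacement field. Returns the per-pixel determinant
    of I + grad(u), estimated with finite differences."""
    dudy, dudx = np.gradient(dvf[..., 0])  # derivatives of u_y
    dvdy, dvdx = np.gradient(dvf[..., 1])  # derivatives of u_x
    return (1 + dudy) * (1 + dvdx) - dudx * dvdy

def folding_fraction(dvf):
    """Fraction of pixels where the transform is not locally invertible."""
    return float((jacobian_determinant_2d(dvf) <= 0).mean())
```

A well-regularized DVF should have a folding fraction near zero; this is the quantity a topology-preserving loss is pushing down during training.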


Subjects
Algorithms; Head and Neck Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods
12.
Med Phys ; 48(11): 7261-7270, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34480801

ABSTRACT

PURPOSE: High-dose-rate (HDR) prostate brachytherapy involves treatment catheter placement, which is currently empirical and physician dependent. The lack of proper catheter placement guidance during the procedure has left the physicians to rely on a heuristic thinking-while-doing technique, which may cause large catheter placement variation and increased plan quality uncertainty. Therefore, the achievable dose distribution could not be quantified prior to the catheter placement. To overcome this challenge, we proposed a learning-based method to provide HDR catheter placement guidance for prostate cancer patients undergoing HDR brachytherapy. METHODS: The proposed framework consists of deformable registration via registration network (Reg-Net), multi-atlas ranking, and catheter regression. To model the global spatial relationship among multiple organs, binary masks of the prostate and organs-at-risk are transformed into distance maps, which describe the distance of each local voxel to the organ surfaces. For a new patient, the generated distance map is used as fixed image. Reg-Net is utilized to deformably register the distance maps from multi-atlas set to match this patient's distance map and then bring catheter maps from multi-atlas to this patient via spatial transformation. Several criteria, namely prostate volume similarity, multi-organ semantic image similarity, and catheter position criteria (far from the urethra and within the partial prostate), are used for multi-atlas ranking. The top-ranked atlas' deformed catheter positions are selected as the predicted catheter positions for this patient. Finally, catheter regression is used to refine the final catheter positions. A retrospective study on 90 patients with a fivefold cross-validation scheme was used to evaluate the proposed method's feasibility. 
To investigate the impact on plan quality of the predicted catheter pattern, we optimized the source dwell positions and times for both the clinical catheter pattern and the predicted catheter pattern with the same optimization settings. Comparisons of clinically relevant dose volume histogram (DVH) metrics were completed. RESULTS: For all patients, on average, both the clinical plan dose and the predicted plan dose meet the common dose constraints when prostate dose coverage is kept at V100 = 95%. The plans from the predicted catheter pattern have a slightly higher hotspot, with V150 higher by 5.0% and V200 by 2.9% on average. For bladder V75, rectum V75, and urethra V125, the average difference is close to zero, and the range for most patients is within ±1 cc. CONCLUSION: We developed a new catheter placement prediction method for HDR prostate brachytherapy based on a deep learning-based multi-atlas registration algorithm. It has great clinical potential since it can provide catheter location estimation prior to catheter placement, which could reduce the dependence on physicians' experience in catheter implantation and improve the quality of prostate HDR treatment plans. This approach merits further clinical evaluation and validation as a method of quality control for HDR prostate brachytherapy.
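The distance-map representation described above (each voxel storing its distance to the organ surface) can be sketched with SciPy's Euclidean distance transform. The signed convention used here, negative inside the organ and positive outside, is an illustrative assumption:

```python
# Sketch of converting a binary organ mask into a signed Euclidean distance
# map: positive outside the organ, negative inside, zero-crossing at the
# surface. Such maps encode global spatial relationships between organs.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: binary 3D array; spacing: mm per voxel along each axis."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask, sampling=spacing)  # dist to organ
    inside = distance_transform_edt(mask, sampling=spacing)    # dist to background
    return outside - inside
```

Registering distance maps instead of raw masks gives the network a smooth, gradient-rich signal even far from the organ surfaces.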


Subjects
Brachytherapy; Deep Learning; Prostatic Neoplasms; Catheters; Humans; Male; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted; Retrospective Studies
13.
J Appl Clin Med Phys ; 22(8): 16-44, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34231970

ABSTRACT

This paper surveys the data-driven dose prediction methods investigated for knowledge-based planning (KBP) in the last decade. These methods were classified into two major categories, traditional KBP methods and deep learning (DL) methods, according to their techniques of utilizing prior knowledge. Traditional KBP methods include studies that require geometric or anatomical features either to find the best-matched case(s) from a repository of prior treatment plans or to build dose prediction models. DL methods include studies that train neural networks to make dose predictions. A comprehensive review of each category is presented, highlighting key features, methods, and their advancements over the years. We separated the cited works according to the framework and cancer site in each category. Finally, we briefly discuss the performance of traditional KBP methods and DL methods, and then discuss future trends of both types of data-driven KBP methods for dose prediction.


Subjects
Radiotherapy Planning, Computer-Assisted , Radiotherapy, Intensity-Modulated , Humans , Knowledge Bases , Radiotherapy Dosage
14.
Phys Med ; 85: 107-122, 2021 May.
Article in English | MEDLINE | ID: mdl-33992856

ABSTRACT

Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published for segmenting different parts of the body in different medical applications, making it timely to summarize the current state of deep learning in medical image segmentation. In this paper, we provide a comprehensive review focused on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk must be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we list the surveyed works, highlight important contributions, and identify specific challenges. Following the detailed review, we discuss the achievements, shortcomings, and future potential of each category. To enable direct comparison, we list the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.


Subjects
Deep Learning , Head and Neck Neoplasms , Head , Humans , Image Processing, Computer-Assisted , Organs at Risk
15.
Phys Med Biol ; 66(8)2021 04 16.
Article in English | MEDLINE | ID: mdl-33780918

ABSTRACT

The delineation of the prostate and organs-at-risk (OARs) is fundamental to prostate radiation treatment planning but is currently labor-intensive and observer-dependent. We aimed to develop an automated computed tomography (CT)-based multi-organ (bladder, prostate, rectum, and left and right femoral heads) segmentation method for prostate radiation therapy treatment planning. The proposed method uses synthetic MRIs (sMRIs) to offer superior soft-tissue information for male pelvic CT images. Cycle-consistent adversarial networks (CycleGAN) were used to generate the CT-based sMRIs. Dual pyramid networks (DPNs) extracted features from both CTs and sMRIs. A deep attention strategy was integrated into the DPNs to select the most relevant features from both CTs and sMRIs for identifying organ boundaries. The CT-based sMRI generated by our previously trained CycleGAN and the corresponding CT images were input to the proposed DPNs to provide complementary information for pelvic multi-organ segmentation. The proposed method was trained and evaluated using datasets from 140 patients with prostate cancer and was then compared against state-of-the-art methods. The Dice similarity coefficients and mean surface distances between our results and ground truth were 0.95 ± 0.05, 1.16 ± 0.70 mm; 0.88 ± 0.08, 1.64 ± 1.26 mm; 0.90 ± 0.04, 1.27 ± 0.48 mm; 0.95 ± 0.04, 1.08 ± 1.29 mm; and 0.95 ± 0.04, 1.11 ± 1.49 mm for the bladder, prostate, rectum, and left and right femoral heads, respectively. Mean center-of-mass distances were within 3 mm for all organs. Our method performed significantly better than competing methods on most evaluation metrics. We demonstrated the feasibility of sMRI-aided DPNs for multi-organ segmentation on pelvic CT images, and their superiority over other networks. The proposed method could be used in routine prostate cancer radiotherapy treatment planning to rapidly segment the prostate and standard OARs.
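The Dice similarity coefficient reported above (and in several abstracts below) is a simple overlap measure between a predicted mask and the ground-truth contour. A minimal numpy sketch, illustrative rather than the paper's implementation (the convention of returning 1.0 for two empty masks is an assumption):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4-voxel example: overlap of 2 voxels out of 3 + 3.
pred = np.array([1, 1, 1, 0])
truth = np.array([0, 1, 1, 1])
d = dice(pred, truth)  # 2*2 / (3+3) ≈ 0.667
```

In practice the same formula is applied slice-by-slice or to full 3D masks; mean surface distance and Hausdorff distance complement it by measuring boundary disagreement.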


Subjects
Image Processing, Computer-Assisted , Pelvis , Humans , Magnetic Resonance Imaging , Male , Organs at Risk , Pelvis/diagnostic imaging , Tomography, X-Ray Computed
16.
J Appl Clin Med Phys ; 22(1): 11-36, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33305538

ABSTRACT

This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, with related clinical applications, for representative studies. The challenges identified across the reviewed studies are then summarized and discussed.


Subjects
Deep Learning , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted , Radiography , Research Design
17.
Med Phys ; 48(1): 253-263, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33164219

ABSTRACT

BACKGROUND AND PURPOSE: Radiotherapeutic dose escalation to dominant intraprostatic lesions (DILs) in prostate cancer could potentially improve tumor control. The purpose of this study was to develop a method to accurately register multiparametric magnetic resonance imaging (MRI) with CBCT images for improved DIL delineation, treatment planning, and dose monitoring in prostate radiotherapy. METHODS AND MATERIALS: We proposed a novel registration framework that considers biomechanical constraints when deforming the MR to the CBCT. The framework consists of two segmentation convolutional neural networks (CNNs) for MR and CBCT prostate segmentation, and a three-dimensional (3D) point cloud (PC) matching network. Image intensity-based rigid registration was first performed to initialize the alignment between the MR and CBCT prostates. The aligned prostates were then meshed into tetrahedral elements to generate volumetric PC representations of the prostate shapes. The 3D PC matching network was developed to predict a PC motion vector field that deforms the MRI prostate PC to match the CBCT prostate PC. To regularize the network's motion prediction with biomechanical constraints, motion fields generated by finite element (FE) modeling were used to train the network. MRI and CBCT images of 50 patients with intraprostatic fiducial markers were used in this study. Registration results were evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and target registration error (TRE). In addition to spatial registration accuracy, Jacobian determinants and strain tensors were calculated to assess the physical fidelity of the deformation field. RESULTS: Our method achieved a mean ± standard deviation of 0.93 ± 0.01, 1.66 ± 0.10 mm, and 2.68 ± 1.91 mm for DSC, MSD, and TRE, respectively.
The mean TRE of the proposed method was reduced by 29.1%, 14.3%, and 11.6% as compared to image intensity-based rigid registration, coherent point drifting (CPD) nonrigid surface registration, and modality-independent neighborhood descriptor (MIND) registration, respectively. CONCLUSION: We developed a new framework to accurately register the prostate on MRI to CBCT images for external beam radiotherapy. The proposed method could be used to aid DIL delineation on CBCT, treatment planning, dose escalation to DIL, and dose monitoring.
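With implanted fiducial markers as landmarks, the target registration error above is simply the residual distance between corresponding points after registration. A minimal sketch, assuming landmark correspondences are already known (the coordinates below are invented):

```python
import numpy as np

def target_registration_error(moved_points, fixed_points):
    """Mean Euclidean distance (same units as input, e.g. mm)
    between corresponding landmarks after registration.

    Both inputs: (N, 3) arrays of landmark coordinates."""
    diffs = np.asarray(moved_points) - np.asarray(fixed_points)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Two fiducials: one misses its true position by 5 mm, one is exact.
moved = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
fixed = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]])
tre = target_registration_error(moved, fixed)  # (5.0 + 0.0) / 2 = 2.5
```

Unlike Dice or surface distance, TRE measures internal point-to-point accuracy, which is why fiducial markers are needed as ground truth.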


Subjects
Deep Learning , Prostatic Neoplasms , Spiral Cone-Beam Computed Tomography , Algorithms , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/radiotherapy
18.
Med Image Anal ; 67: 101845, 2021 01.
Article in English | MEDLINE | ID: mdl-33129147

ABSTRACT

A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraints when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). Jacobian determinants and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. The mean and standard deviation were 0.94 ± 0.02, 0.90 ± 0.23 mm, 2.96 ± 1.00 mm, and 1.57 ± 0.77 mm for DSC, MSD, HD, and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrated that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
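Both registration papers above use the Jacobian determinant of the predicted deformation field to check physical fidelity: values near 1 indicate local volume preservation, and values of zero or below flag folding. A numpy sketch under the assumptions of a dense displacement field and unit voxel spacing (illustrative, not the authors' code):

```python
import numpy as np

def jacobian_determinant(disp):
    """Per-voxel Jacobian determinant of a dense 3D displacement
    field `disp` with shape (3, X, Y, Z), assuming unit voxel spacing.

    The Jacobian of the mapping x -> x + u(x) is I + du_i/dx_j;
    det <= 0 marks a locally folded (non-physical) deformation."""
    grads = [np.gradient(disp[i]) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Zero displacement is the identity mapping: determinant 1 everywhere.
det_id = jacobian_determinant(np.zeros((3, 4, 4, 4)))
```

With anisotropic voxels, the spacing would be passed to `np.gradient` so the derivatives are taken in physical units.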


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Prostatic Neoplasms/diagnostic imaging , Ultrasonography
19.
Med Phys ; 47(12): 6343-6354, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33053202

ABSTRACT

PURPOSE: Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring multiple-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated to be an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS: A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator takes an image and its modality label as inputs and learns to synthesize the image in the target modality, while the discriminator is trained to distinguish between real and synthesized images and to classify them by modality. The network was trained and tested using multimodal brain MRI with four different contrasts: T1-weighted (T1), contrast-enhanced T1-weighted (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR). Quantitative assessments of the proposed method were made by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS: The model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-contrast MRI scans. After training, each of T1, T1c, T2, and FLAIR was used as the single input modality to generate the remaining modalities. The proposed method shows high accuracy and robustness for image synthesis with any available MRI modality as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and FLAIR images are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006; the PSNRs are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358. CONCLUSIONS: We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method accurately synthesizes multimodal MR images from a single MR image.
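Two of the reported image-quality metrics are easy to sketch. Note that NMAE normalizations vary between papers, so the range-based convention below is an assumption, as are the toy arrays:

```python
import numpy as np

def nmae(ref, syn):
    """Mean absolute error normalized by the reference intensity range
    (one common convention; a paper's exact normalization may differ)."""
    ref = np.asarray(ref, dtype=float)
    syn = np.asarray(syn, dtype=float)
    return np.mean(np.abs(ref - syn)) / (ref.max() - ref.min())

def psnr(ref, syn, peak=None):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    ref = np.asarray(ref, dtype=float)
    syn = np.asarray(syn, dtype=float)
    if peak is None:
        peak = ref.max()  # assumption: use the reference maximum as peak
    mse = np.mean((ref - syn) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 2-voxel "images": each voxel off by 10 on a 0-100 scale.
ref = np.array([0.0, 100.0])
syn = np.array([10.0, 90.0])
e = nmae(ref, syn)   # MAE 10 over range 100 -> 0.1
p = psnr(ref, syn)   # MSE 100, peak 100 -> 20 dB
```

SSIM, VIF, and NIQE involve local statistics and perceptual models, so in practice they come from image-quality libraries rather than one-liners.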


Subjects
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Signal-To-Noise Ratio
20.
Med Phys ; 47(11): 5723-5730, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32969050

ABSTRACT

PURPOSE: Body composition is known to be associated with many diseases, including diabetes, cancer, and cardiovascular disease. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments related to body composition analysis: subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments (the ventral cavity, lung, and bones) were also segmented during the process to assist segmentation of the major compartments. METHODS: A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) scan using the CNN model, and then further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. Segmenting the ventral cavity first is important because it allows accurate separation of compartments with similar Hounsfield units (HU) inside and outside the cavity. RESULTS: The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. The Dice score (mean ± standard deviation) for ventral cavity segmentation was 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets. CONCLUSION: A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
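Hysteresis thresholding, used in the tissue-decomposition step above, keeps weakly thresholded voxels only when they connect to strongly thresholded seeds, which suppresses isolated noise while preserving connected tissue. A self-contained 2D numpy sketch; the thresholds and 4-connectivity are illustrative assumptions, not the paper's actual HU ranges:

```python
import numpy as np

def _dilate4(mask):
    """4-connected binary dilation via array shifts (no SciPy needed)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def hysteresis_threshold(img, low, high):
    """Pixels >= high seed the segmentation; pixels >= low are kept
    only if 4-connected to a seed. Returns a boolean mask."""
    weak = img >= low
    prev = img >= high  # seeds
    while True:
        grown = _dilate4(prev) & weak
        grown |= prev
        if np.array_equal(grown, prev):
            return grown
        prev = grown

# Toy image: the isolated 5 at lower-left is not connected to the
# strong seed (9), so it is rejected despite passing the low threshold.
img = np.array([[0, 5, 5, 0],
                [0, 0, 9, 0],
                [5, 0, 0, 0]])
mask = hysteresis_threshold(img, low=4, high=8)
```

Production pipelines typically use an optimized equivalent (e.g. `skimage.filters.apply_hysteresis_threshold`) and extend connectivity to 3D.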


Subjects
Neural Networks, Computer , Tomography, X-Ray Computed , Body Composition , Humans , Image Processing, Computer-Assisted , Torso