Results 1 - 11 of 11
1.
Med Phys ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896829

ABSTRACT

BACKGROUND: Head and neck (HN) gross tumor volume (GTV) auto-segmentation is challenging due to the morphological complexity and low image contrast of targets. Multi-modality images, including computed tomography (CT) and positron emission tomography (PET), are used in routine clinical practice to assist radiation oncologists in accurate GTV delineation. However, the availability of PET imaging may not always be guaranteed. PURPOSE: To develop a deep learning segmentation framework for automated GTV delineation of HN cancers using a combination of PET/CT images, while addressing the challenge of missing PET data. METHODS: Two datasets were included in this study: Dataset I: 524 (training) and 359 (testing) oropharyngeal cancer patients from different institutions with their PET/CT pairs provided by the HECKTOR Challenge; Dataset II: 90 HN patients (testing) from a local institution with their planning CT and PET/CT pairs. To handle potentially missing PET images, a model training strategy named the "Blank Channel" method was implemented. To simulate the absence of a PET image, a blank array with the same dimensions as the CT image was generated to meet the dual-channel input requirement of the deep learning model. During training, the model was randomly presented with either a real PET/CT pair or a blank/CT pair. This allowed the model to learn the relationship between the CT image and the corresponding GTV delineation based on the available modalities. As a result, our model can handle flexible inputs during prediction, making it suitable for cases where PET images are missing. To evaluate the performance of our proposed model, we trained it using the training patients from Dataset I and tested it with Dataset II. We compared our model (Model 1) with two other models that were trained for specific modality segmentations: Model 2, trained with only CT images, and Model 3, trained with real PET/CT pairs. Model performance was evaluated using quantitative metrics, including the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95). In addition, we evaluated Model 1 and Model 3 using the 359 test cases in Dataset I. RESULTS: Our proposed model (Model 1) achieved promising results for GTV auto-segmentation using PET/CT images, with the flexibility of handling missing PET images. Specifically, when assessed with only CT images in Dataset II, Model 1 achieved a DSC of 0.56 ± 0.16, an MSD of 3.4 ± 2.1 mm, and an HD95 of 13.9 ± 7.6 mm. When the PET images were included, performance improved to a DSC of 0.62 ± 0.14, an MSD of 2.8 ± 1.7 mm, and an HD95 of 10.5 ± 6.5 mm. These results are comparable to those achieved by Model 2 and Model 3, illustrating Model 1's effectiveness in utilizing flexible input modalities. Further analysis using the test dataset from Dataset I showed that Model 1 achieved an average DSC of 0.77, surpassing the overall average DSC of 0.72 among all participants in the HECKTOR Challenge. CONCLUSIONS: We successfully refined a multi-modal segmentation tool for accurate GTV delineation for HN cancer. Our method addresses the issue of missing PET images by allowing flexible data input, thereby providing a practical solution for clinical settings where access to PET imaging is limited.
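The "Blank Channel" idea above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and the replacement probability are assumptions:

```python
import numpy as np

def make_dual_channel_input(ct, pet=None, p_blank=0.5, rng=None):
    """Stack CT with PET (or a zero-filled 'blank' array) into a
    2-channel input. During training, a real PET is randomly replaced
    by a blank channel with probability p_blank, so the model learns
    to segment from CT alone as well as from PET/CT pairs."""
    rng = rng or np.random.default_rng()
    if pet is None or (p_blank > 0 and rng.random() < p_blank):
        pet_channel = np.zeros_like(ct)  # simulate a missing PET scan
    else:
        pet_channel = pet
    return np.stack([ct, pet_channel], axis=0)
```

At prediction time the same function covers both cases: pass the real PET when it exists, or `pet=None` when it does not.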

2.
Sci Rep ; 14(1): 4678, 2024 02 26.
Article in English | MEDLINE | ID: mdl-38409252

ABSTRACT

Manual delineation of liver segments on computed tomography (CT) images for primary/secondary liver cancer (LC) patients is time-intensive and prone to inter-/intra-observer variability. Therefore, we developed a deep-learning-based model to auto-contour liver segments and the spleen on contrast-enhanced CT (CECT) images. We trained two models, a 3D patch-based attention U-Net ([Formula: see text]) and the 3D full-resolution nnU-Net ([Formula: see text]), to determine the best architecture ([Formula: see text]). BA was used with vessels ([Formula: see text]) and spleen ([Formula: see text]) to assess the impact on segment contouring. Models were trained, validated, and tested on 160 ([Formula: see text]), 40 ([Formula: see text]), 33 ([Formula: see text]), 25 (CCH), and 20 (CPVE) CECT scans of LC patients. [Formula: see text] outperformed [Formula: see text] across all segments, with median differences in Dice similarity coefficients (DSC) ranging from 0.03 to 0.05 (p < 0.05). [Formula: see text] and [Formula: see text] were not statistically different (p > 0.05); however, both were slightly better than [Formula: see text], by up to 0.02 in DSC. The final model, [Formula: see text], showed mean DSCs of 0.89, 0.82, 0.88, 0.87, 0.96, and 0.95 for segments 1, 2, 3, 4, 5-8, and the spleen, respectively, on the entire test sets. Qualitatively, more than 85% of cases showed a Likert score ≥ 3 on the test sets. Our final model provides clinically acceptable contours of liver segments and the spleen that are usable in treatment planning.
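The Dice similarity coefficient used to compare these models has a standard definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (illustrative, not the paper's implementation):

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A∩B| / (|A| + |B|), in [0, 1]."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```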


Subject(s)
Deep Learning; Liver Neoplasms; Humans; Spleen/diagnostic imaging; Tomography, X-Ray Computed/methods; Liver Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
3.
Radiother Oncol ; 191: 110061, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38122850

ABSTRACT

PURPOSE: Accurate and comprehensive segmentation of cardiac substructures is crucial for minimizing the risk of radiation-induced heart disease in lung cancer radiotherapy. We sought to develop and validate deep learning-based auto-segmentation models for cardiac substructures. MATERIALS AND METHODS: Nineteen cardiac substructures (whole heart, 4 heart chambers, 6 great vessels, 4 valves, and 4 coronary arteries) in 100 patients treated for non-small cell lung cancer were manually delineated by two radiation oncologists. The valves and coronary arteries were delineated as planning risk volumes. An nnU-Net auto-segmentation model was trained, validated, and tested on this dataset with a split ratio of 75:5:20. The auto-segmented contours were evaluated by comparing them with manually drawn contours in terms of Dice similarity coefficient (DSC) and dose metrics extracted from clinical plans. An independent dataset of 42 patients was used for subjective evaluation of the auto-segmentation model by 4 physicians. RESULTS: The average DSCs were 0.95 (± 0.01) for the whole heart, 0.91 (± 0.02) for the 4 chambers, 0.86 (± 0.09) for the 6 great vessels, 0.81 (± 0.09) for the 4 valves, and 0.60 (± 0.14) for the 4 coronary arteries. The average absolute errors in mean/max doses to all substructures were 1.04 (± 1.99) Gy and 2.20 (± 4.37) Gy. The subjective evaluation revealed that 94% of the auto-segmented contours were clinically acceptable. CONCLUSION: We demonstrated the effectiveness of our nnU-Net model for delineating cardiac substructures, including coronary arteries. Our results indicate that this model has promise for studies regarding radiation dose to cardiac substructures.
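The dose-metric comparison above reduces to absolute errors in mean and maximum dose per structure. A hedged sketch of that computation over per-voxel dose arrays (illustrative only; the study extracted these metrics from clinical plans):

```python
import numpy as np

def dose_metric_errors(auto_dose, manual_dose):
    """Absolute errors (Gy) in mean and max dose for one structure,
    comparing dose sampled within the auto vs. manual contour."""
    auto_dose = np.asarray(auto_dose, float)
    manual_dose = np.asarray(manual_dose, float)
    return (abs(auto_dose.mean() - manual_dose.mean()),
            abs(auto_dose.max() - manual_dose.max()))
```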


Subject(s)
Non-Small-Cell Lung Carcinoma; Deep Learning; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Non-Small-Cell Lung Carcinoma/diagnostic imaging; Non-Small-Cell Lung Carcinoma/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Heart/diagnostic imaging; Organs at Risk
4.
Sci Rep ; 13(1): 21797, 2023 12 09.
Article in English | MEDLINE | ID: mdl-38066074

ABSTRACT

Planning for palliative radiotherapy is performed without the advantage of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were separately trained: (1) heuristically cropped, non-contrast images with a single GTV channel, (2) cropping around a manually placed point in the tumor center for non-contrast images with a single GTV channel, (3) contrast-enhanced images with a single GTV channel, (4) contrast-enhanced images with separate primary and nodal GTV channels, and (5) contrast-enhanced images along with synthetic MR images with separate primary and nodal GTV channels. The median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and 95th-percentile Hausdorff distance from 14.7 to 19.7 mm across the five approaches. Only surface Dice exhibited a statistically significant difference across these five approaches using a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
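Approach (2) above crops a fixed window around a manually placed tumor-center point. One plausible way to implement such a crop, clamping the window so it stays inside the volume (function name and sizes are assumptions, not the paper's code):

```python
import numpy as np

def crop_around_point(volume, center, size):
    """Crop a fixed-size patch centered on a user-supplied voxel,
    shifting the window inward near the volume boundary so the
    output always has the requested shape."""
    starts = []
    for c, s, dim in zip(center, size, volume.shape):
        # clamp the start index to [0, dim - s]
        starts.append(min(max(c - s // 2, 0), dim - s))
    sl = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return volume[sl]
```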


Subject(s)
Head and Neck Neoplasms; Radiotherapy Planning, Computer-Assisted; Humans; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Palliative Care; Positron-Emission Tomography/methods; Radiotherapy Planning, Computer-Assisted/methods; Retrospective Studies; Tomography, X-Ray Computed/methods; Multicenter Studies as Topic
5.
J Imaging ; 9(11)2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37998092

ABSTRACT

In this study, we aimed to enhance the contouring accuracy of cardiac pacemakers by improving their visualization using deep learning models to predict MV CBCT images based on kV CT or CBCT images. Ten pacemakers and four thorax phantoms were included, creating a total of 35 combinations. Each combination was imaged on a Varian Halcyon (kV/MV CBCT images) and Siemens SOMATOM CT scanner (kV CT images). Two generative adversarial network (GAN)-based models, cycleGAN and conditional GAN (cGAN), were trained to generate synthetic MV (sMV) CBCT images from kV CT/CBCT images using twenty-eight datasets (80%). The pacemakers in the sMV CBCT images and original MV CBCT images were manually delineated and reviewed by three users. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to compare contour accuracy. Visual inspection showed the improved visualization of pacemakers on sMV CBCT images compared to original kV CT/CBCT images. Moreover, cGAN demonstrated superior performance in enhancing pacemaker visualization compared to cycleGAN. The mean DSC, HD95, and MSD for contours on sMV CBCT images generated from kV CT/CBCT images were 0.91 ± 0.02/0.92 ± 0.01, 1.38 ± 0.31 mm/1.18 ± 0.20 mm, and 0.42 ± 0.07 mm/0.36 ± 0.06 mm using the cGAN model. Deep learning-based methods, specifically cycleGAN and cGAN, can effectively enhance the visualization of pacemakers in thorax kV CT/CBCT images, therefore improving the contouring precision of these devices.
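HD95, reported here alongside DSC and MSD, is the 95th percentile of surface distances rather than the maximum, which makes it robust to a few outlier points. A brute-force sketch over small point sets (illustrative only; practical implementations use distance transforms or spatial indexing):

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two
    point sets (e.g. contour surface points)."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # pairwise Euclidean distances between the two point clouds
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in A to its nearest point in B
    d_ba = d.min(axis=0)  # and vice versa
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```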

6.
Comput Med Imaging Graph ; 108: 102286, 2023 09.
Article in English | MEDLINE | ID: mdl-37625307

ABSTRACT

Deformable image registration (DIR) between daily and reference images is fundamentally important for adaptive radiotherapy. In the last decade, deep learning-based image registration methods have been developed with faster computation times and improved robustness compared to traditional methods. However, registration performance is often degraded in extra-cranial sites with large volumes containing multiple anatomic regions, such as the computed tomography (CT)/magnetic resonance (MR) images used in head and neck (HN) radiotherapy. In this study, we developed a hierarchical DIR framework, the Patch-based Registration Network (Patch-RegNet), to improve the accuracy and speed of CT-MR and MR-MR registration for head-and-neck MR-Linac treatments. Patch-RegNet includes three steps: a whole-volume global registration, a patch-based local registration, and a patch-based deformable registration. Following the whole-volume rigid registration, the input images were divided into overlapping patches. A patch-based rigid registration was then applied to achieve accurate local alignment for the subsequent DIR. We developed a ViT-Morph model, a combination of a convolutional neural network (CNN) and the Vision Transformer (ViT), for the patch-based DIR. A modality-independent neighborhood descriptor was adopted as the similarity metric in our model to account for both inter-modality and intra-modality registration. The CT-MR and MR-MR DIR models were trained with 242 CT-MR and 213 MR-MR image pairs from 36 patients, respectively, and both were tested with 24 image pairs (CT-MR and MR-MR) from 6 other patients. Registration performance was evaluated on 7 manually contoured organs (brainstem, spinal cord, mandible, left/right parotids, left/right submandibular glands) by comparison with the traditional registration methods in the Monaco treatment planning system and the popular deep learning-based DIR framework VoxelMorph. Evaluation results show that our method outperformed VoxelMorph by 6% for CT-MR registration and 4% for MR-MR registration based on DSC measurements. Our hierarchical registration framework has been demonstrated to achieve significantly improved DIR accuracy for both CT-MR and MR-MR registration in head-and-neck MR-guided adaptive radiotherapy.
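The patch-based steps above start by dividing the volume into overlapping patches. A minimal sketch of that tiling, assuming a stride smaller than the patch size produces the overlap (names and a 3D layout are assumptions):

```python
import numpy as np

def overlapping_patches(volume, patch, stride):
    """Tile a 3D volume into (possibly overlapping) patches.
    With stride < patch size, adjacent patches overlap, which lets
    patch-wise registration results be blended at the seams."""
    out = []
    for z in range(0, volume.shape[0] - patch[0] + 1, stride[0]):
        for y in range(0, volume.shape[1] - patch[1] + 1, stride[1]):
            for x in range(0, volume.shape[2] - patch[2] + 1, stride[2]):
                out.append(volume[z:z + patch[0],
                                  y:y + patch[1],
                                  x:x + patch[2]])
    return out
```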


Subject(s)
Brain Stem; Multimodal Imaging; Humans; Neural Networks, Computer
7.
Diagnostics (Basel) ; 13(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36832155

ABSTRACT

Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining clinical acceptability of new autocontouring and planning tools.

8.
Adv Radiat Oncol ; 8(4): 101164, 2023.
Article in English | MEDLINE | ID: mdl-36798731

ABSTRACT

Purpose: To determine the dosimetric limitations of daily online adaptive pancreas stereotactic body radiation treatment by using an automated dose escalation approach. Methods and Materials: We collected 108 planning and daily computed tomography (CT) scans from 18 patients (18 patients × 6 CT scans) who received 5-fraction pancreas stereotactic body radiation treatment at MD Anderson Cancer Center. Dose metrics from the original non-dose-escalated clinical plan (non-DE), the dose-escalated plan created on the original planning CT (DE-ORI), and the dose-escalated plan created on daily adaptive radiation therapy CT (DE-ART) were analyzed. We developed a dose-escalation planning algorithm within the radiation treatment planning system to automate the dose-escalation planning process for efficiency and consistency. In this algorithm, the prescription dose of the dose-escalation plan was escalated before violating any organ-at-risk (OAR) dose constraint. Dose metrics for 3 targets (gross target volume [GTV], tumor vessel interface [TVI], and dose-escalated planning target volume [DE-PTV]) and 9 OARs (duodenum, large bowel, small bowel, stomach, spinal cord, kidneys, liver, and skin) for the 3 plans were compared. Furthermore, we evaluated the effectiveness of the online adaptive dose-escalation planning process by quantifying the effect of the interfractional dose distribution variations among the DE-ART plans. Results: The median D95% dose to the GTV/TVI/DE-PTV was 33.1/36.2/32.4 Gy, 48.5/50.9/40.4 Gy, and 53.7/58.2/44.8 Gy for non-DE, DE-ORI, and DE-ART, respectively. Most OAR dose constraints were not violated for the non-DE and DE-ART plans, while OAR constraints were violated for the majority of the DE-ORI patients due to interfractional motion and lack of adaptation. The maximum difference per fraction in D95%, due to interfractional motion, was 2.5 ± 2.7 Gy, 3.0 ± 2.9 Gy, and 2.0 ± 1.8 Gy for the TVI, GTV, and DE-PTV, respectively. 
Conclusions: Most patients require daily adaptation of the radiation planning process to maximally escalate delivered dose to the pancreatic tumor without exceeding OAR constraints. Using our automated approach, patients can receive higher target dose than standard of care without violating OAR constraints.
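The automated escalation logic described above (raise the prescription until just before an OAR constraint would be violated) can be sketched as a simple stepped search. Here `oar_ok` is a hypothetical stand-in for re-optimizing the plan at a candidate dose and checking all OAR constraints; the step size and cap are assumptions:

```python
def escalate_prescription(start_gy, step_gy, max_gy, oar_ok):
    """Raise the prescription dose in fixed steps, stopping at the
    last dose for which every organ-at-risk constraint still holds.
    oar_ok(dose) -> bool is assumed to re-plan and check constraints."""
    dose = start_gy
    while dose + step_gy <= max_gy and oar_ok(dose + step_gy):
        dose += step_gy
    return dose
```

Run per fraction on the daily anatomy, this yields the adaptive (DE-ART) plans; run once on the planning CT, it yields the non-adaptive (DE-ORI) plan.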

9.
Med Phys ; 50(7): 4399-4414, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36698291

ABSTRACT

BACKGROUND: MR scans used in radiotherapy can be partially truncated due to the limited field of view (FOV), affecting dose calculation accuracy in MR-based radiation treatment planning. PURPOSE: We proposed a novel Compensation-cycleGAN (Comp-cycleGAN), created by modifying the cycle-consistent generative adversarial network (cycleGAN), to simultaneously generate synthetic CT (sCT) images and compensate for the missing anatomy in truncated MR images. METHODS: Computed tomography (CT) and T1 MR images with complete anatomy from 79 head-and-neck patients were used for this study. The original MR images were manually cropped by 10-25 mm at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing, and the remaining patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and known structures. After the model was trained, sCT images with complete anatomy could be generated by feeding only the truncated MR images into the model. In addition, the external body contours acquired from the CT images with full anatomy could serve as an optional input, leveraging the additional information of the actual body shape for each test patient. The mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between sCT and real CT images to quantify overall sCT performance. To further evaluate shape accuracy, we generated external body contours for the sCT and the original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between the body contours of the sCT and original MR images within the truncation region to assess the anatomy compensation accuracy. RESULTS: The average MAE, PSNR, and SSIM calculated over the test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from a cycleGAN model trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3/0.7 mm for the Comp-cycleGAN models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy. CONCLUSIONS: We developed a novel Comp-cycleGAN model that can effectively create sCT images with complete anatomy compensation from truncated MR images, which could potentially benefit MRI-based treatment planning.
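The MAE and PSNR figures above have standard definitions over the sCT/CT voxel pairs. A minimal sketch, assuming PSNR is computed against the CT's intensity range (the paper does not state its exact PSNR convention):

```python
import numpy as np

def mae_psnr(sct, ct, data_range=None):
    """Mean absolute error (HU) and peak signal-to-noise ratio (dB)
    between a synthetic CT and the real CT."""
    sct = np.asarray(sct, float)
    ct = np.asarray(ct, float)
    mae = np.abs(sct - ct).mean()
    mse = ((sct - ct) ** 2).mean()
    if data_range is None:
        data_range = ct.max() - ct.min()  # assumed PSNR reference range
    psnr = 10 * np.log10(data_range ** 2 / mse) if mse > 0 else float("inf")
    return mae, psnr
```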


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Radionuclide Imaging; Magnetic Resonance Imaging/methods; Radiotherapy Planning, Computer-Assisted/methods
10.
Pac Symp Biocomput ; 28: 395-406, 2023.
Article in English | MEDLINE | ID: mdl-36540994

ABSTRACT

Deep learning methods for image segmentation and contouring are gaining prominence as an automated approach for delineating anatomical structures in medical images during radiation treatment planning. These contours are used to guide radiotherapy treatment planning, so it is important that contouring errors are flagged before they are used for planning. This creates a need for effective quality assurance methods to enable the clinical use of automated contours in radiotherapy. We propose a novel method for contour quality assurance that requires only shape features, making it independent of the platform used to obtain the images. Our method uses a random forest classifier to identify low-quality contours. On a dataset of 312 kidney contours, our method achieved a cross-validated area under the curve of 0.937 in identifying unacceptable contours. We applied our method to an unlabeled validation dataset of 36 kidney contours. We flagged 6 contours which were then reviewed by a cervix contour specialist, who found that 4 of the 6 contours contained errors. We used Shapley values to characterize the specific shape features that contributed to each contour being flagged, providing a starting point for characterizing the source of the contouring error. These promising results suggest our method is feasible for quality assurance of automated radiotherapy contours.
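The quality-assurance pipeline above (shape features in, a flagged/acceptable label out, scored by cross-validated AUC) can be sketched with a random forest on toy data. Everything here is synthetic and illustrative: the features, labels, and hyperparameters are assumptions, not the study's 312 kidney contours:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical shape features per contour (e.g. volume, sphericity,
# surface area), with label 1 = unacceptable contour.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # toy label rule

clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

Contours whose predicted probability of being unacceptable exceeds a chosen threshold would be flagged for human review, and per-feature attributions (e.g. Shapley values) can then point at which shape property triggered the flag.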


Subject(s)
Computational Biology; Radiotherapy Planning, Computer-Assisted; Female; Humans; Radiotherapy Planning, Computer-Assisted/methods
11.
Sci Rep ; 12(1): 19093, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351987

ABSTRACT

Manually delineating upper abdominal organs at risk (OARs) is a time-consuming task. To develop a deep-learning-based tool for accurate and robust auto-segmentation of these OARs, forty pancreatic cancer patients with contrast-enhanced breath-hold computed tomographic (CT) images were selected. We trained a three-dimensional (3D) U-Net ensemble that automatically segments all organ contours concurrently using the self-configuring nnU-Net framework. Our tool's performance was quantitatively assessed on a held-out test set of 30 patients. Five radiation oncologists from three different institutions assessed the performance of the tool using a 5-point Likert scale on an additional 75 randomly selected test patients. The mean (± std. dev.) Dice similarity coefficient values between the automatic segmentation and the ground truth on contrast-enhanced CT images were 0.80 ± 0.08, 0.89 ± 0.05, 0.90 ± 0.06, 0.92 ± 0.03, 0.96 ± 0.01, 0.97 ± 0.01, 0.96 ± 0.01, and 0.96 ± 0.01 for the duodenum, small bowel, large bowel, stomach, liver, spleen, right kidney, and left kidney, respectively. 89.3% (contrast-enhanced) and 85.3% (non-contrast-enhanced) of duodenum contours were scored as a 3 or above, requiring only minor edits. More than 90% of the other organs' contours were scored as a 3 or above. Our tool achieved a high level of clinical acceptability with a small training dataset and provides accurate contours for treatment planning.
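The acceptability percentages above summarize Likert ratings as the share of contours scoring at or above the "minor edits" level. A trivial sketch of that summary (the threshold of 3 follows the abstract; the function name is an assumption):

```python
def pct_minor_edits_or_better(scores, threshold=3):
    """Percentage of contours with a Likert score at or above the
    'acceptable with minor edits' threshold."""
    scores = list(scores)
    return 100.0 * sum(s >= threshold for s in scores) / len(scores)
```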


Subject(s)
Organs at Risk; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Liver; Patient Care Planning; Image Processing, Computer-Assisted/methods