Results 1 - 6 of 6
1.
Adv Radiat Oncol ; 9(1): 101340, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38260236

ABSTRACT

Purpose: Deep learning can be used to automatically digitize interstitial needles in high-dose-rate (HDR) brachytherapy for patients with cervical cancer. The aim of this study was to design a novel attention-gated deep-learning model that may further improve digitization accuracy and better differentiate individual needles. Methods and Materials: Seventeen patients with cervical cancer with 56 computed tomography-based interstitial HDR brachytherapy plans from the local hospital were retrospectively chosen with the local institutional review board's approval. Among them, 50 plans were randomly selected as the training set and the rest as the validation set. Spatial and channel attention gates (AGs) were added to 3-dimensional convolutional neural networks (CNNs) to highlight needle features and suppress irrelevant regions, which was expected to facilitate convergence and improve the accuracy of automatic needle digitization. The automatically digitized needles were then exported to the Oncentra treatment planning system (Elekta Solutions AB, Stockholm, Sweden) for dose evaluation. The geometric and dosimetric accuracy of automatic needle digitization was compared among 3 methods: (1) clinically approved plans with manual needle digitization (ground truth); (2) the conventional deep-learning (CNN) method; and (3) the attention-added deep-learning (CNN + AG) method, in terms of the Dice similarity coefficient (DSC), tip and shaft positioning errors, and dose distribution in the high-risk clinical target volume (HR-CTV) and organs at risk. Results: The attention-gated CNN model was superior to the CNN without AGs, with a greater DSC (approximately 94% for CNN + AG vs 89% for CNN). The needle tip and shaft errors of the CNN + AG method (1.1 mm and 1.8 mm, respectively) were also much smaller than those of the CNN method (2.0 mm and 3.3 mm, respectively). Finally, the dose difference for the HR-CTV D90 was much smaller with the CNN + AG method than with the CNN method (0.4% vs 1.7%). Conclusions: The attention-added deep-learning model was successfully implemented for automatic needle digitization in HDR brachytherapy, with clinically acceptable geometric and dosimetric accuracy. Compared with conventional deep-learning neural networks, attention-gated deep learning may offer superior performance and great clinical potential.
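
As a rough illustration of the attention-gating idea in this abstract, the PyTorch sketch below attaches a channel gate and a spatial gate to a 3D feature map. All names, layer sizes, and the specific gating forms (a squeeze-and-excitation-style channel gate and a 1 × 1 × 1 convolutional spatial gate) are assumptions made for the sketch, not the architecture reported by the authors.

```python
import torch
import torch.nn as nn

class ChannelSpatialGate3D(nn.Module):
    """Illustrative channel + spatial attention gate for a 3D feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: global-average-pool, then re-weight each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: collapse channels into one attention volume.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # emphasize informative channels
        x = x * self.spatial_gate(x)  # suppress irrelevant regions
        return x

# Gate a hypothetical (batch, channels, depth, height, width) feature map.
features = torch.randn(2, 32, 16, 64, 64)
gated = ChannelSpatialGate3D(channels=32)(features)
print(gated.shape)  # torch.Size([2, 32, 16, 64, 64])
```

The element-wise multiplications let the network keep needle-relevant features while down-weighting irrelevant regions, which is the stated motivation for adding AGs.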

2.
Quant Imaging Med Surg ; 13(4): 2065-2080, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37064379

ABSTRACT

Background: The aim of this study was to establish a correlation model between external surface motion and internal diaphragm apex movement using machine learning and to realize online automatic prediction of the diaphragm motion trajectory based on optical surface monitoring. Methods: The optical body surface parameters and kilovoltage (kV) X-ray fluoroscopic images of 7 liver tumor patients were captured synchronously for 50 seconds. The location of the diaphragm apex was manually delineated by a radiation oncologist and automatically detected with a convolutional neural network model in the fluoroscopic images. The correlation model between the body surface parameters and the diaphragm apex of each patient was developed through linear regression (LR) on synchronous datasets acquired before radiotherapy. Model 1 (M1) was trained with data from the first 30 seconds of the datasets and tested with data from the following 20 seconds in the first fraction to evaluate the intra-fractional prediction accuracy. Model 2 (M2) was trained with data from the first 30 seconds of the datasets in the next fraction. The motion trajectory of the diaphragm apex during the following 20 seconds of the next fraction was predicted with M1 and M2, respectively, to evaluate the inter-fractional prediction accuracy. The prediction errors of the 2 models were compared to determine whether the correlation model needed to be re-established. Results: The average mean absolute error (MAE) and root mean square error (RMSE) using M1 trained with automatically detected locations for the first fraction were 3.12±0.80 and 3.82±0.98 mm in the superior-inferior (SI) direction and 1.38±0.24 and 1.74±0.32 mm in the anterior-posterior (AP) direction, respectively. The average MAE and RMSE of M1 versus M2 in the AP direction were 2.63±0.71 versus 1.28±0.48 mm and 3.26±0.90 versus 1.61±0.60 mm, respectively. The average MAE and RMSE of M1 versus M2 in the SI direction were 5.84±1.22 versus 3.37±0.43 mm and 7.22±1.45 versus 4.07±0.54 mm, respectively. The prediction accuracy of M2 was significantly higher than that of M1. Conclusions: This study shows that it is feasible to use optical body surface information to automatically predict the diaphragm motion trajectory; however, a new correlation model should be established for the current fraction before each treatment.
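
The evaluation workflow above (fit LR on the first 30 s, predict the following 20 s, score with MAE and RMSE) maps onto a few lines of scikit-learn. The sketch below uses placeholder arrays; the sample count and feature dimension are arbitrary assumptions, not the study's actual acquisition parameters.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)

# Placeholder synchronous data: 500 time points of 3 body-surface
# parameters and the SI position (mm) of the diaphragm apex.
surface_params = rng.random((500, 3))
apex_si_mm = rng.random(500)

# Train on the first 30 s worth of samples, test on the following 20 s
# (a 3:2 split mirroring the study's protocol).
n_train = 300
model = LinearRegression().fit(surface_params[:n_train], apex_si_mm[:n_train])
predicted = model.predict(surface_params[n_train:])

mae = mean_absolute_error(apex_si_mm[n_train:], predicted)
rmse = np.sqrt(mean_squared_error(apex_si_mm[n_train:], predicted))
print(f"MAE = {mae:.2f} mm, RMSE = {rmse:.2f} mm")
```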

3.
Front Oncol ; 11: 588010, 2021.
Article in English | MEDLINE | ID: mdl-33854959

ABSTRACT

BACKGROUND AND PURPOSE: Preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) is extremely important: MVI is a key predictor of recurrence and helps determine the treatment strategy before liver resection or liver transplantation. In this study, we demonstrate that a deep learning approach based on contrast-enhanced MR and 3D convolutional neural networks (CNNs) can better predict MVI in HCC patients. MATERIALS AND METHODS: This retrospective study included 114 consecutive patients who underwent surgical resection from October 2012 to October 2018, with 117 histologically confirmed HCCs. MR sequences including 3.0T/LAVA (liver acquisition with volume acceleration) and 3.0T/e-THRIVE (enhanced T1 high resolution isotropic volume excitation) were used for image acquisition in each patient. First, numerous 3D patches were separately extracted from the region of each lesion for data augmentation. Then, a 3D CNN was used to extract discriminative deep features of HCC from each contrast-enhanced MR phase separately. Furthermore, a deep-supervision loss function was designed to integrate deep features from the multiple phases of contrast-enhanced MR. The dataset was divided into two parts: 77 HCCs were used as the training set, while the remaining 40 HCCs were used for independent testing. Receiver operating characteristic (ROC) curve analysis was adopted to assess the performance of MVI prediction. The output probability of the model was assessed by the independent Student's t-test or the Mann-Whitney U test. RESULTS: The mean AUC values for MVI prediction were 0.793 (p=0.001) in the pre-contrast phase, 0.855 (p<0.001) in the arterial phase, and 0.817 (p<0.001) in the portal vein phase. Simple concatenation of the deep features from all three phases using the 3D CNN improved the performance, with an AUC of 0.906 (p<0.001). By comparison, the proposed deep learning model with the deep-supervision loss function produced the best results, with an AUC of 0.926 (p<0.001). CONCLUSION: A deep learning framework based on a 3D CNN and a deeply supervised network with contrast-enhanced MR could be effective for MVI prediction.
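
A hedged PyTorch sketch of how a deep-supervision loss might integrate the per-phase outputs: each contrast-phase branch gets an auxiliary loss, and the fused output gets the main loss. The auxiliary weight and the binary-cross-entropy formulation are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def deep_supervision_loss(phase_logits, fused_logit, target, aux_weight=0.3):
    """Main loss on the fused output plus weighted auxiliary losses,
    one per contrast-phase branch (aux_weight is an assumed value)."""
    loss = bce(fused_logit, target)
    for logit in phase_logits:
        loss = loss + aux_weight * bce(logit, target)
    return loss

# Toy usage with three phase branches (pre-contrast, arterial, portal vein).
target = torch.tensor([[1.0]])  # MVI-positive label
phase_logits = [torch.randn(1, 1, requires_grad=True) for _ in range(3)]
fused_logit = torch.randn(1, 1, requires_grad=True)
deep_supervision_loss(phase_logits, fused_logit, target).backward()
```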

4.
Acad Radiol ; 28 Suppl 1: S118-S127, 2021 11.
Article in English | MEDLINE | ID: mdl-33303346

ABSTRACT

RATIONALE AND OBJECTIVES: To investigate the value of diffusion-weighted magnetic resonance imaging for the prediction of microvascular invasion (MVI) of hepatocellular carcinoma (HCC) using convolutional neural networks (CNNs). MATERIAL AND METHODS: This study was approved by the local institutional review board, and the patients' informed consent was waived. Ninety-seven consecutive subjects with 100 surgically resected HCCs from July 2012 to October 2018 were retrieved. Diffusion-weighted imaging (DWI) examinations were performed in all subjects with single-shot echo-planar imaging in a breath-hold routine, using three b-values (0, 100, and 600 s/mm²). First, apparent diffusion coefficient (ADC) maps were computed by mono-exponential fitting of the three b-value points. Then, multiple 2D axial patches (28 × 28) of HCCs were extracted from the b0, b100, b600, and ADC images to augment the dataset for training the CNN model. Finally, deep features derived from the three b-value images and the ADC map were fused in the CNN model for MVI prediction. The dataset was split into a training set (60 HCCs) and an independent test set (40 HCCs). The output probability of the deep learning model for MVI prediction of HCCs was assessed by the independent Student's t-test for normally distributed data and the Mann-Whitney U test otherwise. The receiver operating characteristic curve and area under the curve (AUC) were also used to assess the performance of MVI prediction in the fixed test set. RESULTS: Deep features from the b600 images yielded better performance (AUC = 0.74, p = 0.004) for MVI prediction than those from the b0 (AUC = 0.69, p = 0.023) and b100 (AUC = 0.734, p = 0.011) images. Comparatively, deep features from the ADC map yielded lower performance (AUC = 0.71, p = 0.012) than those from the higher b-value (b600) images. Furthermore, the fusion of deep features from the b0, b100, b600, and ADC images yielded the best results (AUC = 0.79, p = 0.002) for MVI prediction. CONCLUSION: Fusion of deep features derived from the three b-value DWI images and the ADC map yields better performance for MVI prediction.
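
The mono-exponential ADC computation mentioned above follows S(b) = S0 · exp(−b · ADC), so a line fitted to ln S versus b has slope −ADC. A minimal NumPy sketch (array shapes and the toy values are assumptions):

```python
import numpy as np

def fit_adc(signals, b_values=(0.0, 100.0, 600.0)):
    """Voxel-wise mono-exponential fit S(b) = S0 * exp(-b * ADC).

    signals: array of shape (3, ...) stacking the b0, b100, b600 images.
    Returns the ADC map (mm^2/s) with the trailing spatial shape.
    """
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.maximum(signals, 1e-6))  # guard against log(0)
    flat = log_s.reshape(len(b), -1)
    slope, _ = np.polyfit(b, flat, deg=1)      # slope of ln S vs b is -ADC
    return (-slope).reshape(signals.shape[1:])

# Toy check: two voxels with true ADC = 1e-3 mm^2/s are recovered.
b = np.array([0.0, 100.0, 600.0])
toy = np.exp(-b * 1e-3)[:, None] * np.array([1000.0, 800.0])
print(fit_adc(toy))  # ~[0.001, 0.001]
```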


Subject(s)
Hepatocellular Carcinoma, Deep Learning, Liver Neoplasms, Hepatocellular Carcinoma/diagnostic imaging, Diffusion Magnetic Resonance Imaging, Humans, Liver Neoplasms/diagnostic imaging, Retrospective Studies
5.
Front Oncol ; 11: 725507, 2021.
Article in English | MEDLINE | ID: mdl-34858813

ABSTRACT

PURPOSE: We developed a deep learning model to achieve automatic multitarget delineation on planning CT (pCT) and synthetic CT (sCT) images generated from cone-beam CT (CBCT) images. The geometric and dosimetric impact of the model was evaluated for breast cancer adaptive radiation therapy. METHODS: We retrospectively analyzed 1,127 patients treated with radiotherapy after breast-conserving surgery at two medical institutions. CBCT images for patient setup, acquired during breath-hold guided by an optical surface monitoring system, were used to generate sCT images with a generative adversarial network. Organs at risk (OARs), the clinical target volume (CTV), and the tumor bed (TB) were delineated automatically with a 3D U-Net model on pCT and sCT images. The geometric accuracy of the model was evaluated with metrics including the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). Dosimetric evaluation was performed by rapid dose recalculation on sCT images using gamma analysis and dose-volume histogram (DVH) parameters. The relationship between ΔD95, ΔV95, and DSC-CTV was assessed to quantify the clinical impact of the geometric changes of the CTV. RESULTS: The ranges of DSC and HD95 were 0.73-0.97 and 2.22-9.36 mm for pCT and 0.63-0.95 and 2.30-19.57 mm for sCT from institution A, and 0.70-0.97 and 2.10-11.43 mm for pCT from institution B. The quality of sCT was excellent, with an average mean absolute error (MAE) of 71.58 ± 8.78 HU. The mean gamma pass rate (3%/3 mm criterion) was 91.46 ± 4.63%. A DSC-CTV down to 0.65 accounted for a variation of more than 6% of V95 and 3 Gy of D95, whereas a DSC-CTV of 0.80 or above accounted for a variation of less than 4% of V95 and 2 Gy of D95. The mean ΔD90/ΔD95 were less than 2 Gy/4 Gy for the CTV and 4 Gy/5 Gy for the TB across all patients. The cardiac dose difference in left-sided breast cancer cases was larger than that in right-sided cases. CONCLUSIONS: Accurate multitarget delineation is achievable on pCT and sCT via deep learning. The results show that dose distribution needs to be considered when evaluating the clinical impact of geometric variations during breast cancer radiotherapy.
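
For reference, the two geometric metrics named above can be computed from binary masks as in the NumPy/SciPy sketch below. This is a common textbook formulation (surface-voxel distances via Euclidean distance transforms), not necessarily the exact tooling used in the study; masks are assumed to be non-empty boolean arrays.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between the
    surfaces of two non-empty boolean masks, via distance transforms."""
    surf_a = a & ~binary_erosion(a)  # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    dists = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(dists, 95))

# Toy usage: a box versus the same box shifted 2 voxels laterally.
a = np.zeros((8, 32, 32), dtype=bool)
a[2:6, 8:24, 8:24] = True
b = np.roll(a, 2, axis=2)
print(dice(a, b), hd95(a, b, spacing=(3.0, 1.0, 1.0)))
```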

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 853-856, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946029

ABSTRACT

The malignancy characterization of hepatocellular carcinoma (HCC) is of great significance in clinical practice. In this work, we propose a deeply supervised cross-modal transfer learning method to substantially improve the malignancy characterization of HCC based on non-enhanced MR. First, we pre-train a deep learning network on paired non-enhanced and contrast-enhanced MR samples to learn the cross-modal relationship between the non-enhanced and enhanced modalities. Then, the parameters of the pre-trained cross-modal representation are transferred to a second deep learning model, which is fine-tuned only on non-enhanced MR for malignancy characterization of HCC. Specifically, a deeply supervised network is designed to enhance the stability of the second model and further improve the performance of lesion characterization. Importantly, only non-enhanced MR of HCC is required in both the training and test phases of the second model. Experiments on 115 clinical HCCs demonstrate that the proposed deeply supervised cross-modal transfer learning method can significantly improve the malignancy characterization of HCC based on non-enhanced MR.
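
A minimal two-stage PyTorch sketch of the cross-modal transfer idea: pre-train an encoder to map non-enhanced MR toward its contrast-enhanced counterpart, then reuse the encoder for a classifier fine-tuned on non-enhanced MR alone. The network sizes, the MSE reconstruction objective, and the classifier head are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared encoder (sizes are illustrative).
encoder = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
)
decoder = nn.Conv3d(16, 1, kernel_size=3, padding=1)

# Stage 1: cross-modal pre-training -- predict the enhanced appearance
# from the non-enhanced image so the encoder absorbs the relationship.
non_enhanced = torch.randn(2, 1, 16, 32, 32)
enhanced = torch.randn(2, 1, 16, 32, 32)  # paired placeholder volumes
recon_loss = F.mse_loss(decoder(encoder(non_enhanced)), enhanced)
recon_loss.backward()

# Stage 2: transfer the pre-trained encoder and fine-tune a malignancy
# head on non-enhanced MR only (also the only input needed at test time).
head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1))
labels = torch.ones(2, 1)  # placeholder malignancy labels
cls_loss = F.binary_cross_entropy_with_logits(
    head(encoder(non_enhanced)), labels)
cls_loss.backward()
```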


Subject(s)
Hepatocellular Carcinoma, Liver Neoplasms, Deep Learning, Humans, Magnetic Resonance Imaging