Results 1 - 2 of 2
1.
Med Phys; 48(7): 3702-3713, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33905558

ABSTRACT

PURPOSE: Despite the widespread availability of in-treatment room cone beam computed tomography (CBCT) imaging, the lack of reliable segmentation methods means CBCT is used only for gross setup corrections in lung radiotherapy. Accurate and reliable auto-segmentation tools could enable volumetric response assessment and geometry-guided adaptive radiation therapy. We therefore developed a new deep learning CBCT lung tumor segmentation method.

METHODS: The key idea of our approach, called cross-modality educed distillation (CMEDL), is to use magnetic resonance imaging (MRI) to guide a CBCT segmentation network to extract more informative features during training. We accomplish this by training an end-to-end network comprising unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDNs) using unpaired CBCT and MRI datasets. The UDA approach uses CBCT and MRI scans that are not aligned and may come from different sets of patients; the UDA network synthesizes pseudo MRI from CBCT images. The SDN consists of teacher (MRI) and student (CBCT) segmentation networks. Feature distillation regularizes the student network to extract CBCT features that match the statistical distribution of the MRI features extracted by the teacher network, yielding better differentiation of tumor from background. The UDA network was implemented as a cycleGAN improved with contextual losses, applied separately to Unet and dense fully convolutional (DenseFCN) segmentation networks. Performance was compared against CBCT-only training using 2D and 3D networks. We also compared against an alternative framework that used UDA with an MRI segmentation network, whereby segmentation was done on the synthesized pseudo MRI representation. All networks were trained with 216 weekly CBCTs and 82 T2-weighted turbo spin echo MRIs acquired from different patient cohorts. Validation was done on 20 weekly CBCTs from patients not used in training; independent testing was done on 38 weekly CBCTs from patients not used in training or validation. Segmentation accuracy was measured using the surface Dice similarity coefficient (SDSC) and the Hausdorff distance at the 95th percentile (HD95).

RESULTS: The CMEDL approach significantly improved (p < 0.001) the accuracy of both Unet (SDSC of 0.83 ± 0.08; HD95 of 7.69 ± 7.86 mm) and DenseFCN (SDSC of 0.75 ± 0.13; HD95 of 11.42 ± 9.87 mm) over the CBCT-only 2D Unet (SDSC of 0.69 ± 0.11; HD95 of 21.70 ± 16.34 mm), 3D Unet (SDSC of 0.72 ± 0.20; HD95 of 15.01 ± 12.98 mm), and DenseFCN (SDSC of 0.66 ± 0.15; HD95 of 22.15 ± 17.19 mm) networks. The alternative framework using UDA with the MRI network was also more accurate than the CBCT-only methods, but less accurate than the CMEDL approach.

CONCLUSIONS: Our results demonstrate the feasibility of the introduced CMEDL approach for producing reasonably accurate lung cancer segmentation from CBCT images. Further validation on larger datasets is necessary for clinical translation.
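
To make the feature-distillation step concrete, here is a minimal Python (PyTorch) sketch of the cross-domain feature-matching idea: the student CBCT encoder is regularized so that its feature statistics match those the frozen teacher extracts from the cycleGAN-synthesized pseudo MRI. The Encoder module, the moment-matching criterion, and all tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Hypothetical stand-in for the paper's Unet/DenseFCN encoders."""
    def __init__(self, in_ch: int = 1, feat: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def feature_distillation_loss(student_f: torch.Tensor,
                              teacher_f: torch.Tensor) -> torch.Tensor:
    """Match per-channel feature statistics of student to teacher.

    Mean/variance moment matching is one common way to align feature
    distributions; the paper's exact distillation criterion may differ.
    """
    mu_s = student_f.mean(dim=(0, 2, 3))
    mu_t = teacher_f.mean(dim=(0, 2, 3))
    var_s = student_f.var(dim=(0, 2, 3))
    var_t = teacher_f.var(dim=(0, 2, 3))
    return F.mse_loss(mu_s, mu_t) + F.mse_loss(var_s, var_t)

if __name__ == "__main__":
    student_enc = Encoder()   # trained on CBCT
    teacher_enc = Encoder()   # trained on (pseudo) MRI, frozen here
    cbct = torch.randn(2, 1, 64, 64)
    pseudo_mri = torch.randn(2, 1, 64, 64)  # stand-in for cycleGAN output G(cbct)
    loss = feature_distillation_loss(student_enc(cbct),
                                     teacher_enc(pseudo_mri).detach())
    loss.backward()  # gradients flow only into the student encoder
    print(float(loss))
```

In training, a term like this would be added to the student's segmentation loss with a weighting hyperparameter, so that gradients pull the student's CBCT features toward the MRI feature distribution while the teacher stays fixed.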


Subject(s)
Deep Learning; Lung Neoplasms; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Magnetic Resonance Imaging
2.
Otol Neurotol; 38(6): 828-832, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28383464

ABSTRACT

HYPOTHESIS: The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy depends on the visualization method in clinical computed tomography (CT) images of the cochlea.

BACKGROUND: An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability of this measurement has not been assessed.

METHODS: Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess intraobserver variability. Observer variabilities were evaluated using intraclass correlation and absolute differences. Accuracy was evaluated by comparison to gold standard micro-CT images of the same specimens.

RESULTS: Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5% and 14.5%, respectively.

CONCLUSION: There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
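
The frequency-mapping step the abstract refers to follows the Greenwood place-frequency function, with the CDL first estimated from the single A-value (typically via a linear relation in the literature). The Python sketch below illustrates that pipeline; the Greenwood constants for the human cochlea are standard, but the CDL regression coefficients here are hypothetical placeholders, since the abstract does not give the regression it uses.

```python
import numpy as np

def greenwood_frequency(x_norm: np.ndarray) -> np.ndarray:
    """Greenwood place-frequency map for the human cochlea.

    x_norm: distance from the apex as a fraction of total cochlear
    duct length (0 = apex, 1 = base). Constants per Greenwood (1990):
    A = 165.4 Hz, a = 2.1 (normalized length), k = 0.88.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * x_norm) - k)

def cdl_from_a_value(a_value_mm: float) -> float:
    """Estimate cochlear duct length (mm) from a single A-value (mm).

    HYPOTHETICAL linear coefficients, standing in for a published
    regression that the abstract does not specify.
    """
    slope, intercept = 4.0, 2.0  # placeholder values, not from the paper
    return slope * a_value_mm + intercept

# Example: frequencies at 12 hypothetical electrode positions, assuming
# the array spans the basal 60% of the estimated duct length.
cdl = cdl_from_a_value(9.0)                  # example A-value input, mm
depths_mm = np.linspace(0.4 * cdl, cdl, 12)  # distance from apex, mm
freqs_hz = greenwood_frequency(depths_mm / cdl)
print(np.round(freqs_hz, 1))
```

With these constants the map spans roughly 20 Hz at the apex (x_norm = 0) to about 20.7 kHz at the base (x_norm = 1), matching the human audible range, which is why Greenwood mapping is the conventional choice for assigning electrode frequencies.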


Subject(s)
Cochlear Duct/anatomy & histology; Cochlear Duct/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; Observer Variation; Reproducibility of Results