Results 1 - 2 of 2
1.
Med Phys ; 48(4): 1673-1684, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33251619

ABSTRACT

PURPOSE: Online adaptive radiotherapy would greatly benefit from reliable auto-segmentation algorithms for organs-at-risk and radiation targets. The current practice of manual segmentation is subjective and time-consuming. While deep learning-based algorithms offer ample opportunities to solve this problem, they typically require large datasets. However, medical imaging data are generally sparse, in particular annotated MR images for radiotherapy. In this study, we developed a method that exploits the wealth of publicly available, annotated CT images to generate synthetic MR images, which can then be used to train a convolutional neural network (CNN) to segment the parotid glands on MR images of head and neck cancer patients.

METHODS: Imaging data comprised 202 annotated CT and 27 annotated MR images. The unpaired CT and MR images were fed into a 2D CycleGAN network to generate synthetic MR images from the CT images. Annotations of axial slices of the synthetic images were generated by propagating the CT contours. These were then used to train a 2D CNN. We assessed the segmentation accuracy using the real MR images as the test dataset. The accuracy was quantified with the 3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the approach against the interobserver variation determined for the real MR images, as well as against the accuracy obtained when training the 2D CNN to segment the CT images.

RESULTS: The resulting accuracy (DSC: 0.77 ± 0.07, HD: 18.04 ± 12.59 mm, MSD: 2.51 ± 1.47 mm) was close to the interobserver variation (DSC: 0.84 ± 0.06, HD: 10.85 ± 5.74 mm, MSD: 1.50 ± 0.77 mm), as well as to the accuracy obtained when training the 2D CNN to segment the CT images (DSC: 0.81 ± 0.07, HD: 13.00 ± 7.61 mm, MSD: 1.87 ± 0.84 mm).

CONCLUSIONS: The introduced cross-modality learning technique can be of great value for segmentation problems with sparse training data. We anticipate applying this method to any non-annotated MRI dataset to generate annotated synthetic MR images of the same type via image style transfer from annotated CT images. Furthermore, because this technique allows fast adaptation of annotated datasets from one imaging modality to another, it could prove useful for translating between the large variety of MRI contrasts that arises from differences in imaging protocols within and between institutions.
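The contour-agreement metrics named in the RESULTS above (3D DSC, HD, and MSD) can be computed directly from co-registered binary segmentation masks. The following is a minimal sketch under that assumption, not the authors' code; the function names, the voxel-spacing parameter, and the surface-extraction step are illustrative choices.

```python
# Minimal sketch (not the authors' implementation) of the contour-agreement
# metrics mentioned above: 3D Dice similarity coefficient (DSC), Hausdorff
# distance (HD), and mean surface distance (MSD) between two co-registered
# binary masks. Voxel spacing is assumed to be given in mm.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt
from scipy.spatial.distance import directed_hausdorff


def dice(manual: np.ndarray, auto: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) over boolean voxel masks."""
    a, b = manual.astype(bool), auto.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def _surface(mask: np.ndarray) -> np.ndarray:
    """Surface voxels: the mask minus its one-voxel erosion."""
    m = mask.astype(bool)
    return m & ~binary_erosion(m)


def hausdorff(manual: np.ndarray, auto: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric HD (mm) between the surface point sets of the two masks."""
    p = np.argwhere(_surface(manual)) * np.asarray(spacing)
    q = np.argwhere(_surface(auto)) * np.asarray(spacing)
    return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])


def mean_surface_distance(manual: np.ndarray, auto: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """MSD (mm): average of the surface-to-surface distances in both directions."""
    s_m, s_a = _surface(manual), _surface(auto)
    # Distance from every voxel to the nearest surface voxel of the other contour.
    d_to_a = distance_transform_edt(~s_a, sampling=spacing)
    d_to_m = distance_transform_edt(~s_m, sampling=spacing)
    return 0.5 * (d_to_a[s_m].mean() + d_to_m[s_a].mean())
```

In the study these metrics would be evaluated per patient on the real MR test set; the sketch only shows how overlap (DSC) and boundary agreement (HD, MSD) are quantified between a manual and an auto-generated mask.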


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Tomography, X-Ray Computed
2.
Phys Imaging Radiat Oncol ; 15: 1-7, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33043156

ABSTRACT

BACKGROUND AND PURPOSE: Retrieving quantitative parameters from magnetic resonance imaging (MRI), e.g. for early assessment of radiotherapy treatment response, requires contouring regions of interest, which is time-consuming and prone to errors. This becomes more pressing for daily imaging on MRI-guided radiotherapy systems. Therefore, we trained a deep convolutional neural network to automatically contour involved lymph nodes on diffusion-weighted (DW) MRI of head and neck cancer (HNC) patients receiving radiotherapy.

MATERIALS AND METHODS: DW-images from 48 HNC patients (18 induction chemotherapy + chemoradiotherapy; 30 definitive chemoradiotherapy) with 68 involved lymph nodes were obtained on a diagnostic 1.5 T MR scanner prior to and at 2-3 timepoints throughout treatment. A radiation oncologist delineated the lymph nodes on the b = 50 s/mm² images. A 3D U-net was trained to contour involved lymph nodes. Its performance was evaluated in all 48 patients using 8-fold cross-validation, calculating the Dice similarity coefficient (DSC) and the absolute difference in median apparent diffusion coefficient (ΔADC) between the manual and generated contours. Additionally, the performance was evaluated on an independent dataset of three patients obtained on a 1.5 T MR-Linac.

RESULTS: In the definitive chemoradiotherapy patients (n = 96 patient/lymph node/timepoint combinations) the DSC was 0.87 (0.81-0.91) [median (1st-3rd quartiles)] and ΔADC was 1.9% (0.8-3.4%); both remained stable throughout treatment. The network performed worse in the patients receiving induction chemotherapy (n = 65), with DSC = 0.80 (0.71-0.87) and ΔADC = 3.3% (1.6-8.0%). The network performed well on the MR-Linac data (n = 8), with DSC = 0.80 (0.75-0.82) and ΔADC = 4.0% (0.6-9.1%).

CONCLUSIONS: We established accurate automatic contouring of involved lymph nodes for HNC patients on diagnostic and MR-Linac DW-images.
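Alongside the DSC (see the sketch after the first record), the ΔADC values reported above compare the median apparent diffusion coefficient inside the manual and the network-generated contour. A minimal sketch of that comparison, assuming a co-registered ADC map and binary masks as numpy arrays; the names and the percentage normalisation are illustrative, not the authors' pipeline:

```python
# Minimal sketch (not the authors' pipeline) of the ΔADC comparison described
# above: the absolute difference in median apparent diffusion coefficient (ADC)
# between the manual and the generated lymph-node contour, expressed here as a
# percentage of the manual value. The ADC map and masks are assumed to be
# co-registered numpy arrays of identical shape.
import numpy as np


def median_adc(adc_map: np.ndarray, mask: np.ndarray) -> float:
    """Median ADC over the voxels inside a binary contour mask."""
    return float(np.median(adc_map[mask.astype(bool)]))


def delta_adc_percent(adc_map: np.ndarray, manual_mask: np.ndarray, auto_mask: np.ndarray) -> float:
    """|ΔADC| relative to the median ADC of the manual contour, in percent."""
    manual = median_adc(adc_map, manual_mask)
    auto = median_adc(adc_map, auto_mask)
    return abs(auto - manual) / manual * 100.0
```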
