Results 1 - 3 of 3
1.
Med Phys ; 51(4): 2741-2758, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38015793

ABSTRACT

BACKGROUND: For autosegmentation models, the data used to train the model (e.g., public datasets and/or vendor-collected data) and the data on which the model is deployed in the clinic are typically not the same, potentially degrading model performance through a process called domain shift. Tools to routinely monitor and predict segmentation performance are needed for quality assurance. Here, we develop an approach to perform such monitoring and performance prediction for cardiac substructure segmentation. PURPOSE: To develop a quality assurance (QA) framework for routine or continuous monitoring of domain shift and of the performance of cardiac substructure autosegmentation algorithms. METHODS: A benchmark dataset consisting of computed tomography (CT) images with manual cardiac substructure delineations from 241 breast cancer radiotherapy patients was collected, comprising one "normal" domain of clean images and five "abnormal" domains containing images with artifacts (metal, contrast), pathology, or quality variations due to scanner protocol differences (field of view, noise, reconstruction kernel, and slice thickness). The QA framework consisted of an image domain shift detector operating on the input CT images, a shape quality detector operating on the output of an autosegmentation model, and a regression model for predicting autosegmentation model performance. The image domain shift detector combined a trained denoising autoencoder (DAE) with two hand-engineered image quality features to distinguish normal from abnormal domains in the input CT images. The shape quality detector was a variational autoencoder (VAE) trained to estimate the shape quality of the autosegmentation results. The outputs of the image domain shift and shape quality detectors were used to train a regression model to predict per-patient segmentation accuracy, measured by the Dice similarity coefficient (DSC) against physician contours.
Different regression techniques were investigated, including linear regression, bagging, Gaussian process regression, random forest, and gradient boosting regression. Of the 241 patients, 60 were used to train the autosegmentation models, 120 to train the QA framework, and the remaining 61 to test the QA framework. A total of 19 autosegmentation models were used to evaluate QA framework performance, including 18 convolutional neural network (CNN)-based models and one transformer-based model. RESULTS: When tested on the benchmark dataset, all abnormal domains resulted in a significant DSC decrease relative to the normal domain for CNN models (p < 0.001), but only some domains did so for the transformer model. No significant relationship was found between autosegmentation model performance and scanner protocol parameters (p = 0.42) except for noise (p = 0.01). CNN-based autosegmentation models showed a DSC decrease ranging from 0.07 to 0.41 with added noise, while the transformer-based model was not significantly affected (ANOVA, p = 0.99). For the QA framework, linear regression models with bootstrap aggregation achieved the lowest mean absolute error (MAE), 0.041 ± 0.002, in predicted DSC (relative to the true DSC between autosegmentation and physician contours). MAE was lowest when combining the input (image) detectors with the output (shape) detectors rather than using output detectors alone. CONCLUSIONS: The QA framework was able to predict cardiac substructure autosegmentation model performance under clinically anticipated "abnormal" domain shifts.
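The final step of the framework described above, predicting per-patient DSC from detector outputs with bootstrap-aggregated (bagged) linear regression, can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the authors' code; the two "detector features" and the data-generating model are invented for demonstration.

```python
import numpy as np

def fit_bagged_linear(X, y, n_models=10, seed=0):
    """Fit an ensemble of linear regressors on bootstrap resamples (bagging)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])  # prepend an intercept column
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        coefs.append(w)
    return np.array(coefs)

def predict_bagged(coefs, X):
    """Average the predictions of the bagged ensemble."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.mean(Xb @ coefs.T, axis=1)

# Toy data: two detector features loosely predictive of per-patient DSC
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(120, 2))
y = np.clip(0.9 - 0.3 * X[:, 0] - 0.1 * X[:, 1]
            + rng.normal(0, 0.02, 120), 0, 1)

coefs = fit_bagged_linear(X, y)
pred = predict_bagged(coefs, X)
mae = np.mean(np.abs(pred - y))  # mean absolute error in predicted DSC
```

Averaging over bootstrap resamples reduces the variance of the fitted coefficients; the paper evaluated several alternative regressors (Gaussian process, random forest, gradient boosting) for the same role.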


Subjects
Deep Learning, Humans, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Heart/diagnostic imaging, Breast, Image Processing, Computer-Assisted/methods
2.
Med Phys ; 48(11): 7172-7188, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34545583

ABSTRACT

PURPOSE: To develop and evaluate deep learning-based autosegmentation of cardiac substructures from noncontrast planning computed tomography (CT) images in patients undergoing breast cancer radiotherapy, and to investigate the algorithm's sensitivity to out-of-distribution data such as CT image artifacts. METHODS: Nine substructures, including the aortic valve (AV), left anterior descending artery (LAD), tricuspid valve (TV), mitral valve (MV), pulmonic valve (PV), right atrium (RA), right ventricle (RV), left atrium (LA), and left ventricle (LV), were manually delineated by a radiation oncologist on noncontrast CT images of 129 patients with breast cancer; of these, 90 were considered in-distribution ("clean") data. The image/label pairs of 60 subjects were used to train a 3D deep neural network, while the remaining 30 were used for testing. The other 39 patients were considered out-of-distribution ("outlier") data and were used to test robustness. Random rigid transformations were used to augment the dataset during training. We investigated the effects of multiple loss functions, including the Dice similarity coefficient (DSC), cross-entropy (CE), and Euclidean loss, as well as variations and combinations of these, together with data augmentation and network size, on overall performance and on sensitivity to image artifacts caused by infrequent events such as the presence of implanted devices. The predicted label maps were compared to the ground-truth labels via DSC and the mean and 90th percentile symmetric surface distance (90th-SSD). RESULTS: When modified Dice combined with cross-entropy (MD-CE) was used as the loss function, the algorithm achieved a mean DSC of 0.79 ± 0.07 for chambers and 0.39 ± 0.10 for smaller substructures (valves and the LAD). The mean and 90th-SSD were 2.7 ± 1.4 and 6.5 ± 2.8 mm for chambers, and 4.1 ± 1.7 and 8.6 ± 3.2 mm for smaller substructures. Models with MD-CE, Dice-CE, MD, and weighted CE losses had the highest performance and were statistically similar.
Data augmentation did not affect model performance on either clean or outlier data, whereas model robustness was sensitive to network size. For certain types of outlier data, robustness could be improved by incorporating them into the training process. The execution time for segmenting each patient was on average 2.1 s. CONCLUSIONS: A deep neural network provides fast and accurate segmentation of large cardiac substructures in noncontrast CT images. Model robustness to two types of clinically common outlier data was investigated, and potential approaches to improve it were explored. Evaluation of clinical acceptability and integration into the clinical workflow are pending.
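The best-performing losses in this study combine a Dice term with cross-entropy. A minimal NumPy sketch of such a combined loss for a binary label map is shown below; the equal weighting of the two terms is an assumption for illustration and is not necessarily the paper's MD-CE formulation.

```python
import numpy as np

def dice_ce_loss(probs, target, eps=1e-6):
    """Combined soft-Dice + cross-entropy loss for a binary label map.

    probs:  predicted foreground probabilities (any shape)
    target: binary ground-truth mask, same shape
    """
    p, t = probs.ravel(), target.ravel().astype(float)
    inter = np.sum(p * t)
    # Soft Dice: 2|P∩T| / (|P| + |T|); loss is 1 - Dice
    dice = (2.0 * inter + eps) / (np.sum(p) + np.sum(t) + eps)
    # Pixel-wise binary cross-entropy
    ce = -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    return (1.0 - dice) + ce  # equal weighting assumed here

# A confident, correct prediction drives both terms toward zero
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
good = mask * 0.998 + 0.001          # near-certain, correct probabilities
bad = np.full((8, 8), 0.5)           # uninformative probabilities
loss_good = dice_ce_loss(good, mask)
loss_bad = dice_ce_loss(bad, mask)
```

Pairing an overlap-based term (Dice) with a pixel-wise term (CE) is a common way to balance region agreement against per-voxel calibration, which matches the paper's finding that MD-CE and Dice-CE performed best.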


Subjects
Breast Neoplasms, Deep Learning, Breast, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/radiotherapy, Female, Heart, Humans, Image Processing, Computer-Assisted, Tomography, X-Ray Computed
3.
Med Image Anal ; 47: 31-44, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29674235

ABSTRACT

Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, synthesizing computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. Midway through this flow of feature maps, we compute a tentative CT synthesis and then embed this tentative result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times, the network synthesizes the final CT image at its end. We validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) achieves superior performance in terms of both the perceptual quality of the synthesized CT images and the run-time cost of synthesizing a CT image.
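The embedding operation described above, synthesizing a tentative CT midway through the network and concatenating it back onto the feature maps, can be illustrated with a toy NumPy sketch. The 1x1-convolution weights and tensor sizes here are hypothetical stand-ins; a real DECNN uses learned convolutional layers and spatial kernels.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 'convolution' as a per-pixel linear map.

    x: feature maps of shape (C_in, H, W)
    w: weight matrix of shape (C_out, C_in)
    """
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def embedding_block(feats, w_mid, w_proj):
    """One DECNN-style embedding step (simplified):
    1. synthesize a tentative CT from the current feature maps,
    2. concatenate that tentative CT back onto the feature maps,
    3. project back to the original channel width for further layers.
    """
    tentative_ct = conv1x1(feats, w_mid)               # (1, H, W) midway synthesis
    enriched = np.concatenate([feats, tentative_ct])   # embed result into features
    return conv1x1(enriched, w_proj), tentative_ct

# Hypothetical sizes: 8 feature channels over a 16x16 slice
rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
feats = rng.normal(size=(C, H, W))
w_mid = rng.normal(size=(1, C)) * 0.1        # feature maps -> tentative CT
w_proj = rng.normal(size=(C, C + 1)) * 0.1   # enriched maps -> C channels
out, ct = embedding_block(feats, w_mid, w_proj)
```

Stacking several such blocks mimics the paper's repeated embedding: each midway synthesis conditions the subsequent feature maps on the current best CT estimate.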


Subjects
Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Neural Networks, Computer, Tomography, X-Ray Computed, Algorithms, Brain Mapping/methods, Female, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Time Factors