Results 1 - 3 of 3
1.
Magn Reson Imaging ; 98: 97-104, 2023 05.
Article in English | MEDLINE | ID: mdl-36681310

ABSTRACT

INTRODUCTION: Despite growing interest in lung MRI, its broader use in a clinical setting remains challenging. Several factors limit the image quality of lung MRI, such as the extremely short T2 and T2* relaxation times of the lung parenchyma and cardiac and breathing motion. Zero Echo Time (ZTE) sequences are sensitive to short-T2 and short-T2* species, paving the way to improved "CT-like" MR images. To overcome the motion limitation, a retrospectively respiratory-gated version of ZTE (ZTE4D), which can reconstruct images in 16 different respiratory phases during free breathing, was developed. Initial ZTE4D results, however, showed motion artifacts. To improve image quality, deep learning with fully convolutional neural networks (FCNNs) has been proposed. CNNs have been widely used in MR imaging, but they have not yet been applied to improving free-breathing lung imaging. Our proposed pipeline facilitates clinical work with patients who have difficulty with or are unable to perform breath-holding, or when gating techniques are inefficient due to an irregular respiratory pace.

MATERIALS AND METHODS: After IRB approval and signed informed consent, free-breathing ZTE4D and breath-hold ZTE3D images were obtained from 10 healthy volunteers on a 1.5 T MRI scanner (GE Healthcare Signa Artist, Waukesha, WI). The ZTE4D acquisition captured all 16 phases of the respiratory cycle. For the ZTE breath-hold acquisition, subjects were instructed to hold their breath at 5 different inflation levels ranging from full expiration to full inspiration. The training dataset, consisting of ZTE breath-hold images of 10 volunteers, was split into 8 volunteers for training, 1 for validation, and 1 for testing. In total, 800 ZTE breath-hold images were constructed by adding Gaussian noise and applying image transformations (translations, rotations) to imitate the effect of motion in the respiratory cycle and the blurring from varying diaphragm positions, as occurs in ZTE4D. These sets were used to train an FCNN model to remove the artificially added noise and transformations from the ZTE breath-hold images and reproduce the original image quality. Mean squared error (MSE) was used as the loss function. The remaining 2 healthy volunteers' ZTE4D images were used to test the model and qualitatively assess the predicted images.

RESULTS: Our model obtained an MSE of 0.09% on the training set and 0.135% on the validation set. When tested on unseen data, the images predicted by our model showed improved contrast of the pulmonary parenchyma against air-filled regions (airways or air trapping). The SNR of the lung parenchyma improved by a factor of 1.98, and the lung-blood CNR, which indicates the visibility of the intrapulmonary vessels, improved by 4.2%. Our network reduced ghosting artifacts, such as those caused by diaphragm movement, as well as blurring, and enhanced image quality.

DISCUSSION: Free-breathing 3D and 4D lung imaging with MRI is feasible; however, its quality is not yet acceptable for clinical use. This can be improved with deep learning techniques. Our FCNN improves the visual image quality and reduces artifacts of free-breathing ZTE4D. Our main goal was to remove ghosting artifacts from the ZTE4D images and thereby improve their diagnostic quality. On visual inspection, the network produced sharper diaphragm contours and less blurring of the anatomical structures and lung parenchyma.

CONCLUSION: With FCNNs, the image quality of free-breathing ZTE4D lung MRI can be improved, enabling better visualization of the lung parenchyma in different respiratory phases.


Subjects
Deep Learning , Humans , Retrospective Studies , Image Interpretation, Computer-Assisted/methods , Respiration , Magnetic Resonance Imaging/methods
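
As an illustration of the training setup this abstract describes (Gaussian noise plus translation/rotation augmentation of breath-hold images, and an FCNN trained with an MSE loss to restore them), the following is a minimal sketch in PyTorch. The network depth, noise level, augmentation ranges, and training settings are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the ZTE breath-hold augmentation and FCNN denoising
# training described above. Noise level, rotation/translation ranges, network
# depth, and training settings are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def degrade(img):
    """Simulate ZTE4D-like degradation on a breath-hold slice of shape (1, H, W)."""
    angle = float(torch.empty(1).uniform_(-5.0, 5.0))             # small rotation (degrees)
    shift = [int(torch.randint(-8, 9, (1,))), int(torch.randint(-8, 9, (1,)))]
    moved = TF.affine(img, angle=angle, translate=shift, scale=1.0, shear=[0.0])
    return moved + 0.05 * torch.randn_like(moved)                 # additive Gaussian noise

class DenoiseFCNN(nn.Module):
    """Small fully convolutional network that maps degraded slices to clean ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),                       # predicts the clean image
        )

    def forward(self, x):
        return self.net(x)

model = DenoiseFCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                                            # MSE loss, as in the abstract

clean_batch = torch.rand(8, 1, 128, 128)                          # placeholder breath-hold slices
for _ in range(10):                                               # toy training loop
    degraded = torch.stack([degrade(im) for im in clean_batch])
    optimizer.zero_grad()
    loss = loss_fn(model(degraded), clean_batch)                  # restore the original image
    loss.backward()
    optimizer.step()
```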
2.
Diagnostics (Basel) ; 11(2)2021 Feb 22.
Article in English | MEDLINE | ID: mdl-33671533

ABSTRACT

Radiomics applied to MRI has shown promising results in classifying prostate cancer lesions. However, many papers describe single-center studies without external validation, and the issues of using radiomics models on unseen data have not yet been sufficiently addressed. The aim of this study was to evaluate the generalizability of radiomics models for prostate cancer classification and to compare their performance with that of radiologists. Multiparametric MRI, photographs and histology of radical prostatectomy specimens, and pathology reports of 107 patients were obtained from three healthcare centers in the Netherlands. By spatially correlating the MRI with histology, 204 lesions were identified. For each lesion, radiomics features were extracted from the MRI data. Radiomics models for discriminating high-grade (Gleason score ≥ 7) from low-grade lesions were automatically generated using open-source machine learning software. Performance was tested both in a single-center setting through cross-validation and in a multi-center setting using the two unseen datasets as external validation. For comparison with clinical practice, a multi-center classifier was tested and compared with Prostate Imaging Reporting and Data System version 2 (PI-RADS v2) scoring performed by two expert radiologists. The three single-center models obtained a mean AUC of 0.75, which decreased to 0.54 when the models were applied to the external data; on the same data, the radiologists obtained a mean AUC of 0.46. In the multi-center setting, the radiomics model obtained a mean AUC of 0.75, while the radiologists obtained a mean AUC of 0.47 on the same subset. While radiomics models perform reasonably well when tested on data from the same center(s), they may show a significant drop in performance when applied to external data. On a multi-center dataset, our radiomics model outperformed the radiologists and may thus represent a more accurate alternative for malignancy prediction.
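
The validation scheme described above (single-center cross-validation versus external testing on an unseen center) can be illustrated with the following minimal scikit-learn sketch. The feature matrices, classifier choice, and cohort sizes are placeholders; the study's automatically generated radiomics models are not reproduced here.

```python
# Hypothetical illustration of single-center cross-validation versus external
# validation for a radiomics classifier. Feature matrices are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_center_a = rng.normal(size=(120, 50))          # radiomics features, training center
y_center_a = rng.integers(0, 2, size=120)        # 1 = high-grade (Gleason score >= 7)
X_external = rng.normal(size=(84, 50))           # features from an unseen center
y_external = rng.integers(0, 2, size=84)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Single-center performance: cross-validated AUC within the training center.
cv_auc = cross_val_score(clf, X_center_a, y_center_a, cv=5, scoring="roc_auc")
print("single-center CV AUC:", cv_auc.mean())

# Generalizability: train on one center, evaluate on the unseen external center.
clf.fit(X_center_a, y_center_a)
ext_auc = roc_auc_score(y_external, clf.predict_proba(X_external)[:, 1])
print("external validation AUC:", ext_auc)
```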

3.
Cancers (Basel) ; 14(1)2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35008177

ABSTRACT

Computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve the detection of significant prostate cancer (PCa). Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods, evaluation on various external data sets is crucial. While deep-learning and radiomics approaches have been compared on the same single-center data set, a comparison of the two approaches on data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, of which one was a publicly available patient cohort (n = 195 patients) and the other contained data from patients from two hospitals (n = 79 patients). For all patients, mpMRI, radiologist tumor delineations, and pathology reports were collected. During training, one of our patient cohorts (n = 271 patients) was used for both deep-learning and radiomics model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performance of the models was assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared with AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, was a more accurate tool for significant-PCa classification in the three unseen test sets than the fully automated deep-learning model.
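
The per-cohort evaluation described above reduces to computing an AUC for each model on each unseen test set. Below is a minimal sketch of that comparison; the prediction scores are random placeholders, the cohort sizes are taken from the abstract (371 - 271 = 100 in-house test patients, plus 195 and 79 external), and neither the deep-learning nor the radiomics model is reimplemented here.

```python
# Hypothetical sketch of the per-cohort AUC comparison between a radiomics model
# and a deep-learning model. Scores are random stand-ins, not study outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
cohort_sizes = {
    "in-house test cohort": 100,
    "public cohort": 195,
    "two-hospital cohort": 79,
}

for name, n in cohort_sizes.items():
    y_true = rng.integers(0, 2, size=n)              # 1 = significant PCa (placeholder labels)
    radiomics_scores = rng.random(n)                 # stand-in model outputs
    deep_learning_scores = rng.random(n)
    print(
        f"{name}: radiomics AUC = {roc_auc_score(y_true, radiomics_scores):.2f}, "
        f"deep learning AUC = {roc_auc_score(y_true, deep_learning_scores):.2f}"
    )
```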
