Results 1 - 6 of 6
1.
Magn Reson Med; 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-34652819

ABSTRACT

PURPOSE: To design and manufacture a pelvis phantom for magnetic resonance (MR)-guided prostate interventions, such as MR-guided biopsy (MRGB) or brachytherapy seed placement. METHODS: The phantom was designed to mimic the human pelvis, incorporating bones, bladder, prostate with four lesions, urethra, arteries, veins, and six lymph nodes embedded in ballistic gelatin. A hollow rectum enables transrectal access to the prostate. To demonstrate the feasibility of the phantom for minimally invasive MRI-guided interventions, a targeted in-bore MRGB was performed. The needle probe was rectally inserted and guided using an MRI-compatible remote-controlled manipulator (RCM). RESULTS: The presented pelvis phantom has realistic imaging properties for MR imaging (MRI), computed tomography (CT), and ultrasound (US). In the targeted in-bore MRGB, a prostate lesion was successfully hit with an accuracy of 3.5 mm. The experiment demonstrates that the limited size of the rectum represents a realistic impairment for needle placements. CONCLUSION: The phantom provides a valuable platform for evaluating the performance of MRGB systems. Interventionalists can use the phantom to learn how to deal with challenging situations, without risking harm to patients.

2.
Int J Comput Assist Radiol Surg; 16(8): 1277-1285, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33934313

ABSTRACT

PURPOSE: Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the diagnosis of medical conditions and the success of interventional medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated multimodal 4D datasets. METHODS: We use a CycleGAN network architecture to generate multimodal synthetic data from the 4D extended cardiac-torso (XCAT) phantom and real patient data. Organ masks are provided by the XCAT phantom; therefore, the generated dataset can serve as ground truth for image segmentation and registration. Realistic simulation of respiration and heartbeat is possible within the XCAT framework. To underline the usability as a registration ground truth, a proof-of-principle registration is performed. RESULTS: Compared to real patient data, the synthetic data showed good agreement regarding the image voxel intensity distribution and the noise characteristics. The generated T1-weighted magnetic resonance imaging, computed tomography (CT), and cone-beam CT images are inherently co-registered. Thus, the synthetic dataset allowed us to optimize registration parameters of a multimodal non-rigid registration, utilizing liver organ masks for evaluation. CONCLUSION: Our proposed framework provides not only annotated but also multimodal synthetic data, which can serve as a ground truth for various tasks in medical image processing. We demonstrated the applicability of synthetic data for the development of multimodal medical image registration algorithms.


Subjects
Algorithms; Computer Simulation; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Humans
3.
Magn Reson Med; 86(1): 471-486, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33547656

ABSTRACT

PURPOSE: To develop an accelerated postprocessing pipeline for reproducible and efficient assessment of white matter lesions using quantitative magnetic resonance fingerprinting (MRF) and deep learning. METHODS: MRF scans using echo-planar imaging (EPI) with varying repetition and echo times were acquired for whole-brain quantification of T1 and T2* in 50 subjects with multiple sclerosis (MS) and 10 healthy volunteers across 2 centers. MRF T1 and T2* parametric maps were distortion corrected and denoised. A convolutional neural network (CNN) was trained to reconstruct the T1 and T2* parametric maps, and the white matter (WM) and gray matter (GM) probability maps. RESULTS: Deep learning-based postprocessing reduced reconstruction and image processing times from hours to a few seconds while maintaining high accuracy, reliability, and precision. Mean absolute error performed best for T1 (deviations 5.6%) and the logarithmic hyperbolic cosine loss best for T2* (deviations 6.0%). CONCLUSIONS: MRF is a fast and robust tool for quantitative T1 and T2* mapping. Its long reconstruction and several postprocessing steps can be facilitated and accelerated using deep learning.


Subjects
Deep Learning; White Matter; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy; Phantoms, Imaging; Reproducibility of Results; White Matter/diagnostic imaging
4.
NMR Biomed; 34(4): e4474, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33480128

ABSTRACT

Quantitative 23Na magnetic resonance imaging (MRI) provides the tissue sodium concentration (TSC), which is connected to cell viability and vitality. Long acquisition times are one of the most challenging obstacles to its clinical establishment. K-space undersampling is an approach for acquisition time reduction, but it generates noise and artifacts. Convolutional neural networks (CNNs) are increasingly used in medical imaging and are a useful tool for MRI postprocessing. The aim of this study was to reduce the 23Na MRI acquisition time by k-space undersampling, with CNNs applied to reduce the resulting noise and artifacts. A retrospective analysis of a prospective study was conducted, including image datasets from 46 patients (aged 72 ± 13 years; 25 women, 21 men) with ischemic stroke; the 23Na MRI acquisition time was 10 min. Reconstructions were performed with the full dataset (FI) and with a simulated undersampled dataset corresponding to an image acquired in 2.5 min (RI). Eight different CNNs with either U-Net-based or ResNet-based architectures were implemented with RI as input and FI as label, using batch normalization and the number of filters as varying parameters. Training was performed with 9500 samples and testing included 400 samples. CNN outputs were evaluated based on the signal-to-noise ratio (SNR) and structural similarity (SSIM). After quantification, the TSC error was calculated. Image quality was subjectively rated by three neuroradiologists. Statistical significance was evaluated by Student's t-test. The average SNR was 21.72 ± 2.75 (FI) and 10.16 ± 0.96 (RI). U-Nets increased the SNR of RI to 43.99 and therefore performed better than ResNets. The SSIM of RI to FI was improved by three CNNs to 0.91 ± 0.03. CNNs reduced the TSC error by up to 15%. The subjective rating of CNN-generated images was significantly better than that of RI.
The acquisition time of 23Na MRI can thus be reduced by 75% through postprocessing with a CNN on highly undersampled data.

5.
IEEE Trans Biomed Eng; 68(5): 1518-1526, 2021 May.
Article in English | MEDLINE | ID: mdl-33275574

ABSTRACT

OBJECTIVE: Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment in various clinical scenarios. We present a fully automatic method for the extraction and differentiation of the arterial and venous vessel trees from abdominal contrast-enhanced computed tomography (CE-CT) volumes using convolutional neural networks (CNNs). METHODS: We used a novel ratio-based sampling method to train 2D and 3D versions of the U-Net, the V-Net, and the DeepVesselNet. Networks were trained with a combination of Dice and cross-entropy loss. Performance was evaluated on 20 IRCAD subjects. The best-performing networks were combined into an ensemble, for which we investigated seven different weighting schemes. The trained networks were additionally applied to 26 BTCV cases to validate their generalizability. RESULTS: Based on our experiments, the optimal configuration is an equally weighted ensemble of 2D and 3D U-Nets and V-Nets. Our method achieved Dice similarity coefficients of 0.758 ± 0.050 (veins) and 0.838 ± 0.074 (arteries) on the IRCAD data set. Application to the BTCV data set showed high transferability. CONCLUSION: Abdominal vascular structures can be segmented more accurately using ensembles than individual CNNs. 2D and 3D networks have complementary strengths and weaknesses. Our ensemble of 2D and 3D U-Nets and V-Nets in combination with ratio-based sampling achieves high agreement with manual annotations for both artery and vein segmentation, and our results surpass other state-of-the-art methods. SIGNIFICANCE: Our segmentation pipeline can provide valuable information for the planning of living-donor organ transplantations.


Subjects
Neural Networks, Computer; Tomography, X-Ray Computed; Abdomen/diagnostic imaging; Arteries; Humans; Image Processing, Computer-Assisted
6.
Magn Reson Imaging; 75: 116-123, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32987123

ABSTRACT

Development of a deterministic algorithm for automated detection of the arterial input function (AIF) in DCE-MRI of colorectal cancer, using a filter pipeline to determine the AIF region of interest, with comparison to algorithms from the literature via the mean squared error and the quantitative perfusion parameter Ktrans. The AIF found by our algorithm has a lower mean squared error (0.0022 ± 0.0021) with respect to the manual annotation than comparable algorithms. The error of Ktrans (21.52 ± 17.2%) is also lower than that of the other algorithms. Our algorithm generates reproducible results and thus supports a robust and comparable perfusion analysis.


Subjects
Algorithms; Arteries/diagnostic imaging; Arteries/physiopathology; Blood Circulation; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/physiopathology; Magnetic Resonance Imaging; Automation; Contrast Media; Humans; Image Processing, Computer-Assisted; Reproducibility of Results