Results 1 - 7 of 7

1.
J Microsc; 287(2): 81-92, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35638174

ABSTRACT

High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, holes in the bone a few micrometres in size. Imaging living animals at that resolution, however, is extremely challenging and requires sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows trainable modules to be fitted both before and after the actual reconstruction step in a purely data-driven way, using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping-correction module operating on the raw projection images, using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality as it avoids cupping artefacts and decreases the difference in grey values between outer and inner bone by 68-94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel computed tomography systems. This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and is hence an important step towards reducing the resolution limit of in vivo bone imaging to the single-micrometre domain.
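The self-supervised calibration idea can be sketched in a few lines. The toy example below fits a hypothetical one-parameter cupping-correction module by gradient descent on a flatness metric of the grey values alone, with no calibration measurement; the correction model, the metric, and the finite-difference gradient are illustrative stand-ins for the paper's differentiable pipeline:

```python
import numpy as np

# A homogeneous object should show flat grey values; simulated cupping
# makes the interior appear darker towards the centre.
x = np.linspace(-1.0, 1.0, 201)
inside = np.abs(x) < 0.8
cupped = np.where(inside, 1.0 - 0.3 * (0.8 ** 2 - x ** 2), 0.0)

def corrected(profile, alpha):
    # hypothetical one-parameter correction applied to the raw data
    return profile + alpha * np.where(inside, 0.8 ** 2 - x ** 2, 0.0)

def metric(profile):
    # self-supervisory quality metric: grey-value variance inside the object
    return np.var(profile[inside])

alpha, lr, eps = 0.0, 1.0, 1e-4
for _ in range(200):
    g = (metric(corrected(cupped, alpha + eps))
         - metric(corrected(cupped, alpha - eps))) / (2 * eps)
    alpha -= lr * g

print(round(alpha, 3))  # 0.3: exactly cancels the simulated cupping
```

In the real pipeline the gradient flows analytically through the reconstruction operator instead of being approximated by finite differences.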


Subjects
Image Processing, Computer-Assisted; Microscopy; Animals; Calibration; Image Processing, Computer-Assisted/methods; Mice; Microscopy/methods; Retrospective Studies; X-Rays
2.
Phys Med Biol; 68(20), 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37779386

ABSTRACT

Objective: Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high-quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. Approach: The fan-beam CT reconstruction is derived analytically with respect to the acquisition geometry. This allows gradient information from a loss function on the reconstructed image to be propagated into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion-affected reconstruction alone. Main results: The algorithm improves the structural similarity index measure (SSIM) from 0.848 for the initial motion-affected reconstruction to 0.946 after compensation. It also generalizes to real fan-beam sinograms rebinned from a helical trajectory, where the SSIM increases from 0.639 to 0.742. Significance: Using the proposed method, we are the first to optimize an autofocus-inspired algorithm based on analytical gradients. Beyond motion compensation, we see further use cases of our differentiable method in scanner calibration or hybrid techniques employing deep models.
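The principle of steering a geometry parameter with an image-domain autofocus cost can be illustrated with a toy 1D analogue: one "view" is acquired with an unknown rigid shift, and a sharpness cost on the combined image drives the compensation parameter. The signals, the cost, and the finite-difference gradient are invented for illustration; the paper derives the gradient analytically through the fan-beam reconstruction:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
t_true = 0.07                                   # unknown rigid shift
view1 = np.exp(-((x - 0.50) / 0.05) ** 2 / 2)
view2 = np.exp(-((x - 0.50 - t_true) / 0.05) ** 2 / 2)

def shift(signal, d):
    # exact sub-pixel shift via an FFT phase ramp (circular, but the
    # feature sits far from the boundaries)
    k = np.fft.fftfreq(signal.size, d=x[1] - x[0])
    return np.fft.ifft(np.fft.fft(signal) * np.exp(-2j * np.pi * k * d)).real

def cost(theta):
    # compensate view 2 by theta, average both views, and penalise blur:
    # misalignment smears the feature and lowers the squared-gradient sum
    r = 0.5 * (view1 + shift(view2, -theta))
    return -np.sum(np.diff(r) ** 2)

theta, lr, eps = 0.0, 0.05, 1e-5
for _ in range(600):
    g = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta -= lr * g

print(round(theta, 3))  # ~0.07: the sharpness cost recovers the shift
```

Note how the loss lives entirely in the image domain while the optimized parameter is geometric, the same gradient-flow structure as in the paper.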


Subjects
Neural Networks, Computer; Tomography, X-Ray Computed; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Algorithms; Calibration; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography; Artifacts
3.
Sci Rep; 12(1): 17540, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36266416

ABSTRACT

Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced which outperform conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. The proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding RED-CNN and QAE, two well-established DL-based denoisers, in our pipeline, the denoising performance improves by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla models. In conclusion, the proposed trainable JBFs limit the error bound of the deep neural networks and thereby facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
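The joint-bilateral-filter building block is easy to state in isolation. In this 1D sketch, range weights come from a guidance signal rather than from the noisy input, so a good guidance (a CNN prediction in the paper; an ideal clean edge stands in here) lets the filter average out noise without blurring across the edge. All signals and parameter values are illustrative:

```python
import numpy as np

def joint_bilateral_1d(signal, guide, sigma_s=2.0, sigma_i=0.2, radius=5):
    # spatial weights from pixel distance, range weights from the
    # *guidance* signal; the weighted average is applied to `signal`
    out = np.empty_like(signal)
    off = np.arange(-radius, radius + 1)
    spatial = np.exp(-off ** 2 / (2 * sigma_s ** 2))
    pad_s = np.pad(signal, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    for i in range(signal.size):
        win_s = pad_s[i : i + 2 * radius + 1]
        win_g = pad_g[i : i + 2 * radius + 1]
        w = spatial * np.exp(-(win_g - guide[i]) ** 2 / (2 * sigma_i ** 2))
        out[i] = np.sum(w * win_s) / np.sum(w)
    return out

rng = np.random.default_rng(0)
clean = np.where(np.arange(200) < 100, 0.0, 1.0)  # a sharp edge
noisy = clean + 0.1 * rng.normal(size=200)
denoised = joint_bilateral_1d(noisy, clean)

err_before = np.abs(noisy - clean).mean()
err_after = np.abs(denoised - clean).mean()
print(err_after < err_before)  # True: noise is averaged, the edge is kept
```

Because the filter's output is a convex combination of input pixels, a poor guidance degrades edge preservation but cannot hallucinate structure, which is the error-bounding property the paper exploits.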


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Algorithms; Signal-to-Noise Ratio
4.
Med Phys; 49(8): 5107-5120, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35583171

ABSTRACT

BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with strong bone-soft-tissue contrast. However, CT resolution can be severely degraded in low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising is demonstrated in pure image-to-image pipelines and across different domains, such as raw detector data and reconstructed volume, using a differentiable backprojection layer. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS: Although using only three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on X-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
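How so few hyperparameters can be "trained" is illustrated below: the intensity-range parameter of a plain 1D bilateral filter is fitted by gradient descent on a supervised MSE loss. The finite-difference gradient and all signal values are stand-ins for the paper's analytically differentiable filter layer:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.where(np.arange(128) < 64, 0.0, 1.0)   # ideal edge
noisy = clean + 0.08 * rng.normal(size=128)

def bilateral(signal, sigma_i, sigma_s=2.0, radius=4):
    # classic 1D bilateral filter; sigma_i is the intensity-range width
    out = np.empty_like(signal)
    off = np.arange(-radius, radius + 1)
    spatial = np.exp(-off ** 2 / (2 * sigma_s ** 2))
    p = np.pad(signal, radius, mode="edge")
    for i in range(signal.size):
        win = p[i : i + 2 * radius + 1]
        w = spatial * np.exp(-(win - signal[i]) ** 2 / (2 * sigma_i ** 2))
        out[i] = np.sum(w * win) / np.sum(w)
    return out

def loss(sigma_i):
    # supervised training loss against the clean reference
    return np.mean((bilateral(noisy, sigma_i) - clean) ** 2)

sigma_i, lr, eps = 1.0, 5.0, 1e-4
loss0, history = loss(sigma_i), []
for _ in range(150):
    g = (loss(sigma_i + eps) - loss(sigma_i - eps)) / (2 * eps)
    sigma_i -= lr * g
    history.append(loss(sigma_i))

best = min(history)
print(best < loss0)  # the fitted range parameter denoises better
```

An overly large range parameter blurs the edge, an overly small one stops averaging noise; the data-driven fit lands in between, which is the whole content of "trainable" here.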


Subjects
Algorithms; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Signal-to-Noise Ratio; Tomography, X-Ray Computed/methods
5.
IEEE J Biomed Health Inform; 25(7): 2698-2709, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33351771

ABSTRACT

Quantitative assessment of cardiac left ventricle (LV) morphology is essential to assess cardiac function and improve the diagnosis of different cardiovascular diseases. In current clinical practice, LV quantification depends on the measurement of myocardial shape indices, which is usually achieved by manual contouring of the endo- and epicardium. However, this process is subject to inter- and intra-observer variability, and it is a time-consuming and tedious task. In this article, we propose a spatio-temporal multi-task learning approach that obtains a complete set of measurements quantifying cardiac LV morphology and regional wall thickness (RWT), and additionally detects the cardiac phase (systole or diastole), for a given 3D cine magnetic resonance (MR) image sequence. We first segment the cardiac LV using an encoder-decoder network and then introduce a multi-task framework that regresses 11 LV indices and classifies the cardiac phase as parallel tasks during model optimization. The proposed deep learning model is based on 3D spatio-temporal convolutions, which extract spatial and temporal features from MR images. We demonstrate the efficacy of the proposed method using cine-MR sequences of 145 subjects and compare its performance with other state-of-the-art quantification methods. The proposed method obtained high prediction accuracy, with average mean absolute errors (MAE) of 129 mm², 1.23 mm, and 1.76 mm and Pearson correlation coefficients (PCC) of 96.4%, 87.2%, and 97.5% for the LV and myocardium (Myo) cavity regions, the 6 RWTs, and the 3 LV dimensions, respectively, and an error rate of 9.0% for phase classification. The experimental results highlight the robustness of the proposed method despite varying degrees of cardiac morphology, image appearance, and low contrast in the cardiac MR sequences.
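The multi-task objective (regressing LV indices and classifying the cardiac phase from shared features) can be sketched minimally; the feature vector, head weights, and targets below are random stand-ins, not the paper's 3D spatio-temporal network:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the shared spatio-temporal feature vector of one frame
features = rng.normal(size=64)

W_reg = 0.1 * rng.normal(size=(11, 64))  # head 1: regress 11 LV indices
W_cls = 0.1 * rng.normal(size=(2, 64))   # head 2: classify systole/diastole

indices_pred = W_reg @ features
logits = W_cls @ features
phase_prob = np.exp(logits) / np.exp(logits).sum()  # softmax over 2 phases

indices_true = rng.normal(size=11)       # invented regression targets
phase_true = 0                           # e.g. systole

mae = np.mean(np.abs(indices_pred - indices_true))  # regression term
ce = -np.log(phase_prob[phase_true])                # classification term
# both tasks are optimised jointly through the shared features
total_loss = mae + ce
print(f"mae={mae:.3f} ce={ce:.3f}")
```

Because both heads back-propagate into the same features, the phase-classification task can regularize the index regression and vice versa, which is the usual motivation for the parallel-task setup.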


Subjects
Heart Ventricles; Magnetic Resonance Imaging, Cine; Heart; Heart Ventricles/diagnostic imaging; Humans; Magnetic Resonance Imaging; Radiography
6.
IEEE Trans Med Imaging; 40(7): 1838-1851, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33729930

ABSTRACT

Deep learning models are sensitive to domain shift. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures, mainly because the data distribution between the two domains differs. Moreover, creating annotations for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods aim to reduce the gap between source and target domains by leveraging source-domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods show degraded performance when there is insufficient data in the source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domains in different spaces. It introduces an end-to-end framework that integrates: (a) entropy minimization, (b) output feature-space alignment, and (c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced steady-state free precession MRI), to the unannotated target domain, LGE-MRI (late gadolinium enhancement MRI), for the multi-sequence dataset, and from MRI (source) to CT (target) for the cross-modality dataset. The results show that, by enforcing adversarial learning in different parts of the network, the proposed method delivers promising performance compared to other SOTA methods.
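Of the three adaptation terms, entropy minimization is the simplest to write down: on unlabelled target-domain images, the mean per-pixel entropy of the softmax output is penalised, pushing the segmenter towards confident, source-like predictions. The sketch below uses invented probability maps purely to illustrate the term:

```python
import numpy as np

def pixelwise_entropy(prob):
    # prob: (C, H, W) softmax output; returns the mean Shannon entropy
    # per pixel (the quantity minimised on target-domain predictions)
    return float(np.mean(-np.sum(prob * np.log(prob + 1e-8), axis=0)))

# a confident two-class prediction (low entropy) vs. a maximally
# uncertain one, on a tiny 4x4 "segmentation map"
confident = np.zeros((2, 4, 4))
confident[0], confident[1] = 0.95, 0.05
uncertain = np.full((2, 4, 4), 0.5)

print(pixelwise_entropy(confident) < pixelwise_entropy(uncertain))  # True
```

Minimising this loss alone can collapse predictions, which is why the paper pairs it with adversarial output-space alignment and the point-cloud shape term.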


Subjects
Heart; Image Processing, Computer-Assisted; Entropy; Heart/diagnostic imaging; Humans; Magnetic Resonance Imaging
7.
J Magn Reson; 332: 107079, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34638086

ABSTRACT

During oil and gas exploration, it is difficult to quantitatively evaluate fluid components and accurately calculate the saturation of different fluids because the fluid components overlap on the 2D NMR spectrum. In this paper, blind source separation (BSS) is proposed to separate fluid components, exploiting the statistical independence of fluid signals on the 2D NMR spectrum. Fast independent component analysis (FastICA) is applied to the inverted NMR spectra of an entire logged interval to obtain the residual information needed to determine the number of fluid components. Based on this number, non-negative matrix factorization (NMF) is used to extract the features of the fluid components on the NMR spectrum, and the 2D NMR spectrum is divided into different regions. The overlapping regions are classified by distance, or by distance together with T1/T2, to obtain the modified NMR spectrum. Through T2-D and T1-T2 numerical simulations, the fluid saturations calculated by the proposed method and by NMF alone are compared to verify the effectiveness of the proposed method. The results show that the proposed method can determine the number of fluid components effectively, and the calculated fluid saturations are more accurate than those obtained by NMF alone.
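The NMF step can be sketched with generic Lee-Seung multiplicative updates on synthetic overlapping peaks; the peak shapes, mixing ratios, and component count below are invented for illustration and do not reproduce the paper's T2-D or T1-T2 processing:

```python
import numpy as np

rng = np.random.default_rng(0)
axis = np.linspace(0.0, 1.0, 100)
comp1 = np.exp(-((axis - 0.30) / 0.08) ** 2)  # invented "water-like" peak
comp2 = np.exp(-((axis - 0.55) / 0.10) ** 2)  # invented "oil-like" peak
S = np.stack([comp1, comp2])                  # true component spectra
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])                    # mixing ratios per sample
V = A @ S                                     # observed spectra (3, 100)

# Lee-Seung multiplicative updates: V ~ W @ H with W, H >= 0
k = 2            # component count, assumed known here (determined via
                 # FastICA residual analysis in the paper)
W = rng.random((3, k))
H = rng.random((k, 100))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err < 0.1)  # the 2-component model reconstructs the mixtures well
```

The rows of `H` recover nonnegative component spectra and the columns of `W` play the role of per-sample component fractions, which is why choosing the right `k` matters before factorizing.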
