Results 1 - 5 of 5
1.
Phys Med Biol; 68(19), 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37733068

ABSTRACT

Objective. Reducing CT radiation dose is an often proposed measure to enhance patient safety, which, however, results in increased image noise, translating into degradation of clinical image quality. Several deep learning methods have been proposed for low-dose CT (LDCT) denoising. The high risks posed by possible hallucinations in clinical images necessitate methods which aid the interpretation of deep learning networks. In this study, we use qualitative reader studies and quantitative radiomics studies to assess the perceived quality, signal preservation, and statistical feature preservation of LDCT volumes denoised by deep learning, and we compare interpretable deep learning methods with classical deep neural networks in clinical denoising performance. Approach. We conducted an image quality analysis study to assess the perceived image quality of the denoised volumes based on four criteria. We subsequently conducted a lesion detection/segmentation study to assess the impact of denoising on signal detectability. Finally, a radiomic analysis study was performed to assess the quantitative and statistical similarity of the denoised images to standard-dose CT (SDCT) images. Main results. Specific deep learning based algorithms generated denoised volumes which were qualitatively inferior to SDCT volumes (p < 0.05). Contrary to previous literature, denoising the volumes did not reduce the accuracy of the segmentation (p > 0.05). The denoised volumes, in most cases, yielded radiomics features which were statistically similar to those generated from SDCT volumes (p > 0.05). Significance. Our results show that the denoised volumes have a lower perceived quality than SDCT volumes, that noise and denoising do not significantly affect the detectability of abdominal lesions, and that denoised volumes contain radiomics features statistically indistinguishable from those of SDCT volumes.
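The radiomics comparison described above reduces to a paired statistical test per feature. A minimal sketch of such a check, with synthetic placeholder feature values and a Wilcoxon signed-rank test via SciPy (the paper does not specify its exact test or correction, so both are assumptions here):

```python
# Sketch: test whether radiomics features from denoised volumes are
# statistically distinguishable from standard-dose CT (SDCT) features.
# Feature values are synthetic placeholders; in practice they would
# come from a radiomics toolkit run on matched lesion segmentations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 20
feature_names = ["mean_HU", "entropy", "glcm_contrast"]  # hypothetical

# One value per patient per feature, for SDCT and denoised volumes.
sdct = {f: rng.normal(50, 5, n_patients) for f in feature_names}
denoised = {f: sdct[f] + rng.normal(0, 0.5, n_patients) for f in feature_names}

alpha = 0.05 / len(feature_names)  # Bonferroni correction across features
for f in feature_names:
    # Paired test: same patients, two processing pipelines.
    stat, p = stats.wilcoxon(sdct[f], denoised[f])
    verdict = "differs" if p < alpha else "statistically similar"
    print(f"{f}: p = {p:.3f} -> {verdict}")
```

Note that p > alpha only means no difference was detected; formally demonstrating equivalence would require an equivalence test such as TOST.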


Subjects
Deep Learning; Humans; Radiation Dosage; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Algorithms; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio
2.
Med Phys; 49(7): 4540-4553, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35362172

ABSTRACT

BACKGROUND: Deep learning has successfully solved several problems in the field of medical imaging and has been applied to the CT denoising problem with good results. However, deep learning requires large amounts of data to train deep convolutional neural networks (CNNs), and due to their large parameter counts, such deep CNNs may produce unexpected results. PURPOSE: In this study, we introduce a novel CT denoising framework which has interpretable behavior and provides useful results with limited data. METHODS: We employ bilateral filtering in both the projection and volume domains to remove noise. To account for nonstationary noise, we tune the σ parameters of the filters for every projection view and every volume voxel. The tuning is carried out by two deep CNNs. Because labeling is impractical, the two deep CNNs are trained via a Deep-Q reinforcement learning task, whose reward is generated by a custom reward function represented by a neural network. Our experiments were carried out on abdominal scans from the Mayo Clinic dataset in The Cancer Imaging Archive (TCIA) and the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. RESULTS: Our denoising framework has excellent denoising performance, increasing the peak signal-to-noise ratio (PSNR) from 28.53 to 28.93 and the structural similarity index (SSIM) from 0.8952 to 0.9204. We outperform several state-of-the-art deep CNNs which have orders of magnitude more parameters (p-value [PSNR] = 0.000, p-value [SSIM] = 0.000). Our method introduces neither the blurring caused by mean squared error (MSE) loss-based methods nor the artifacts introduced by Wasserstein generative adversarial network (WGAN)-based models. Our ablation studies show that parameter tuning and our reward network yield the best results. CONCLUSIONS: We present a novel CT denoising framework which focuses on interpretability to deliver good denoising performance, especially with limited data. Our method outperforms state-of-the-art deep neural networks. Future work will focus on accelerating our method and generalizing it to different geometries and body parts.
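Stripped of the reinforcement learning machinery, the core operation is a bilateral filter whose σ parameters vary per pixel. The sketch below is an illustration only, not the authors' implementation: plain NumPy, 2D instead of projection/volume domains, and a constant σ map standing in for the CNN prediction; the Deep-Q training loop and reward network are omitted.

```python
# Sketch: a bilateral filter with a per-pixel intensity sigma map, as it
# would be when predicted by a tuning CNN (here a hypothetical constant map).
import numpy as np

def bilateral_filter(img, sigma_spatial, sigma_intensity, radius=3):
    """Denoise `img` (H, W); `sigma_intensity` is a per-pixel map (H, W)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Spatial weights are shared across pixels; range weights are not.
    spatial_w = np.exp(-(ys**2 + xs**2) / (2 * sigma_spatial**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            intensity_w = np.exp(-(patch - img[i, j])**2
                                 / (2 * sigma_intensity[i, j]**2))
            weights = spatial_w * intensity_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

noisy = np.random.default_rng(1).normal(0, 0.1, (64, 64)) + 1.0
sigma_map = np.full(noisy.shape, 0.2)  # would come from the tuning CNN
denoised = bilateral_filter(noisy, sigma_spatial=2.0, sigma_intensity=sigma_map)
```

Per-pixel tuning is what lets the method adapt to nonstationary noise: regions with heavier noise can receive a wider intensity kernel without over-smoothing quieter regions.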


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Artifacts; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods
3.
Proc Inst Mech Eng H; 236(5): 722-729, 2022 May.
Article in English | MEDLINE | ID: mdl-35199619

ABSTRACT

The primary objective of this study was to develop a method that allows accurate quantification of plantar soft tissue stiffness distribution and homogeneity. The secondary aim was to investigate whether differences in soft tissue stiffness distribution and homogeneity can be detected between ulcerated and non-ulcerated feet. Novel measures of individual pixel stiffness, named quantitative strainability (QS) and relative strainability (RS), were developed. Strain elastography data were obtained from 39 patients with diabetic neuropathy, nine of whom had active diabetic foot ulcers. The patients with active diabetic foot ulcers had wounds in parts of the foot other than the first metatarsal head and the heel, where the elastography measurements were conducted. RS was used to measure changes and gradients in the stiffness distribution of plantar soft tissues in participants with and without active diabetic foot ulcers. Plantar soft tissue homogeneity in the superior-inferior direction in the left forefoot was found to be significantly (p < 0.05) higher in the ulcerated group than in the non-ulcerated group. The assessment of homogeneity showed potential to further explain the nature of the tissue changes that can increase internal stress. This can have implications for assessing vulnerability to plantar soft tissue damage and ulceration in diabetes.
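The abstract does not define QS or RS mathematically, so any code can only be a guess. Purely as an illustration, the sketch below assumes RS is per-pixel strain normalized by the mean strain of the region and quantifies superior-inferior homogeneity as the inverse coefficient of variation of layer-mean strain across depth; neither definition is taken from the paper.

```python
# Illustrative only: QS/RS formulas are NOT given in the abstract.
# Assumed here: RS = pixel strain / ROI mean strain, and homogeneity =
# inverse coefficient of variation of per-layer mean strain along the
# superior-inferior (row) axis.
import numpy as np

def relative_strainability(strain_map):
    return strain_map / strain_map.mean()

def si_homogeneity(strain_map):
    # Mean strain per superior-inferior layer (one value per image row),
    # then how uniform those layer means are across depth.
    layer_means = strain_map.mean(axis=1)
    return 1.0 / (layer_means.std() / layer_means.mean())

strain = np.abs(np.random.default_rng(2).normal(0.05, 0.01, (128, 96)))
rs = relative_strainability(strain)
print(f"Superior-inferior homogeneity: {si_homogeneity(strain):.2f}")
```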


Subjects
Diabetic Foot; Elasticity Imaging Techniques; Biomechanical Phenomena; Diabetic Foot/diagnostic imaging; Foot/diagnostic imaging; Heel/diagnostic imaging; Humans
4.
Sci Rep; 12(1): 17540, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36266416

ABSTRACT

Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding RED-CNN or QAE, two well-established DL-based denoisers, in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla model. In conclusion, the proposed trainable JBFs limit the error bound of deep neural networks, facilitating the applicability of DL-based denoisers in low-dose CT pipelines.
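A minimal sketch of the hybrid idea, under stated assumptions: plain NumPy, a placeholder identity function where RED-CNN or QAE output would supply the guidance image, and illustrative parameter values rather than the paper's trained ones.

```python
# Sketch: a joint bilateral filter (JBF) smooths the noisy input using
# range weights computed on a network-predicted guidance image.
import numpy as np

def joint_bilateral_filter(noisy, guidance, sigma_spatial=2.0,
                           sigma_range=0.1, radius=3):
    pad_n = np.pad(noisy, radius, mode="reflect")
    pad_g = np.pad(guidance, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(ys**2 + xs**2) / (2 * sigma_spatial**2))
    out = np.zeros_like(noisy)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            n_patch = pad_n[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_patch = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights depend on the guidance only; the values being
            # averaged are always the measured noisy pixels.
            range_w = np.exp(-(g_patch - guidance[i, j])**2
                             / (2 * sigma_range**2))
            w = spatial_w * range_w
            out[i, j] = (w * n_patch).sum() / w.sum()
    return out

def placeholder_denoiser(img):
    # Hypothetical stand-in for RED-CNN/QAE predicting the guidance image.
    return img

noisy = np.random.default_rng(3).normal(0.5, 0.05, (64, 64))
denoised = joint_bilateral_filter(noisy, guidance=placeholder_denoiser(noisy))
```

Because each output pixel is a convex combination of measured noisy pixels, it stays within the local input range no matter what the network predicts for the guidance, which is one way to read the claim that the JBFs limit the network's error bound.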


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Algorithms; Signal-To-Noise Ratio
5.
Med Phys; 49(8): 5107-5120, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35583171

ABSTRACT

BACKGROUND: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT image quality can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. PURPOSE: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. METHODS: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. We demonstrate denoising in pure image-to-image pipelines and across different domains, such as raw detector data and the reconstructed volume, using a differentiable backprojection layer. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. RESULTS: Although using only three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. CONCLUSIONS: Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
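To make the "gradient flow toward its hyperparameters" idea concrete, here is a toy differentiable bilateral filter in PyTorch whose σ values are the only trainable parameters. This is a sketch under simplifying assumptions: it is 2D with two spatial σ and one range σ, and uses dense autograd, whereas the published layer is 3D (three spatial σ plus one range σ) with a dedicated gradient implementation.

```python
# Sketch: a bilateral filter layer whose sigmas are nn.Parameters, so an
# MSE loss can optimize them end to end via autograd.
import torch

class BilateralFilter2d(torch.nn.Module):
    def __init__(self, radius=2):
        super().__init__()
        self.radius = radius
        # The only trainable parameters (kept positive by init; a real
        # implementation would constrain them, e.g. via softplus).
        self.sigma_x = torch.nn.Parameter(torch.tensor(1.5))
        self.sigma_y = torch.nn.Parameter(torch.tensor(1.5))
        self.sigma_r = torch.nn.Parameter(torch.tensor(0.1))

    def forward(self, img):  # img: (H, W)
        r, (h, w) = self.radius, img.shape
        pad = torch.nn.functional.pad(img[None, None], (r, r, r, r),
                                      mode="reflect")[0, 0]
        # Stack all shifted neighbors: (K, H, W) with K = (2r+1)^2.
        nbrs = torch.stack([pad[r + dy: r + dy + h, r + dx: r + dx + w]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1)])
        dy = torch.tensor([float(a) for a in range(-r, r + 1)
                           for _ in range(2 * r + 1)])
        dx = torch.tensor([float(b) for _ in range(2 * r + 1)
                           for b in range(-r, r + 1)])
        spatial = torch.exp(-dy**2 / (2 * self.sigma_y**2)
                            - dx**2 / (2 * self.sigma_x**2))[:, None, None]
        rng = torch.exp(-(nbrs - img)**2 / (2 * self.sigma_r**2))
        weights = spatial * rng
        return (weights * nbrs).sum(0) / weights.sum(0)

# Optimize the three sigmas directly against a clean target.
layer = BilateralFilter2d()
opt = torch.optim.Adam(layer.parameters(), lr=0.05)
clean = torch.zeros(32, 32)
clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(50):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(layer(noisy), clean)
    loss.backward()
    opt.step()
```

Because the operation is constrained to be a bilateral filter regardless of what values the σ parameters take, the layer cannot hallucinate structure, which is the basis of the data-integrity claim above.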


Subjects
Algorithms; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods