Results 1 - 9 of 9
1.
Sci Rep ; 14(1): 10306, 2024 05 05.
Article in English | MEDLINE | ID: mdl-38705883

ABSTRACT

Multiple ophthalmic diseases lead to decreased capillary perfusion that can be visualized using optical coherence tomography angiography images. To quantify the decrease in perfusion, past studies have often used the vessel density, which is the percentage of vessel pixels in the image. However, this method is often not sensitive enough to detect subtle changes in early pathology. More recent methods are based on quantifying non-perfused or intercapillary areas between the vessels. These methods rely on the accuracy of vessel segmentation, which is a challenging task and therefore a limiting factor for reliability. Intercapillary areas computed from perfusion-distance measures are less sensitive to errors in the vessel segmentation, since the distance to the next vessel changes only slightly if gaps are present in the segmentation. We present a novel method for distinguishing between glaucoma patients and healthy controls based on features computed from the probability density function of these perfusion-distance areas. The proposed approach is evaluated on different capillary plexuses and outperforms previously proposed methods that use handcrafted features for classification. Moreover, the results of the proposed method are in the same range as those of convolutional neural networks trained on the raw input images, making it a computationally efficient, simple-to-implement, and explainable alternative to deep learning-based approaches.
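
A minimal sketch of the perfusion-distance idea, assuming a binary vessel segmentation as input: the distance transform gives each non-vessel pixel its distance to the nearest vessel, and summary statistics of the resulting probability density serve as features. The specific features shown (mean, standard deviation, skewness) are illustrative assumptions, not necessarily those used in the study.

```python
# Hedged sketch: perfusion-distance map from a binary vessel mask and simple
# PDF-based summary features. The exact features in the paper may differ;
# the moments used here are illustrative assumptions.
import numpy as np
from scipy import ndimage

def perfusion_distance_features(vessel_mask: np.ndarray, n_bins: int = 32):
    """vessel_mask: 2D boolean array, True where a vessel pixel was segmented."""
    # Distance of every non-vessel pixel to its nearest vessel pixel.
    dist = ndimage.distance_transform_edt(~vessel_mask)
    values = dist[dist > 0]                      # intercapillary (non-vessel) pixels only
    # Empirical probability density of the perfusion distances.
    pdf, edges = np.histogram(values, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Illustrative summary features of the distribution.
    mean = float(values.mean())
    std = float(values.std())
    skew = float(((values - mean) ** 3).mean() / (std ** 3 + 1e-12))
    return {"pdf": pdf, "bin_centers": centers, "mean": mean, "std": std, "skew": skew}

# Example: a toy 64x64 mask with a single vertical "vessel".
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32] = True
print(perfusion_distance_features(mask)["mean"])
```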


Subject(s)
Glaucoma, Retinal Vessels, Optical Coherence Tomography, Optical Coherence Tomography/methods, Humans, Glaucoma/diagnostic imaging, Glaucoma/diagnosis, Retinal Vessels/diagnostic imaging, Retinal Vessels/pathology, Female, Male, Middle Aged, Computer-Assisted Image Processing/methods, Capillaries/diagnostic imaging, Capillaries/pathology
2.
Sci Rep ; 13(1): 10382, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37369731

ABSTRACT

Denoising in optical coherence tomography (OCT) is important to compensate for the low signal-to-noise ratio caused by laser speckle. In recent years, learning algorithms have been established as the most powerful denoising approach. Unsupervised denoising is especially interesting, since noise-free scans cannot be acquired with OCT. However, speckle in in-vivo OCT images contains not only noise but also information about blood flow. Existing OCT denoising algorithms treat all speckle equally and do not distinguish between the noise component and the flow-information component of speckle. Consequently, they tend either to remove all speckle or to denoise insufficiently. Unsupervised denoising methods tend to remove all speckle but produce blurry results, which is not desired in clinical applications. To this end, we propose the concept that an OCT denoising method should, besides reducing uninformative noise, also preserve the flow-related speckle information. In this work, we present a fully unsupervised algorithm for single-frame OCT denoising (SSN2V) that fulfills these goals by incorporating known operators into our network. This additional constraint greatly improves the denoising capability compared to a network without it. Quantitative and qualitative results show that the proposed method can effectively reduce the speckle noise in OCT B-scans of the human retina while maintaining a sharp impression, outperforming the compared methods.
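
For context, a hedged sketch of the self-supervised blind-spot masking that single-frame unsupervised denoisers of this kind commonly build on; the known-operator constraint that distinguishes SSN2V is not reproduced here, and all names and hyperparameters are illustrative.

```python
# Hedged sketch of a Noise2Void-style masking step for self-supervised,
# single-frame denoising. The known-operator constraint described in the
# abstract is not shown; names and values are illustrative.
import torch

def blind_spot_batch(noisy: torch.Tensor, n_masked: int = 64):
    """noisy: (B, 1, H, W) B-scans. Returns masked input, target, and mask."""
    b, _, h, w = noisy.shape
    masked = noisy.clone()
    mask = torch.zeros_like(noisy, dtype=torch.bool)
    for i in range(b):
        ys = torch.randint(1, h - 1, (n_masked,))
        xs = torch.randint(1, w - 1, (n_masked,))
        # Replace each masked pixel by a random neighbour so the network
        # cannot learn the identity mapping.
        oy = torch.randint(-1, 2, (n_masked,))
        ox = torch.randint(-1, 2, (n_masked,))
        masked[i, 0, ys, xs] = noisy[i, 0, ys + oy, xs + ox]
        mask[i, 0, ys, xs] = True
    return masked, noisy, mask

# Training step (model is any image-to-image CNN):
# masked, target, mask = blind_spot_batch(batch)
# loss = ((model(masked) - target)[mask] ** 2).mean()
```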

3.
Biomed Opt Express ; 12(12): 7434-7444, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-35003844

ABSTRACT

Glaucoma is among the leading causes of irreversible blindness worldwide. If diagnosed and treated early enough, the disease progression can be stopped or slowed down. It would therefore be very valuable to detect the early, mostly asymptomatic stages of glaucoma by broad screening. This study examines different computational features that can be automatically deduced from images and their performance on the classification task of differentiating glaucoma patients and healthy controls. The data used for this study are 3 × 3 mm en face optical coherence tomography angiography (OCTA) images of different retinal projections (the whole retina, the superficial vascular plexus (SVP), the intermediate capillary plexus (ICP), and the deep capillary plexus (DCP)) centered around the fovea. Our results show quantitatively that features automatically extracted by convolutional neural networks (CNNs) perform as well as or better than handcrafted ones when used to distinguish glaucoma patients from healthy controls. On the whole-retina and SVP projections, CNNs outperform the handcrafted features presented in the literature. The area under the receiver operating characteristic curve (AUROC) on the SVP projection is 0.967, which is comparable to the best values reported in the literature. This is achieved despite using the small 3 × 3 mm field of view, which has been reported as disadvantageous for handcrafted vessel-density features in previous works. A detailed analysis of our CNN method using attention maps suggests that this performance increase can be partially explained by the CNN automatically relying more on areas of higher relevance for feature extraction.
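
As an illustration only, a compact sketch of how such a CNN-based classification and its AUROC evaluation could be set up; the architecture below is a generic placeholder and not the network used in the study.

```python
# Hedged sketch: a small CNN classifier for 2D en face OCTA projections and
# AUROC evaluation. The architecture is a generic placeholder.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class SmallOctaCnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)   # glaucoma vs. healthy logit

    def forward(self, x):                    # x: (B, 1, H, W)
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

# Evaluation on a held-out set (loader yields image batches and 0/1 labels):
# scores, labels = [], []
# for x, y in val_loader:
#     scores.append(torch.sigmoid(model(x)).detach())
#     labels.append(y)
# auroc = roc_auc_score(torch.cat(labels).numpy(), torch.cat(scores).numpy())
```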

4.
Sci Rep ; 9(1): 18814, 2019 12 11.
Article in English | MEDLINE | ID: mdl-31827155

ABSTRACT

Hybrid X-ray and magnetic resonance (MR) imaging holds great potential for interventional medical imaging applications, combining the broad variety of MRI contrasts with the fast imaging of X-ray-based modalities. To fully utilize the vast amount of existing image-enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from the other is in this case an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To address these challenges, we present a learning-based solution for MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, we propose a weighting scheme in the loss computation that favors high-frequency structures, focusing on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with sharp appearance and reduces erroneously synthesized fine details.
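
A possible reading of the described loss weighting, sketched under the assumption that high-frequency structures are marked by a Laplacian high-pass response of the target projection; the exact weighting used in the paper may differ.

```python
# Hedged sketch of a loss that up-weights high-frequency structures, in the
# spirit of the weighting scheme described above. The Laplacian-based weight
# and the mixing factor alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def high_frequency_weighted_l1(pred: torch.Tensor, target: torch.Tensor,
                               alpha: float = 4.0) -> torch.Tensor:
    """pred, target: (B, 1, H, W) synthesized and reference projections."""
    # High-pass response of the target marks edges and fine contours.
    hf = F.conv2d(target, LAPLACIAN.to(target.device), padding=1).abs()
    weight = 1.0 + alpha * hf / (hf.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return (weight * (pred - target).abs()).mean()
```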

5.
Med Phys ; 46(12): e810-e822, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811794

ABSTRACT

BACKGROUND: The beam-hardening effect is a typical source of artifacts in X-ray cone-beam computed tomography (CBCT). It causes streaks in reconstructions and corrupted Hounsfield units toward the center of objects, widely known as cupping artifacts. PURPOSE: We present a novel, efficient projection-data-based method for the reduction of beam-hardening artifacts that incorporates physical constraints on the shape of the compensation functions. The method is calibration-free and requires no additional knowledge of the scanning setup. METHODS: The mathematical model of the beam-hardening effect caused by a single material is analyzed. We show that the beam-hardened line integral measurements are monotonic and concave functions of the ideal data. This holds irrespective of any limiting assumptions on the energy dependency of the material, the detector response, or the properties of the X-ray source. A regression model for the beam-hardening effect respecting these theoretical restrictions is proposed. Subsequently, we present an efficient method to estimate the parameters of this model directly in the projection domain using an epipolar consistency condition. Computational efficiency is achieved by exploiting the linearity of an intermediate function in the formulation of our optimization problem. RESULTS: Our evaluation shows that the proposed physically constrained ECC2 algorithm is effective even in challenging measured-data scenarios with additional sources of inconsistency. CONCLUSIONS: The combination of a mathematical consistency condition and a compensation model based on the properties of X-ray physics enables us to improve the image quality of measured data retrospectively and to decrease the need for calibration in a data-driven manner.
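
To make the compensation model concrete, a small sketch of a polynomial correction of the measured line integrals together with a monotonicity check; the coefficients are made up, and the actual method estimates them in the projection domain via the epipolar consistency condition, which is not shown here.

```python
# Hedged sketch: polynomial compensation of beam-hardened line integrals with
# a simple monotonicity check on the correction function. Coefficients are
# illustrative only; the paper estimates them via epipolar consistency.
import numpy as np

def compensate(measured: np.ndarray, coeffs) -> np.ndarray:
    """Map measured (beam-hardened) line integrals q to corrected values
    p = c1*q + c2*q^2 + ... ; measured: array of line integrals."""
    q = np.asarray(measured, dtype=float)
    return sum(c * q ** (k + 1) for k, c in enumerate(coeffs))

def is_monotone_increasing(coeffs, q_max: float, n: int = 1000) -> bool:
    """Check the derivative of the compensation on [0, q_max]."""
    q = np.linspace(0.0, q_max, n)
    deriv = sum((k + 1) * c * q ** k for k, c in enumerate(coeffs))
    return bool(np.all(deriv > 0))

coeffs = [1.0, 0.15]                    # illustrative values only
q = np.linspace(0.0, 3.0, 5)
print(compensate(q, coeffs), is_monotone_increasing(coeffs, 3.0))
```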


Subject(s)
Cone-Beam Computed Tomography, Computer-Assisted Image Processing/methods, Artifacts, Theoretical Models
6.
Nat Mach Intell ; 1(8): 373-380, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31406960

ABSTRACT

We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient with respect to its inputs is suited for our framework. We derive a maximal error bound for deep networks that demonstrates that the inclusion of prior knowledge reduces this bound. Furthermore, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks, ranging from CT image reconstruction and vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable to researchers in physics, imaging, and signal processing. We anticipate that our analysis will support further investigation of known operators in other fields of physics, imaging, and signal processing.
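
A minimal sketch of the known-operator idea in PyTorch, assuming the known operation is a fixed linear filter: the operator is embedded as a parameter-free module, so gradients flow through it while it contributes no free parameters. The concrete operator and layer sizes are illustrative.

```python
# Hedged sketch of the known-operator idea: a fixed, differentiable operation
# (here an arbitrary fixed linear filter) is embedded between trainable
# layers, so gradients flow through it but it adds no free parameters.
import torch
import torch.nn as nn

class KnownOperator(nn.Module):
    """Wraps a fixed operation; anything with a (sub-)gradient qualifies."""
    def __init__(self, kernel: torch.Tensor):
        super().__init__()
        self.register_buffer("kernel", kernel)   # buffer, not a parameter

    def forward(self, x):
        return torch.nn.functional.conv1d(x, self.kernel, padding=1)

model = nn.Sequential(
    nn.Conv1d(1, 1, kernel_size=3, padding=1),          # trainable
    KnownOperator(torch.tensor([[[1., -2., 1.]]])),      # known, fixed
    nn.Conv1d(1, 1, kernel_size=3, padding=1),           # trainable
)
x = torch.randn(2, 1, 32, requires_grad=True)
model(x).sum().backward()                                # gradients pass through
```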

7.
IEEE Trans Med Imaging ; 37(6): 1454-1463, 2018 06.
Article in English | MEDLINE | ID: mdl-29870373

ABSTRACT

In this paper, we present a new deep learning framework for 3-D tomographic reconstruction. To this end, we map filtered back-projection-type algorithms to neural networks. However, the back-projection cannot be implemented as a fully connected layer due to its memory requirements. To overcome this problem, we propose a new type of cone-beam back-projection layer that efficiently calculates the forward pass. We derive this layer's backward pass as a projection operation. Unlike most deep learning approaches for reconstruction, our new layer permits joint optimization of correction steps in the volume and projection domains. Evaluation is performed numerically on a public data set in a limited-angle setting, showing a consistent improvement over analytical algorithms while keeping the same computational test-time complexity by design. In the region of interest, the peak signal-to-noise ratio has increased by 23%. In addition, we show that the learned algorithm can be interpreted using known concepts from cone-beam reconstruction: the network is able to automatically learn strategies such as compensation weights and apodization windows.
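
A back-of-the-envelope calculation supporting the memory argument above: a fully connected layer linking every detector pixel of every projection to every voxel would require a prohibitively large weight matrix (the sizes below are illustrative).

```python
# Hedged back-of-the-envelope check of why back-projection cannot be realized
# as a fully connected layer: the dense weight matrix linking every detector
# pixel to every voxel is far too large. Sizes below are illustrative.
n_angles, det = 720, 512 * 512          # projections: 720 views of 512x512
volume = 512 ** 3                       # reconstruction volume: 512^3 voxels
weights = n_angles * det * volume       # dense matrix entries
bytes_fp32 = weights * 4
print(f"{weights:.2e} weights ~ {bytes_fp32 / 1e15:.0f} PB in float32")
# -> on the order of 10^16 weights, i.e. ~100 petabytes, hence the need for a
#    dedicated back-projection layer with an analytically known adjoint.
```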


Subject(s)
Deep Learning, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Algorithms, Humans
8.
Stud Health Technol Inform ; 243: 202-206, 2017.
Article in English | MEDLINE | ID: mdl-28883201

ABSTRACT

The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF, a non-steady-state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, this comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps, we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle, we show that the neural network implicitly encodes the dictionary and can replace the matching process.
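
For reference, a sketch of the conventional dictionary matching that the CNN is intended to replace, using normalized inner products between measured fingerprints and simulated dictionary entries; array shapes and parameter names are illustrative assumptions.

```python
# Hedged sketch of conventional MRF dictionary matching: each measured
# fingerprint is matched to the simulated dictionary entry with the highest
# normalized inner product. Shapes and parameter names are illustrative.
import numpy as np

def match_fingerprints(signals: np.ndarray, dictionary: np.ndarray,
                       params: np.ndarray) -> np.ndarray:
    """signals: (n_voxels, T) measured fingerprints,
    dictionary: (n_entries, T) simulated signals,
    params: (n_entries, 2) e.g. (T1, T2) per dictionary entry."""
    s = signals / (np.linalg.norm(signals, axis=1, keepdims=True) + 1e-12)
    d = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    best = np.argmax(np.abs(s @ d.T), axis=1)     # best-matching entry per voxel
    return params[best]                           # quantitative map values

# A CNN replacement would instead regress params directly from `signals`,
# trained on (dictionary, params) pairs, avoiding the per-voxel search over
# all dictionary entries at inference time.
```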


Subject(s)
Machine Learning, Magnetic Resonance Spectroscopy, Neural Networks (Computer), Algorithms, Brain, Magnetic Resonance Imaging, Theoretical Models, Automated Pattern Recognition, Computer-Assisted Signal Processing
9.
Comput Methods Programs Biomed ; 137: 321-328, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28110735

ABSTRACT

BACKGROUND AND OBJECTIVES: Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymized data, which is often not possible for preliminary or occasional tasks. We explored the possibility of building a private, flexible cluster for processing R scripts based on commodity, non-dedicated hardware of our department. METHODS: For this, a GWAS calculation in R was run on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster, and the setups were compared with regard to performance, scalability, quality, and simplicity. RESULTS: The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (two weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. CONCLUSION: R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and supports big data analysis using R even on non-dedicated resources with minimal changes to the original code. To unleash the full potential, additional effort should be invested in customizing and improving the algorithms, especially with regard to data distribution.
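
As a loose illustration of the parallelization pattern (in Python/PySpark rather than the study's R/SparkR), the independent per-SNP tests can be distributed as a simple map over partitions; the data layout and the association statistic below are placeholders, not the study's actual GWAS code.

```python
# Hedged sketch (Python/PySpark analogue of the SparkR approach): per-SNP
# association tests are independent, so they can be distributed as a map
# over partitions of the SNP list. All data and the statistic are placeholders.
from pyspark import SparkContext
import numpy as np

sc = SparkContext.getOrCreate()

phenotype = np.random.rand(1000)                        # one value per subject
genotypes = {f"snp_{i}": np.random.randint(0, 3, 1000)  # 0/1/2 allele counts
             for i in range(10_000)}
bc_pheno = sc.broadcast(phenotype)                      # shared across workers

def association(item):
    snp_id, g = item
    # Placeholder statistic: correlation between genotype and phenotype.
    r = float(np.corrcoef(g, bc_pheno.value)[0, 1])
    return snp_id, r

results = sc.parallelize(list(genotypes.items()), numSlices=64) \
            .map(association) \
            .collect()
```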


Subject(s)
Biomedical Research, Cluster Analysis, Computational Methodologies, Genome-Wide Association Study, Humans