Results 1 - 16 of 16
1.
J Imaging ; 9(10)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37888328

ABSTRACT

Our study explores the feasibility of quantum computing in emission tomography reconstruction, addressing a noisy ill-conditioned inverse problem. In current clinical practice, this is typically solved by iterative methods minimizing an L2 norm. After reviewing quantum computing principles, we propose the use of a commercially available quantum annealer and employ corresponding hybrid solvers, which combine quantum and classical computing to handle larger problems. We demonstrate how to frame image reconstruction as a combinatorial optimization problem suited for these quantum annealers and hybrid systems. Using a toy problem, we analyze reconstructions of binary and integer-valued images with respect to image size and compare them to conventional methods. Additionally, we test our method's performance under noise and data underdetermination. In summary, our method demonstrates competitive performance with traditional algorithms for binary images up to an image size of 32×32 on the toy problem, even under noisy and underdetermined conditions. However, scalability challenges emerge as image size and pixel bit range increase, restricting hybrid quantum computing as a practical tool for emission tomography reconstruction until significant advancements are made to address this issue.
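
A minimal sketch of the kind of mapping described above (not the authors' implementation): the least-squares objective ||Ax - b||² over binary pixel values x can be rewritten as a QUBO matrix, which at toy-problem scale can be minimized by exhaustive search; on real hardware the same matrix would be handed to an annealer or hybrid solver. The system matrix and image below are invented for illustration.

```python
import itertools
import numpy as np

def build_qubo(A, b):
    # ||Ax - b||^2 = x^T (A^T A) x - 2 (A^T b)^T x + const; for binary x, x_i^2 = x_i,
    # so the linear term can be folded onto the diagonal of the QUBO matrix.
    Q = (A.T @ A).astype(float)
    Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ b)
    return Q

def brute_force_minimum(Q):
    # Exhaustive search stands in for the annealer at toy-problem scale only.
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0.0, 1.0), repeat=n):
        x = np.asarray(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, 1.0, 1.0])       # flattened 2x2 binary "image"
A = rng.normal(size=(6, 4))                   # toy system (projection) matrix
b = A @ x_true                                # noiseless measurements
print(brute_force_minimum(build_qubo(A, b)))  # recovers x_true in this noiseless toy case
```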

2.
Front Cardiovasc Med ; 10: 1167500, 2023.
Article in English | MEDLINE | ID: mdl-37904806

ABSTRACT

Introduction: As the life expectancy of children with congenital heart disease (CHD) is rapidly increasing and the adult population with CHD is growing, there is an unmet need to improve clinical workflow and efficiency of analysis. Cardiovascular magnetic resonance (CMR) is a noninvasive imaging modality for monitoring patients with CHD. The CMR exam is based on multiple breath-hold 2-dimensional (2D) cine acquisitions that must be precisely prescribed and are expert- and institution-dependent. Moreover, 2D cine images have relatively thick slices, which does not allow for isotropic delineation of ventricular structures. Thus, development of an isotropic 3D cine acquisition and automatic segmentation method is worthwhile to make the CMR workflow straightforward and efficient, which the present work aims to establish. Methods: Ninety-nine patients with many types of CHD were imaged using a non-angulated 3D cine CMR sequence covering the whole heart and great vessels. Automatic supervised and semi-supervised deep-learning-based methods were developed for whole-heart segmentation of 3D cine images to separately delineate the cardiac structures, including both atria, both ventricles, the aorta, the pulmonary arteries, and the superior and inferior venae cavae. The segmentation results derived from the two methods were compared with manual segmentation in terms of Dice score, a measure of overlap agreement, and atrial and ventricular volume measurements. Results: The semi-supervised method resulted in better overlap agreement with the manual segmentation than the supervised method for all 8 structures (Dice score 83.23 ± 16.76% vs. 77.98 ± 19.64%; P-value ≤0.001). The mean difference error in atrial and ventricular volumetric measurements between manual segmentation and the semi-supervised method was lower (bias ≤ 5.2 ml) than for the supervised method (bias ≤ 10.1 ml). Discussion: The proposed semi-supervised method is capable of cardiac segmentation and chamber volume quantification in a CHD population with wide anatomical variability. It accurately delineates the heart chambers and great vessels and can be used to accurately calculate ventricular and atrial volumes throughout the cardiac cycle. Such a segmentation method can reduce inter- and intra-observer variability and make CMR exams more standardized and efficient.
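
For reference, the two evaluation quantities used above are straightforward to compute from label masks. A hedged sketch (not the study's evaluation code), with an invented voxel size and random toy masks:

```python
import numpy as np

def dice_score(pred, ref):
    # Dice = 2|P ∩ R| / (|P| + |R|), computed on boolean masks of one structure.
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def volume_ml(mask, voxel_volume_mm3):
    # Chamber volume from a segmentation mask; 1 ml = 1000 mm^3.
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Illustrative values only: 1.5 mm isotropic voxels, random toy masks.
rng = np.random.default_rng(1)
auto = rng.random((64, 64, 64)) > 0.5
manual = rng.random((64, 64, 64)) > 0.5
print(dice_score(auto, manual))
print(volume_ml(auto, 1.5 ** 3) - volume_ml(manual, 1.5 ** 3))  # volume bias in ml
```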

3.
Adv Sci (Weinh) ; 10(28): e2206319, 2023 10.
Article in English | MEDLINE | ID: mdl-37582656

ABSTRACT

Deep learning (DL) shows notable success in biomedical studies. However, most DL algorithms work as black boxes, exclude biomedical experts, and need extensive data. This is especially problematic for fundamental research in the laboratory, where often only small and sparse data are available and the objective is knowledge discovery rather than automation. Furthermore, basic research is usually hypothesis-driven and extensive prior knowledge (priors) exists. To address this, Self-Enhancing Multi-Photon Artificial Intelligence (SEMPAI), designed for multiphoton microscopy (MPM)-based laboratory research, is presented. It utilizes meta-learning to optimize prior (and hypothesis) integration, data representation, and neural network architecture simultaneously. In this way, the method allows hypothesis testing with DL and provides interpretable feedback about the origin of biological information in 3D images. SEMPAI performs multi-task learning of several related tasks to enable prediction for small datasets. SEMPAI is applied to an extensive MPM database of single muscle fibers from a decade of experiments, resulting in the largest joint analysis of pathologies and function for single muscle fibers to date. It outperforms state-of-the-art biomarkers in six of seven prediction tasks, including those with scarce data. SEMPAI's DL models with integrated priors are superior to those without priors and to prior-only approaches.


Subject(s)
Artificial Intelligence , Deep Learning , Neural Networks, Computer , Algorithms , Muscles
4.
Biomed Opt Express ; 12(1): 125-146, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33520381

ABSTRACT

We describe a novel method for non-rigid 3-D motion correction of orthogonally raster-scanned optical coherence tomography angiography volumes. This is the first approach that aligns predominantly axial structural features, such as retinal layers, as well as transverse angiographic vascular features in a joint optimization. Combined with orthogonal scanning and a preference for kinematically more plausible displacements, subpixel alignment and micrometer-scale distortion correction are achieved in all 3 dimensions. As no specific structures are segmented, the method is by design robust to pathologic changes. Furthermore, the method is designed for highly parallel implementation and short runtime, allowing its integration into clinical workflow even for high-density or wide-field scans. We evaluated the algorithm with metrics related to clinically relevant features in an extensive quantitative evaluation based on 204 volumetric scans of 17 subjects, including patients with diverse pathologies and healthy controls. Using this method, we achieve state-of-the-art axial motion correction and show significant advances in both transverse co-alignment and distortion correction, especially in the subgroup with pathology.
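
The idea of preferring kinematically plausible displacements can be illustrated with a one-dimensional toy: noisy per-position displacement estimates are pulled toward a smooth field by penalizing differences between neighbouring displacements. This is only a schematic analogue of the joint 3-D optimization described above, and the data are synthetic.

```python
import numpy as np

def smooth_displacements(d_obs, lam=10.0):
    # Solve min_d ||d - d_obs||^2 + lam * ||D d||^2 with D the first-difference
    # operator, i.e. (I + lam * D^T D) d = d_obs. Larger lam favours smoother motion.
    n = d_obs.size
    D = np.diff(np.eye(n), axis=0)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, d_obs)

rng = np.random.default_rng(2)
true_motion = np.sin(np.linspace(0, np.pi, 200))          # slowly varying "drift"
noisy_estimates = true_motion + 0.3 * rng.normal(size=200)
print(np.abs(smooth_displacements(noisy_estimates) - true_motion).mean())
```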

5.
Biomed Opt Express ; 12(1): 84-99, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33520378

ABSTRACT

In this paper we present a fully automated graph-based segmentation algorithm that jointly uses optical coherence tomography (OCT) and OCT angiography (OCTA) data to segment Bruch's membrane (BM). This is especially valuable in cases where the spatial correlation between BM, which is usually not visible on OCT scans, and the retinal pigment epithelium (RPE), which is often used as a surrogate for segmenting BM, is distorted by pathology. We validated the performance of our proposed algorithm against manual segmentation in a total of 18 eyes from healthy controls and patients with diabetic retinopathy (DR), non-exudative age-related macular degeneration (AMD) (early/intermediate AMD, nascent geographic atrophy (nGA) and drusen-associated geographic atrophy (DAGA) and geographic atrophy (GA)), and choroidal neovascularization (CNV) with a mean absolute error of ∼0.91 pixel (∼4.1 µm). This paper suggests that OCT-OCTA segmentation may be a useful framework to complement the growing usage of OCTA in ophthalmic research and clinical communities.
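
As a generic illustration of graph-style layer segmentation (not the authors' joint OCT-OCTA algorithm), a minimum-cost boundary can be traced column by column with dynamic programming, constraining the boundary to move at most one pixel between adjacent A-scans. The cost image below is synthetic.

```python
import numpy as np

def trace_layer(cost):
    # cost[r, c]: penalty for placing the boundary at row r in column c.
    n_rows, n_cols = cost.shape
    dp = np.full(cost.shape, np.inf)
    back = np.zeros(cost.shape, dtype=int)
    dp[:, 0] = cost[:, 0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - 1), min(n_rows, r + 2)   # allow +/- 1 pixel per column
            prev = lo + np.argmin(dp[lo:hi, c - 1])
            dp[r, c] = cost[r, c] + dp[prev, c - 1]
            back[r, c] = prev
    path = [int(np.argmin(dp[:, -1]))]
    for c in range(n_cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return np.array(path[::-1])                          # boundary row per column

# Synthetic cost image: a dark band near row 20 marks the sought boundary.
rr, cc = np.meshgrid(np.arange(64), np.arange(128), indexing="ij")
cost = np.abs(rr - (20 + 3 * np.sin(cc / 10.0)))
print(trace_layer(cost)[:10])
```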

6.
Med Phys ; 46(12): e810-e822, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31811794

ABSTRACT

BACKGROUND: The beam hardening effect is a typical source of artifacts in x-ray cone beam computed tomography (CBCT). It causes streaks in reconstructions and corrupted Hounsfield units toward the center of objects, widely known as cupping artifacts. PURPOSE: We present a novel, efficient, projection-data-based method for the reduction of beam-hardening artifacts that incorporates physical constraints on the shape of the compensation functions. The method is calibration-free and requires no additional knowledge of the scanning setup. METHOD: The mathematical model of the beam hardening effect caused by a single material is analyzed. We show that the measured line integrals are monotonic and concave functions of the ideal data. This holds irrespective of any limiting assumptions on the energy dependency of the material, the detector response, or the properties of the x-ray source. A regression model for the beam hardening effect respecting these theoretical restrictions is proposed. Subsequently, we present an efficient method to estimate the parameters of this model directly in the projection domain using an epipolar consistency condition. Computational efficiency is achieved by exploiting the linearity of an intermediate function in the formulation of our optimization problem. RESULTS: Our evaluation shows that the proposed physically constrained ECC2 algorithm is effective even in challenging measured-data scenarios with additional sources of inconsistency. CONCLUSIONS: The combination of a mathematical consistency condition and a compensation model based on the properties of x-ray physics enables us to improve the image quality of measured data retrospectively and to decrease the need for calibration in a data-driven manner.
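
A classical single-material linearization illustrates the kind of monotonic, concave compensation function discussed above. This sketch calibrates a polynomial from simulated polychromatic data rather than estimating it calibration-free from consistency conditions as the paper proposes; the spectrum and attenuation values are invented for illustration.

```python
import numpy as np

# Toy polychromatic model: three energy bins with weights w and water-like
# attenuation coefficients mu (1/cm). Values are illustrative only.
w = np.array([0.3, 0.5, 0.2])
mu = np.array([0.35, 0.20, 0.15])

thickness = np.linspace(0.0, 40.0, 200)                       # path length in cm
ideal = 0.20 * thickness                                      # monochromatic line integrals
measured = -np.log(np.sum(w * np.exp(-np.outer(thickness, mu)), axis=1))

# The measured line integral is a monotonic, concave function of the ideal one;
# fit a low-order polynomial q_ideal ≈ p(q_measured) as the compensation function.
coeffs = np.polyfit(measured, ideal, deg=3)
corrected = np.polyval(coeffs, measured)
print(np.abs(corrected - ideal).max())                        # residual after compensation
```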


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Artifacts , Models, Theoretical
7.
Nat Mach Intell ; 1(8): 373-380, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31406960

ABSTRACT

We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient with respect to its inputs is suited for our framework. We derive a maximal error bound for deep nets demonstrating that the inclusion of prior knowledge reduces this bound. Furthermore, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks ranging from CT image reconstruction and vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable to many researchers in physics, imaging, and signal processing. We expect that our analysis will support further investigation of known operators in other fields of physics, imaging, and signal processing.
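
A minimal numerical sketch of the known-operator idea under stated assumptions (a toy fixed linear operator K and plain gradient descent; this is not the paper's derivation or code): K is kept fixed inside the pipeline, contributing no trainable parameters, while gradients still flow through it to the surrounding trainable weights.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
# Fixed "known operator": a toy circulant smoothing filter standing in for, e.g.,
# a fixed filtering or projection step; it has no trainable parameters.
K = (np.eye(n) + np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)) / 3.0
W1 = rng.normal(scale=0.1, size=(n, n))             # trainable pre-processing weights
W2 = rng.normal(scale=0.1, size=(n, n))             # trainable post-processing weights

X = rng.normal(size=(64, n))                        # toy inputs (one sample per row)
T = X @ (np.diag(np.linspace(1.0, 2.0, n)) @ K).T   # toy targets generated through K

lr = 0.05
for _ in range(2000):
    H = X @ W1.T                                    # trainable pre-processing
    Z = H @ K.T                                     # known operator (kept fixed)
    Y = Z @ W2.T                                    # trainable post-processing
    G = 2.0 * (Y - T) / X.shape[0]                  # dLoss/dY for mean squared error
    gW2 = G.T @ Z                                   # gradient w.r.t. W2
    gW1 = (G @ W2 @ K).T @ X                        # gradient flows through the fixed K
    W1 -= lr * gW1
    W2 -= lr * gW2
print(np.mean((X @ W1.T @ K.T @ W2.T - T) ** 2))    # residual training error
```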

8.
Med Phys ; 46(11): 5110-5115, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31389023

ABSTRACT

PURPOSE: Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for mathematically unambiguously solvable problems. METHODS: PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors and back-projectors, accelerated with CUDA and provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. RESULTS: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows simple use of the layers as known from Tensorflow. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 license at https://github.com/csyben/PYRO-NN. CONCLUSIONS: PYRO-NN builds on the prevalent deep learning framework Tensorflow and allows setting up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step toward reproducible research and give the medical physics community a toolkit to elevate medical image reconstruction with new deep learning techniques.


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Software , Tomography, X-Ray Computed
9.
Med Image Anal ; 48: 131-146, 2018 08.
Article in English | MEDLINE | ID: mdl-29913433

ABSTRACT

This paper introduces a universal and structure-preserving regularization term, called the quantile sparse image (QuaSI) prior. The prior is suitable for denoising images from various medical imaging modalities. We demonstrate its effectiveness on volumetric optical coherence tomography (OCT) and computed tomography (CT) data, which show different noise and image characteristics. OCT offers high-resolution scans of the human retina but is inherently impaired by speckle noise. CT, on the other hand, has a lower resolution and shows high-frequency noise. For the purpose of denoising, we propose a variational framework based on the QuaSI prior and a Huber data fidelity model that can handle 3-D and 3-D+t data. Efficient optimization is facilitated through the use of an alternating direction method of multipliers (ADMM) scheme and the linearization of the quantile filter. Experiments on multiple datasets emphasize the excellent performance of the proposed method.
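
A loose sketch of the linearization idea only (not the authors' ADMM formulation, and with a quadratic rather than Huber fidelity for brevity): if the median (0.5-quantile) filter is frozen at the previous iterate, each update has a closed form that blends the noisy data with the filtered image.

```python
import numpy as np
from scipy.ndimage import median_filter

def quasi_like_denoise(noisy, lam=1.0, iters=20, size=3):
    # Fixed-point iteration for 0.5*||x - y||^2 + (lam/2)*||x - M(x_prev)||^2,
    # where the median filter M is kept fixed within each update (linearization).
    x = noisy.copy()
    for _ in range(iters):
        x = (noisy + lam * median_filter(x, size=size)) / (1.0 + lam)
    return x

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                                   # piecewise-constant toy image
noisy = clean + 0.2 * rng.normal(size=clean.shape)
print(np.mean((noisy - clean) ** 2), np.mean((quasi_like_denoise(noisy) - clean) ** 2))
```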


Subject(s)
Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, Optical Coherence/methods , Tomography, X-Ray Computed/methods , Animals , Artifacts , Eye/diagnostic imaging , Eye Diseases/diagnostic imaging , Humans , Signal-To-Noise Ratio , Swine
10.
IEEE Trans Med Imaging ; 37(6): 1454-1463, 2018 06.
Article in English | MEDLINE | ID: mdl-29870373

ABSTRACT

In this paper, we present a new deep learning framework for 3-D tomographic reconstruction. To this end, we map filtered back-projection-type algorithms to neural networks. However, the back-projection cannot be implemented as a fully connected layer due to its memory requirements. To overcome this problem, we propose a new type of cone-beam back-projection layer, efficiently calculating the forward pass. We derive this layer's backward pass as a projection operation. Unlike most deep learning approaches for reconstruction, our new layer permits joint optimization of correction steps in volume and projection domain. Evaluation is performed numerically on a public data set in a limited angle setting showing a consistent improvement over analytical algorithms while keeping the same computational test-time complexity by design. In the region of interest, the peak signal-to-noise ratio has increased by 23%. In addition, we show that the learned algorithm can be interpreted using known concepts from cone beam reconstruction: the network is able to automatically learn strategies such as compensation weights and apodization windows.
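
The statement that this layer's backward pass is a projection operation has a simple linear-algebra reading: for a discretized system matrix A, back-projection is A^T, and the gradient of a loss through that layer is obtained by applying A, the adjoint of A^T. A small numpy sketch with a made-up system matrix (not the CUDA cone-beam layer from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 16))             # toy system matrix: 16 voxels, 30 detector bins

def backprojection_forward(p):
    return A.T @ p                        # forward pass of the back-projection "layer"

def backprojection_backward(grad_wrt_output):
    return A @ grad_wrt_output            # backward pass = forward projection (adjoint)

# Adjoint check: <A^T p, v> must equal <p, A v> for the gradient to be consistent.
p, v = rng.normal(size=30), rng.normal(size=16)
print(np.allclose(backprojection_forward(p) @ v, p @ backprojection_backward(v)))  # True
```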


Subject(s)
Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans
11.
IEEE Trans Med Imaging ; 35(11): 2425-2435, 2016 11.
Article in English | MEDLINE | ID: mdl-27295657

ABSTRACT

We propose a data-driven method for extracting a respiratory surrogate signal from SPECT list-mode data. The approach is based on dimensionality reduction with Laplacian Eigenmaps. By setting a scale parameter adaptively and adding a series of post-processing steps to correct polarity and normalization between projections, we enable fully-automatic operation and deliver a respiratory surrogate signal for the entire SPECT acquisition. We validated the method using 67 patient scans from three acquisition types (myocardial perfusion, liver shunt diagnostic, lung inhalation/perfusion) and an Anzai pressure belt as a gold standard. The proposed method achieved a mean correlation against the Anzai of 0.81 ± 0.17 (median 0.89). In a subsequent analysis, we characterize the performance of the method with respect to count rates and describe a predictor for identifying scans with insufficient statistics. To the best of our knowledge, this is the first large validation of a data-driven respiratory signal extraction method published thus far for SPECT, and our results compare well with those reported in the literature for such techniques applied to other modalities such as MR and PET.
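
A compact sketch of the extraction principle, not the authors' pipeline (no adaptive scale selection or polarity/normalization correction across projections): scikit-learn's spectral embedding is applied to synthetic rebinned frames whose counts "breathe" over time, and the one-dimensional embedding serves as the surrogate.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(6)
n_frames, n_pix = 300, 256
phase = np.sin(2 * np.pi * 0.25 * np.arange(n_frames))        # toy breathing signal
frames = rng.poisson(lam=5.0 + np.outer(phase, np.linspace(0, 2, n_pix))).astype(float)

# One-dimensional Laplacian Eigenmaps embedding of the frames acts as the surrogate.
surrogate = SpectralEmbedding(n_components=1, random_state=0).fit_transform(frames).ravel()
surrogate *= np.sign(np.corrcoef(surrogate, phase)[0, 1])      # fix the arbitrary polarity
print(abs(np.corrcoef(surrogate, phase)[0, 1]))                # correlation with the "belt"
```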


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Respiratory Rate/physiology , Signal Processing, Computer-Assisted , Tomography, Emission-Computed, Single-Photon/methods , Area Under Curve , Humans , Respiration
12.
IEEE Trans Med Imaging ; 34(11): 2205-19, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25915956

ABSTRACT

This paper presents the derivation of the Epipolar Consistency Conditions (ECC) between two X-ray images from the Beer-Lambert law of X-ray attenuation and the Epipolar Geometry of two pinhole cameras, using Grangeat's theorem. We motivate the use of Oriented Projective Geometry to express redundant line integrals in projection images and define a consistency metric, which can be used, for instance, to estimate patient motion directly from a set of X-ray images. We describe in detail the mathematical tools to implement an algorithm to compute the Epipolar Consistency Metric and investigate its properties with random studies on both artificial and real FD-CT data. A set of six reference projections of the CT scan of a fish was used to evaluate the accuracy and precision of compensating for random disturbances of the ground-truth projection matrix using an optimization of the consistency metric. In addition, we use three X-ray images of a pumpkin to prove applicability to real data. We conclude that the metric might have potential in applications related to the estimation of projection geometry. By expressing redundancy between two arbitrary projection views, we in fact support any device or acquisition trajectory that uses a cone-beam geometry. We discuss certain geometric situations where the ECC provide the ability to correct 3D motion without the need for 3D reconstruction.


Subject(s)
Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods , Algorithms , Animals , Fishes , Models, Theoretical , Movement/physiology , Phantoms, Imaging
13.
Phys Med Biol ; 59(16): 4505-24, 2014 Aug 21.
Article in English | MEDLINE | ID: mdl-25069101

ABSTRACT

Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement, with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate more slowly than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high-speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which propagate into the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information, in the form of total variation and time-curve analysis, to detect and remove streaks. The novel approach is evaluated on a numerical brain phantom and in a patient study. It achieves improved noise and artefact reduction compared to existing post-processing methods and faster computation than an algebraic reconstruction method.


Subject(s)
Artifacts , Perfusion Imaging/methods , Radiographic Image Enhancement/methods , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Aged , Algorithms , Brain/blood supply , Brain/diagnostic imaging , Female , Humans , Male , Movement , Phantoms, Imaging , Rotation , Stroke/diagnostic imaging , Stroke/physiopathology , Time Factors
14.
Phys Med Biol ; 59(9): 2265-84, 2014 May 07.
Article in English | MEDLINE | ID: mdl-24731942

ABSTRACT

Today, quantitative analysis of three-dimensional (3D) dynamics of the left ventricle (LV) cannot be performed directly in the catheter lab using a current angiographic C-arm system, which is the workhorse imaging modality for cardiac interventions. Therefore, myocardial wall analysis is completely based on the 2D angiographic images or on pre-interventional 3D/4D imaging. In this paper, we present a complete framework to study ventricular wall motion in 4D (3D+t) directly in the catheter lab. From the acquired 2D projection images, a dynamic 3D surface model of the LV is generated, which is then used to detect ventricular dyssynchrony. Different quantitative features to evaluate LV dynamics known from other modalities (ultrasound, magnetic resonance imaging) are transferred to the C-arm CT data. We use the ejection fraction, the systolic dyssynchrony index, a 3D fractional shortening, and the phase to maximal contraction (ϕi,max) to determine an indicator of LV dyssynchrony and to discriminate regionally pathological from normal myocardium. The proposed analysis tool was evaluated on simulated phantom LV data with and without pathological wall dysfunctions. The LV data used are publicly available online at https://conrad.stanford.edu/data/heart. In addition, the presented framework was tested on eight clinical patient data sets. The first clinical results demonstrate promising performance of the proposed analysis tool and encourage the application of the presented framework to a larger study in clinical practice.
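
Two of the indices mentioned above have simple definitions once time-resolved volumes are available. A hedged sketch with invented per-segment volume curves; the SDI here follows the common definition as the standard deviation of the times to minimum regional volume, expressed as a percentage of the cardiac cycle, which may differ in detail from the paper's formulation.

```python
import numpy as np

def ejection_fraction(global_volume):
    # EF = (EDV - ESV) / EDV, in percent, from the global volume curve over one cycle.
    edv, esv = global_volume.max(), global_volume.min()
    return 100.0 * (edv - esv) / edv

def systolic_dyssynchrony_index(segment_volumes):
    # segment_volumes: array (n_segments, n_phases) over one cardiac cycle.
    n_phases = segment_volumes.shape[1]
    t_min = segment_volumes.argmin(axis=1) / n_phases        # phase of maximal contraction
    return 100.0 * t_min.std()                               # spread as % of the cycle

# Toy data: 16 segments contracting with slightly shifted phases over 20 frames.
phases = np.linspace(0, 2 * np.pi, 20, endpoint=False)
shifts = np.linspace(0, 0.6, 16)
segments = 5.0 + 2.0 * np.cos(phases[None, :] - shifts[:, None])
print(ejection_fraction(segments.sum(axis=0)), systolic_dyssynchrony_index(segments))
```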


Subject(s)
Cone-Beam Computed Tomography/methods , Heart Ventricles/diagnostic imaging , Movement , Humans , Phantoms, Imaging
15.
IEEE Trans Med Imaging ; 32(7): 1336-48, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23568497

ABSTRACT

Tissue perfusion measurement using C-arm angiography systems capable of CT-like imaging (C-arm CT) is a novel technique with potentially high benefit for catheter guided treatment of stroke in the interventional suite. However, perfusion C-arm CT (PCCT) is challenging: the slow C-arm rotation speed only allows measuring samples of contrast time attenuation curves (TACs) every 5-6 s if reconstruction algorithms for static data are used. Furthermore, the peak values of the TACs in brain tissue typically lie in a range of 5-30 HU, thus perfusion imaging is very sensitive to noise. We present a dynamic, iterative reconstruction (DIR) approach to reconstruct TACs described by a weighted sum of basis functions. To reduce noise, a regularization technique based on joint bilateral filtering (JBF) is introduced. We evaluated the algorithm with a digital dynamic brain phantom and with data from six canine stroke models. With our dynamic approach, we achieve an average Pearson correlation (PC) of the PCCT canine blood flow maps to co-registered perfusion CT maps of 0.73. This PC is just as high as the PC achieved in a recent PCCT study, which required repeated injections and acquisitions.
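
The representation of a TAC as a weighted sum of basis functions can be illustrated with a small least-squares fit. The gamma-variate shapes and the sampling pattern below are illustrative assumptions, not the paper's basis or acquisition scheme.

```python
import numpy as np

def gamma_variate(t, t0, alpha, beta):
    # Classic gamma-variate bolus shape, zero before the arrival time t0.
    s = np.clip(t - t0, 0.0, None)
    return s ** alpha * np.exp(-s / beta)

t_dense = np.linspace(0, 50, 500)                    # "true" time axis (s)
true_tac = 12.0 * gamma_variate(t_dense, 5.0, 2.0, 3.0)

# Sparse, noisy samples, mimicking what a slowly rotating C-arm delivers.
t_meas = np.arange(2.0, 50.0, 5.0)
rng = np.random.default_rng(7)
samples = 12.0 * gamma_variate(t_meas, 5.0, 2.0, 3.0) + rng.normal(scale=1.0, size=t_meas.size)

# Fit weights for a small dictionary of time-shifted gamma-variate basis functions.
t0s = np.arange(0.0, 20.0, 4.0)
B = np.stack([gamma_variate(t_meas, t0, 2.0, 3.0) for t0 in t0s], axis=1)
w, *_ = np.linalg.lstsq(B, samples, rcond=None)
recon = np.stack([gamma_variate(t_dense, t0, 2.0, 3.0) for t0 in t0s], axis=1) @ w
print(np.corrcoef(recon, true_tac)[0, 1])            # Pearson correlation with ground truth
```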


Subject(s)
Four-Dimensional Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Perfusion Imaging/methods , Algorithms , Animals , Brain/anatomy & histology , Dogs , Humans , Neuroimaging/methods , Phantoms, Imaging , Reproducibility of Results , Stroke/pathology
16.
Med Phys ; 40(3): 031107, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23464287

ABSTRACT

PURPOSE: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. METHODS: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. RESULTS: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈ 99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. CONCLUSIONS: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
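
As a reference for the simplest of the four interpolators compared above, Shepard's method (inverse-distance weighting) can densify a sparse motion vector field in a few lines; for thin-plate splines one would typically use a radial-basis-function interpolator instead. The control points and displacement vectors below are synthetic.

```python
import numpy as np

def shepard_interpolate(control_pts, control_vecs, query_pts, power=2.0, eps=1e-9):
    # Dense MVF value at each query point: inverse-distance-weighted average of the
    # sparse control-point displacement vectors (Shepard's method).
    d = np.linalg.norm(query_pts[:, None, :] - control_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ control_vecs

rng = np.random.default_rng(8)
control_pts = rng.uniform(0, 64, size=(30, 3))         # sparse surface-model control points
control_vecs = rng.normal(scale=2.0, size=(30, 3))     # their motion vectors (mm)

# Densify onto a coarse voxel grid (flattened to N x 3 query points).
grid = np.stack(np.meshgrid(*[np.arange(0, 64, 8)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
dense_mvf = shepard_interpolate(control_pts, control_vecs, grid.astype(float))
print(dense_mvf.shape)                                 # (512, 3): one vector per grid point
```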


Subject(s)
Coronary Angiography/methods , Heart/physiology , Imaging, Three-Dimensional/methods , Movement , Rotation , Tomography/methods , Animals , Heart Ventricles , Humans , Phantoms, Imaging , Surface Properties , Swine