ABSTRACT
Acute ischemic stroke (AIS) is a leading global cause of mortality and morbidity. Improving long-term outcome predictions after thrombectomy can enhance treatment quality by supporting clinical decision-making. With the advent of interpretable deep learning methods in recent years, it is now possible to develop trustworthy, high-performing prediction models. This study introduces an uncertainty-aware graph deep learning model that predicts endovascular thrombectomy outcomes using clinical features and imaging biomarkers. The model targets long-term functional outcome, defined by the three-month modified Rankin Score (mRS), and mortality. A sample of 220 AIS patients in the anterior circulation who underwent endovascular thrombectomy (EVT) was included, with 81 (37%) demonstrating good outcomes (mRS ≤ 2). The performance of the different algorithms evaluated was comparable, with the maximum validation area under the curve (AUC) reaching 0.87 using graph convolutional networks (GCN) for mRS prediction and 0.86 using fully connected networks (FCN) for mortality prediction. Moderate performance was obtained at admission (AUC of 0.76 using GCN), which improved to 0.84 post-thrombectomy and to 0.89 a day after stroke. The model was also shown to provide reliable uncertainty estimates.
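To illustrate the graph-based prediction idea, the sketch below shows a minimal two-layer graph convolutional network over a patient-similarity graph in plain PyTorch. The graph construction, feature count, and layer sizes are illustrative assumptions, not the published model.

```python
# Minimal GCN sketch (assumed setup, not the published model):
# patients are nodes, edges connect clinically similar patients,
# and the network predicts a binary outcome (e.g., mRS <= 2).
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 (Kipf & Welling style)."""
    adj = adj + torch.eye(adj.size(0))
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ adj @ d_inv_sqrt

class SimpleGCN(nn.Module):
    def __init__(self, in_features: int, hidden: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        x = torch.relu(adj_norm @ self.fc1(x))   # neighborhood aggregation
        return adj_norm @ self.fc2(x)            # one logit per patient

# Toy usage: 220 patients, 10 clinical/imaging features, random undirected graph.
x = torch.randn(220, 10)
adj = (torch.rand(220, 220) > 0.95).float()
adj = ((adj + adj.T) > 0).float()
logits = SimpleGCN(10)(x, normalize_adjacency(adj))
print(logits.shape)  # torch.Size([220, 1])
```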
Subject(s)
Deep Learning, Ischemic Stroke, Humans, Uncertainty, Algorithms, Thrombectomy
ABSTRACT
(1) Background: to test the diagnostic performance of a fully convolutional neural network-based software prototype for clot detection in intracranial arteries using non-enhanced computed tomography (NECT) imaging data. (2) Methods: we retrospectively identified 85 patients with stroke imaging and one intracranial vessel occlusion. An automated clot detection prototype computed clot location, clot length, and clot volume in NECT scans. Clot detection rates were compared to the visual assessment of the hyperdense artery sign by two neuroradiologists. CT angiography (CTA) was used as the ground truth. Additionally, NIHSS, ASPECTS, type of therapy, and TOAST were recorded to assess the relationship between clinical parameters, image results, and chosen therapy. (3) Results: the overall detection rate of the software was 66%, while the human readers had lower rates of 46% and 24%, respectively. Clot detection rates of the automated software were best in the proximal middle cerebral artery (MCA) and the intracranial carotid artery (ICA) with 88-92%, followed by the more distal MCA and basilar artery with 67-69%. There was a high correlation between greater clot length and interventional thrombectomy, and between smaller clot length and conservative treatment. (4) Conclusions: the automated clot detection prototype has the potential to detect intracranial arterial thromboembolism in NECT images, particularly in the ICA and MCA. Thus, it could support radiologists in emergency settings to speed up the diagnosis of acute ischemic stroke, especially in settings where CTA is not available.
ABSTRACT
Deep learning (DL) shows notable success in biomedical studies. However, most DL algorithms work as black boxes, exclude biomedical experts, and need extensive data. This is especially problematic for fundamental research in the laboratory, where often only small and sparse data are available and the objective is knowledge discovery rather than automation. Furthermore, basic research is usually hypothesis-driven and extensive prior knowledge (priors) exists. To address this, the Self-Enhancing Multi-Photon Artificial Intelligence (SEMPAI), designed for multiphoton microscopy (MPM)-based laboratory research, is presented. It utilizes meta-learning to optimize prior (and hypothesis) integration, data representation, and neural network architecture simultaneously. In this way, the method allows hypothesis testing with DL and provides interpretable feedback about the origin of biological information in 3D images. SEMPAI performs multi-task learning of several related tasks to enable prediction for small datasets. SEMPAI is applied to an extensive MPM database of single muscle fibers from a decade of experiments, resulting in the largest joint analysis of pathologies and function for single muscle fibers to date. It outperforms state-of-the-art biomarkers in six of seven prediction tasks, including those with scarce data. SEMPAI's DL models with integrated priors are superior to those without priors and to prior-only approaches.
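The joint optimization over prior integration, data representation, and architecture can be approached with a generic hyperparameter-search framework. The sketch below uses Optuna as a stand-in for such a search; the search space and the `train_and_validate` helper are hypothetical placeholders, not SEMPAI itself.

```python
# Hedged sketch of a joint search over priors, representation, and architecture,
# in the spirit of the meta-learning described above. `train_and_validate` is a
# hypothetical helper standing in for model training; the search space is illustrative.
import optuna

def train_and_validate(use_prior: bool, representation: str, n_layers: int) -> float:
    # Placeholder: would train a small network under this configuration
    # and return a cross-validated score on the laboratory dataset.
    return 0.5 + 0.1 * use_prior + 0.05 * n_layers - 0.02 * (representation == "2d")

def objective(trial: optuna.Trial) -> float:
    use_prior = trial.suggest_categorical("use_prior", [True, False])
    representation = trial.suggest_categorical("representation", ["2d", "3d"])
    n_layers = trial.suggest_int("n_layers", 1, 4)
    return train_and_validate(use_prior, representation, n_layers)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params)
```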
Subject(s)
Artificial Intelligence, Deep Learning, Neural Networks (Computer), Algorithms, Muscles
ABSTRACT
Recently, algorithms capable of assessing the severity of Coronary Artery Disease (CAD) in the form of the Coronary Artery Disease-Reporting and Data System (CAD-RADS) grade from Coronary Computed Tomography Angiography (CCTA) scans using Deep Learning (DL) were proposed. Before these algorithms can be considered for clinical practice, their robustness regarding different commonly used Computed Tomography (CT)-specific image formation parameters, including denoising strength, slab combination, and reconstruction kernel, needs to be evaluated. For this study, we reconstructed a data set of 500 patient CCTA scans under seven image formation parameter configurations. We select one default configuration and evaluate how varying individual parameters impacts the performance and stability of a typical algorithm for automated CAD assessment from CCTA. This algorithm consists of multiple preprocessing steps and a DL prediction step. We evaluate the influence of the parameter changes on the entire pipeline and additionally on only the DL step by propagating the centerline extraction results of the default configuration to all others. We consider the standard deviation of the CAD severity prediction grade difference between the default and variation configurations to assess the stability w.r.t. parameter changes. For the full pipeline we observe slight instability (± 0.226 CAD-RADS) for all variations. Predictions are more stable with centerlines propagated from the default to the variation configurations (± 0.122 CAD-RADS), especially for differing denoising strengths (± 0.046 CAD-RADS). However, stacking slabs with sharp boundaries instead of mixing slabs in overlapping regions (called true stack; ± 0.313 CAD-RADS) and increasing the sharpness of the reconstruction kernel (± 0.150 CAD-RADS) lead to unstable predictions. Regarding the clinically relevant tasks of excluding CAD (called rule-out; AUC default 0.957, min 0.937) and excluding obstructive CAD (called hold-out; AUC default 0.971, min 0.964), the performance remains on a high level for all variations. In conclusion, an influence of reconstruction parameters on the predictions is observed. In particular, scans reconstructed with the true stack parameter need to be treated with caution when using a DL-based method. Also, reconstruction kernels which are underrepresented in the training data increase the prediction uncertainty.
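The stability metric used above, the standard deviation of the CAD-RADS grade difference between the default and a variation configuration, together with a rule-out AUC, can be computed as in the hedged sketch below; the grade arrays are synthetic stand-ins for the study data.

```python
# Hedged sketch of the stability and rule-out metrics described above.
# Grades and labels are synthetic; only the formulas are illustrated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
grades_default = rng.integers(0, 6, size=500)                    # CAD-RADS 0-5
grades_variation = np.clip(grades_default + rng.integers(-1, 2, size=500), 0, 5)

# Stability w.r.t. a parameter change: std of the grade difference.
stability = np.std(grades_variation - grades_default)
print(f"stability: +/- {stability:.3f} CAD-RADS")

# Rule-out task: discriminate CAD-RADS 0 (no CAD) from the rest,
# here using the grade predicted on the variation configuration as the score.
labels_cad_present = (grades_default > 0).astype(int)
auc_rule_out = roc_auc_score(labels_cad_present, grades_variation)
print(f"rule-out AUC: {auc_rule_out:.3f}")
```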
Subject(s)
Coronary Artery Disease, Deep Learning, Humans, Coronary Artery Disease/diagnostic imaging, Coronary Artery Disease/therapy, Coronary Angiography/methods, X-Ray Computed Tomography, Heart, Predictive Value of Tests
ABSTRACT
PURPOSE: Vessel labeling is a prerequisite for comparing cerebral vasculature across patients, e.g., for straightened vessel examination or for localization. Extracting vessels from computed tomography angiography scans may come with a trade-off in segmentation accuracy. Vessels might be neglected or artificially created, increasing the difficulty of labeling. Related work mainly focuses on magnetic resonance angiography without stroke and uses trainable approaches requiring costly labels. METHODS: We present a robust method to identify major arteries and bifurcations in cerebrovascular models generated from existing segmentations. To localize bifurcations of the Circle of Willis, candidate paths for the adjacent vessels of interest are identified using registered landmarks. From those paths, the optimal ones are extracted by recursively maximizing an objective function for all adjacent vessels starting from a bifurcation to avoid erroneous paths and compensate for stroke. RESULTS: In 100 CTA stroke data sets for evaluation, 6 bifurcation locations are placed correctly in 85% of cases; 92.5% when allowing a margin of 5 mm. On average, 14 vessels of interest are found in 90% of the cases and traced correctly end-to-end in 73.5%. The baseline achieves similar detection rates but only 35.5% of the arteries are traced in full. CONCLUSION: Formulating the vessel labeling process as a maximization task for bifurcation matching can vastly improve accurate vessel tracing. The proposed algorithm only uses simple features and does not require expensive training data.
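A simplified, non-recursive sketch of the core step, scoring candidate paths for each vessel adjacent to a bifurcation and keeping the best one, is given below; the objective function and the path representation are illustrative assumptions, not the published implementation.

```python
# Hedged sketch: per-bifurcation candidate path selection by objective maximization.
# Candidate paths per vessel of interest are assumed given as lists of point
# features; the objective is a simplified placeholder.
from typing import Dict, List

def objective(path: List[dict]) -> float:
    # Placeholder score: prefer long paths with large average radius.
    if not path:
        return float("-inf")
    return len(path) + sum(p["radius"] for p in path) / len(path)

def select_paths(candidates_per_vessel: Dict[str, List[List[dict]]]) -> Dict[str, List[dict]]:
    """For each vessel adjacent to a bifurcation, keep the best-scoring candidate path."""
    return {vessel: max(candidates, key=objective)
            for vessel, candidates in candidates_per_vessel.items()}

# Toy usage for one bifurcation (e.g., the carotid T) with two adjacent vessels.
candidates = {
    "MCA": [[{"radius": 1.2}, {"radius": 1.1}], [{"radius": 0.4}]],
    "ACA": [[{"radius": 0.9}], [{"radius": 1.0}, {"radius": 0.8}, {"radius": 0.7}]],
}
for vessel, path in select_paths(candidates).items():
    print(vessel, len(path), "points")
```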
Subject(s)
Stroke, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Magnetic Resonance Angiography/methods, Algorithms, Cerebral Angiography/methods
ABSTRACT
During the diagnosis of ischemic strokes, the Circle of Willis and its surrounding vessels are the arteries of interest. Their visualization in case of an acute stroke is often enabled by Computed Tomography Angiography (CTA). Still, the identification and analysis of the cerebral arteries remain time-consuming in such scans due to a large number of peripheral vessels which may disturb the visual impression. We propose VirtualDSA++, an algorithm designed to segment and label the cerebrovascular tree on CTA scans. Especially with stroke patients, labeling is a delicate procedure, as in the worst case whole hemispheres may not be present due to impeded perfusion. Hence, we extended the labeling mechanism for the cerebral arteries to identify occluded vessels. In the work at hand, we place the algorithm in a clinical context by evaluating the labeling and occlusion detection on stroke patients, achieving labeling sensitivities between 92% and 95%, comparable to other works. To the best of our knowledge, ours is the first work to address labeling and occlusion detection at once, whereby a sensitivity of 67% and a specificity of 81% were obtained for the latter. VirtualDSA++ also automatically segments and models the intracranial vascular system, enabling further processing. We present the generic concept of an iterative systematic search for pathways on all nodes of said model, which enables new interactive features. As examples, we derive in detail, first, the interactive planning of vascular interventions such as mechanical thrombectomy and, second, the interactive suppression of vessel structures that are not of interest in diagnosing strokes (such as veins). We discuss both features as well as further possibilities emerging from the proposed concept.
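The path search on the vessel model that underlies the interactive features can be illustrated with a standard weighted shortest-path query. The sketch below uses networkx on a toy vessel graph with made-up segment lengths; it is not the VirtualDSA++ model or search itself.

```python
# Hedged sketch: path search on a vessel-tree graph, e.g. from an access vessel
# towards an assumed occlusion site, using a generic shortest-path query.
# The toy graph and edge weights (segment lengths in mm) are assumptions.
import networkx as nx

G = nx.Graph()
edges = [
    ("ICA_left", "ICA_terminus", 35.0),
    ("ICA_terminus", "M1_left", 12.0),
    ("M1_left", "M2_left_sup", 18.0),
    ("ICA_terminus", "A1_left", 10.0),
]
for u, v, length_mm in edges:
    G.add_edge(u, v, weight=length_mm)

# Plan a path from the access vessel to the assumed occlusion location.
path = nx.shortest_path(G, source="ICA_left", target="M2_left_sup", weight="weight")
length = nx.shortest_path_length(G, source="ICA_left", target="M2_left_sup", weight="weight")
print(" -> ".join(path), f"({length:.0f} mm)")
```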
Subject(s)
Ischemic Stroke, Stroke, Algorithms, Cerebral Angiography/methods, Computed Tomography Angiography/methods, Humans, Stroke/diagnostic imaging
ABSTRACT
PURPOSE: The primary aim was to investigate the diagnostic performance of an Artificial Intelligence (AI) algorithm for pneumoperitoneum detection in patients with acute abdominal pain who underwent an abdominal CT scan. METHOD: This retrospective diagnostic test accuracy study used a consecutive patient cohort from the Acute High-risk Abdominal patient population at Herlev and Gentofte Hospital, Denmark, between January 1, 2019 and September 25, 2019. As reference standard, all studies were rated for pneumoperitoneum (subgroups: none, small, medium, and large amounts) by a gastrointestinal radiology consultant. The index test was a novel AI algorithm based on a sliding window approach with a deep recurrent neural network at its core. The primary outcome was the area under the curve (AUC) of the receiver operating characteristic (ROC). RESULTS: Of 331 included patients (median age 68 years; range 19-100; 180 women), 31 patients (9%) had pneumoperitoneum (large: 16, moderate: 7, small: 8). The AUC was 0.77 (95% CI 0.66-0.87). At a specificity of 99% (297/300, 95% CI 97-100%), sensitivity was 52% (16/31, 95% CI 29-65%), and the positive likelihood ratio was 52 (95% CI 16-165). When excluding cases with smaller amounts of free air (<0.25 mL), the AUC increased to 0.96 (95% CI 0.89-1.0). At 99% specificity, sensitivity was 81% (13/16) and the positive likelihood ratio was 82 (95% CI 27-254). CONCLUSIONS: An AI algorithm identified pneumoperitoneum on CT scans in a clinical setting with low sensitivity but very high specificity, supporting its role for ruling in pneumoperitoneum.
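The operating-point analysis reported above, sensitivity at a fixed high specificity plus the positive likelihood ratio, follows directly from the ROC curve. The hedged sketch below shows the computation on synthetic scores, not the study data.

```python
# Hedged sketch: pick the ROC operating point with specificity >= 99% and
# report sensitivity and the positive likelihood ratio there.
# Scores and labels are synthetic stand-ins for the AI output.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.09, size=331)                     # ~9% prevalence
y_score = np.where(y_true == 1, rng.normal(2.0, 1.0, 331), rng.normal(0.0, 1.0, 331))

print("AUC:", round(roc_auc_score(y_true, y_score), 3))

fpr, tpr, thresholds = roc_curve(y_true, y_score)
specificity = 1.0 - fpr
mask = specificity >= 0.99
best = np.argmax(tpr[mask])                                   # highest sensitivity at >=99% spec.
sens, spec = tpr[mask][best], specificity[mask][best]
lr_pos = sens / (1.0 - spec) if spec < 1.0 else float("inf")  # LR+ = sens / (1 - spec)
print(f"sensitivity {sens:.2f} at specificity {spec:.2f}, LR+ {lr_pos:.1f}")
```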
Subject(s)
Acute Abdomen, Pneumoperitoneum, Abdominal Pain/diagnostic imaging, Abdominal Pain/etiology, Adult, Aged, Aged 80 and over, Artificial Intelligence, Routine Diagnostic Tests, Female, Humans, Middle Aged, Pneumoperitoneum/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography, Young Adult
ABSTRACT
PURPOSE: In the literature on automated phenotyping of chronic obstructive pulmonary disease (COPD), there is a multitude of isolated classical machine learning and deep learning techniques, mostly investigating individual phenotypes, with small study cohorts and heterogeneous meta-parameters, e.g., different scan protocols or segmented regions. The objective is to compare the impact of different experimental setups, i.e., varying meta-parameters related to image formation and data representation, with the impact of the learning technique for subtyping automation for a variety of phenotypes. The identified associations of these parameters with automation performance and their interactions might be a first step towards a determination of optimal meta-parameters, i.e., a meta-strategy. METHODS: A clinical cohort of 981 patients (53.8 ± 15.1 years, 554 male) was examined. The inspiratory CT images were analyzed to automate the diagnosis of 13 COPD phenotypes given by two radiologists. A benchmark feature set that integrates many quantitative criteria was extracted from the lung and used to train a variety of learning algorithms on the first 654 patients (two thirds); each algorithm then retrospectively assessed the remaining 327 patients (one third). The automation performance was evaluated by the area under the receiver operating characteristic curve (AUC). 1717 experiments were conducted with varying meta-parameters such as reconstruction kernel, segmented regions and input dimensionality, i.e., number of extracted features. The association of the meta-parameters with the automation performance was analyzed by a multivariable general linear model decomposition of the automation performance into the contributions of the meta-parameters and the learning technique. RESULTS: The automation performance varied strongly for varying meta-parameters. For emphysema-predominant phenotypes, an AUC of 93%-95% could be achieved for the best meta-configuration. The airways-predominant phenotypes led to a lower performance of 65%-85%, while smooth kernel configurations on average were unexpectedly superior to those with sharp kernels. The performance impact of meta-parameters, even that of often neglected ones like missing-data imputation, was in general larger than that of the learning technique. Advanced learning techniques like 3D deep learning or automated machine learning yielded inferior automation performance for non-optimal meta-configurations in comparison to simple techniques with suitable meta-configurations. The best automation performance was achieved by a combination of modern learning techniques and a suitable meta-configuration. CONCLUSIONS: Our results indicate that for COPD phenotype automation, study design parameters such as reconstruction kernel and the model input dimensionality should be adapted to the learning technique and may be more important than the technique itself. To achieve optimal automation and prediction results, the interaction between those meta-parameters and the learning technique should be considered. This might be particularly relevant for the development of specific scan protocols for novel learning algorithms, and towards an understanding of good study design for automated phenotyping.
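The multivariable decomposition of automation performance into meta-parameter and learning-technique contributions can be set up as an ordinary linear model with categorical factors. The hedged sketch below uses statsmodels on a synthetic experiment table; the column names and effect sizes are assumptions, not the study data.

```python
# Hedged sketch: decompose experiment AUCs into contributions of meta-parameters
# and learning technique via a linear model with categorical factors.
# The experiment table here is synthetic and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "kernel": rng.choice(["smooth", "sharp"], n),
    "region": rng.choice(["lung", "lobes", "core"], n),
    "technique": rng.choice(["random_forest", "3d_deep_learning", "automl"], n),
})
df["auc"] = (0.80
             + 0.05 * (df["kernel"] == "smooth")
             + 0.02 * (df["technique"] == "automl")
             + rng.normal(0, 0.03, n))

model = smf.ols("auc ~ C(kernel) + C(region) + C(technique)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # variance attributed to each factor
```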
Subject(s)
Chronic Obstructive Pulmonary Disease, Pulmonary Emphysema, Automation, Humans, Male, Chronic Obstructive Pulmonary Disease/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography
ABSTRACT
OBJECTIVES: To investigate the prediction of 1-year survival (1-YS) in patients with metastatic colorectal cancer using a systematic comparative analysis of quantitative imaging biomarkers (QIBs) based on the geometric and radiomics analysis of whole liver tumor burden (WLTB), in comparison to predictions based on the tumor burden score (TBS), WLTB volume alone, and a clinical model. METHODS: A total of 103 patients (mean age: 61.0 ± 11.2 years) with colorectal liver metastases were analyzed in this retrospective study. Automatic segmentations of WLTB from baseline contrast-enhanced CT images were used. Established biomarkers as well as standard radiomics model building were used to derive 3 prognostic models. The benefits of a geometric metastatic spread (GMS) model, the Aerts radiomics prior model of the WLTB, and the performance of TBS and WLTB volume alone were assessed. All models were analyzed in both statistical and predictive machine learning settings in terms of AUC. RESULTS: TBS showed the best discriminative performance in a statistical setting to discriminate 1-YS (AUC = 0.70, CI: [0.56, 0.90]). For the machine learning-based prediction for unseen patients, both a model of the GMS of WLTB (0.73, CI: [0.60, 0.84]) and the Aerts radiomics prior model (0.76, CI: [0.65, 0.86]) applied on the WLTB showed a numerically higher predictive performance than TBS (0.68, CI: [0.54, 0.79]), radiomics (0.65, CI: [0.55, 0.78]), WLTB volume alone (0.53, CI: [0.40, 0.66]), or the clinical model (0.56, CI: [0.43, 0.67]). CONCLUSIONS: The imaging-based GMS model may be a first step towards a more fine-grained machine learning extension of the TBS concept for risk stratification in mCRC patients without the vulnerability to technical variance of radiomics. KEY POINTS: • CT-based geometric distribution and radiomics analysis of whole liver tumor burden in metastatic colorectal cancer patients yield prognostic information. • Differences in survival are possibly attributable to the spatial distribution of metastatic lesions, and the geometric metastatic spread analysis of all liver metastases may serve as a robust imaging biomarker invariant to technical variation. • Imaging-based prediction models outperform clinical models for 1-year survival prediction in metastatic colorectal cancer patients with liver metastases.
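For reference, the tumor burden score is commonly defined in the literature as the Pythagorean combination of the maximum lesion diameter and the number of lesions; since the abstract does not restate the definition, that formula and the toy measurements below are assumptions for illustration only.

```python
# Hedged sketch of the tumor burden score (TBS), here assumed as
# sqrt(max_diameter^2 + n_lesions^2); lesion diameters below are made up.
import math

def tumor_burden_score(lesion_diameters_cm: list) -> float:
    if not lesion_diameters_cm:
        return 0.0
    return math.hypot(max(lesion_diameters_cm), len(lesion_diameters_cm))

# Toy patient with four liver metastases (diameters in cm).
print(round(tumor_burden_score([3.1, 1.2, 0.8, 2.4]), 2))  # hypot(3.1, 4) ≈ 5.06
```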
Subject(s)
Neoplasms, X-Ray Computed Tomography, Aged, Humans, Liver, Middle Aged, Prognosis, Retrospective Studies, Tumor Burden
ABSTRACT
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
RESUMEN
The goal of radiomics is to convert medical images into a minable data space by extraction of quantitative imaging features for clinically relevant analyses, e.g. survival time prediction of a patient. One problem of radiomics from computed tomography is the impact of technical variation, such as reconstruction kernel variation within a study. Additionally, what is often neglected is the impact of inter-patient technical variation resulting from patient characteristics, even when scan and reconstruction parameters are constant. In our approach, measurements within 3D regions of interest (ROIs) are calibrated by further ROIs, such as air, adipose tissue, and liver, that are used as control regions (CRs). Our goal is to derive general rules for an automated internal calibration that enhances prediction, based on the analysed features and a set of CRs. We define qualification criteria, motivated by status-quo radiomics stability analysis techniques, to only collect information from the CRs that is relevant for a respective task. These criteria are used in an optimisation to automatically derive a suitable internal calibration for prediction tasks based on the CRs. Our calibration enhanced the performance for centrilobular emphysema prediction in a COPD study and for prediction of patients' one-year survival in an oncological study.
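A minimal form of the internal calibration idea, re-expressing an ROI measurement relative to control-region measurements from the same scan, is sketched below. The chosen feature, control regions, and linear correction are illustrative assumptions, not the automatically derived rules.

```python
# Hedged sketch of internal calibration against control regions (CRs):
# a tumor-ROI feature is re-expressed relative to CR measurements from the
# same scan, reducing inter-patient technical variation.
# Feature choice and CRs are illustrative assumptions.

def calibrate(roi_value: float, cr_values: dict) -> float:
    """Shift-and-scale the ROI measurement using air and adipose-tissue CRs."""
    low, high = cr_values["air"], cr_values["adipose"]
    return (roi_value - low) / (high - low)

# Toy example: mean HU in a tumor ROI, calibrated per scan.
scan_a = {"air": -1005.0, "adipose": -95.0}
scan_b = {"air": -990.0, "adipose": -80.0}
print(round(calibrate(40.0, scan_a), 3), round(calibrate(40.0, scan_b), 3))
```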
Subject(s)
Biomarkers, Calibration, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, X-Ray Computed Tomography/methods, Aged, Emphysema/mortality, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Chronic Obstructive Pulmonary Disease/diagnostic imaging, Chronic Obstructive Pulmonary Disease/mortality, Survival Rate
ABSTRACT
PURPOSE: The application of traditional machine learning techniques, in the form of regression models based on conventional, "hand-crafted" features, to artifact reduction in limited angle tomography is investigated. METHODS: Mean-variation-median (MVM), Laplacian, Hessian, and shift-variant data loss (SVDL) features are extracted from the images reconstructed from limited angle data. The regression models linear regression (LR), multilayer perceptron (MLP), and reduced-error pruning tree (REPTree) are applied to predict artifact images. RESULTS: REPTree learns artifacts best and reaches the smallest root-mean-square error (RMSE) of 29 HU for the Shepp-Logan phantom in a parallel-beam study. Further experiments demonstrate that the MVM and Hessian features complement each other, whereas the Laplacian feature is redundant in the presence of MVM. In fan-beam geometry, the SVDL features are also beneficial. A preliminary experiment on clinical data in a fan-beam study demonstrates that REPTree can reduce some artifacts for clinical data. However, it is not sufficient, as many incorrect pixel intensities remain in the estimated reconstruction images. CONCLUSION: REPTree has the best performance on learning artifacts in limited angle tomography compared with LR and MLP. The features of MVM, Hessian, and SVDL are beneficial for artifact prediction in limited angle tomography. Preliminary experiments on clinical data suggest that investigation of more features is necessary for clinical applications of REPTree.
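A hedged sketch of the regression setup, predicting a per-pixel artifact value from hand-crafted features with a pruned tree and reporting RMSE, is shown below. It uses scikit-learn's DecisionTreeRegressor as a stand-in for REPTree (a Weka learner) and random features instead of the MVM/Hessian/SVDL features.

```python
# Hedged sketch: per-pixel artifact regression from hand-crafted features.
# DecisionTreeRegressor stands in for REPTree; features and targets are random
# stand-ins for MVM/Hessian/SVDL features and artifact images.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X_train = rng.normal(size=(5000, 4))             # 4 features per pixel
y_train = X_train @ np.array([30.0, -10.0, 5.0, 0.0]) + rng.normal(0, 5, 5000)
X_test = rng.normal(size=(1000, 4))
y_test = X_test @ np.array([30.0, -10.0, 5.0, 0.0]) + rng.normal(0, 5, 1000)

tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, tree.predict(X_test)))
print(f"RMSE: {rmse:.1f} HU")                    # artifact prediction error
```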
Subject(s)
Computer-Assisted Image Processing/methods, Machine Learning, X-Ray Computed Tomography/methods, Algorithms, Artifacts, Humans, Imaging Phantoms
ABSTRACT
Rotational coronary angiography using C-arm angiography systems enables intra-procedural 3-D imaging that is considered beneficial for diagnostic assessment and interventional guidance. Despite previous efforts, rotational angiography has not yet been successfully established in clinical practice for coronary artery procedures due to challenges associated with substantial intra-scan respiratory and cardiac motion. While gating handles cardiac motion during reconstruction, respiratory motion requires compensation. State-of-the-art algorithms rely on 3-D / 2-D registration that requires an uncompensated reconstruction of sufficient quality. To overcome this limitation, we investigate two prior-free respiratory motion estimation methods based on the optimization of: 1) epipolar consistency conditions (ECCs) and 2) a task-based auto-focus measure (AFM). The methods assess redundancies in projection images or impose favorable properties of 3-D space, respectively, and are used to estimate the respiratory motion of the coronary arteries within rotational angiograms. We evaluate our algorithms on the publicly available CAVAREV benchmark and on clinical data. We quantify reductions in error due to respiratory motion compensation using a dedicated reconstruction domain metric. Moreover, we study the improvements in image quality when using an analytic and a novel temporal total variation regularized algebraic reconstruction algorithm. We observed substantial improvement in all figures of merit compared with the uncompensated case. Improvements in image quality manifested as a reduction of double edges, blurring, and noise. Benefits of the proposed corrections were notable even in cases suffering little corruption from respiratory motion, translating to an improvement in vessel sharpness of (6.08 ± 4.46)% and (14.7 ± 8.80)% when the ECC-based and the AFM-based compensation were applied. On the CAVAREV data, our motion compensation approach exhibits an improvement of (27.6 ± 7.5)% and (97.0 ± 17.7)% when the ECC and AFM were used, respectively. At the time of writing, our method based on the AFM is leading the CAVAREV scoreboard. Both motion estimation strategies are purely image-based and accurately estimate the displacements of the coronary arteries due to respiration. While current evidence suggests the superior performance of the AFM, future work will further investigate the use of ECCs in the context of angiography, as they rely solely on geometric calibration and projection-domain images.
Subject(s)
Coronary Angiography/methods, Three-Dimensional Imaging/methods, Algorithms, Digital Subtraction Angiography/methods, Humans, Imaging Phantoms
ABSTRACT
PURPOSE: The performance of many state-of-the-art coronary artery centerline reconstruction algorithms in rotational angiography heavily depends on accurate two-dimensional centerline information that, in practice, is not available due to segmentation errors. To alleviate the need for correct segmentations, we propose generic extensions to symbolic centerline reconstruction algorithms that target symmetrization, outlier rejection, and topology recovery on asymmetrically reconstructed point clouds. METHODS: Epipolar geometry- and graph cut-based reconstruction algorithms are used to reconstruct three-dimensional point clouds from centerlines in reference views. These clouds are input to the proposed methods that consist of (a) merging of asymmetric reconstructions, (b) removal of inconsistent three-dimensional points using the reprojection error, and (c) projection domain-informed geodesic computation. We validate our extensions in a numerical phantom study and on two clinical datasets. RESULTS: In the phantom study, the overlap measure between the reconstructed point clouds and the three-dimensional ground truth increased from 68.4 ± 9.6% to 85.9 ± 3.3% when the proposed extensions were applied. In addition, the averaged mean and maximum reprojection error decreased from 4.32 ± 3.03 mm to 0.189 ± 0.182 mm and from 8.39 ± 6.08 mm to 0.392 ± 0.434 mm. For the clinical data, the mean and maximum reprojection error improved from 1.73 ± 0.97 mm to 0.882 ± 0.428 mm and from 3.83 ± 1.87 mm to 1.48 ± 0.61 mm, respectively. CONCLUSIONS: The application of the proposed extensions yielded superior reconstruction quality in all cases and effectively removed erroneously reconstructed points. Future work will investigate possibilities to integrate parts of the proposed extensions directly into reconstruction.
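The reprojection-error criterion used for removing inconsistent points can be written in a few lines: project each 3-D point with the view's projection matrix, measure the 2-D distance to the nearest observed centerline point, and discard points above a threshold. The sketch below is a hedged, simplified version with a toy projection matrix and made-up points.

```python
# Hedged sketch: reject reconstructed 3-D points whose reprojection error
# (distance to the nearest observed 2-D centerline point) exceeds a threshold.
# The projection matrix and point sets are toy values.
import numpy as np

def project(P, pts3d):
    """Project Nx3 points with a 3x4 projection matrix to Nx2 pixel coordinates."""
    homog = np.hstack([pts3d, np.ones((pts3d.shape[0], 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_error(P, pts3d, centerline2d):
    """Distance of each projected 3-D point to its nearest 2-D centerline point."""
    proj = project(P, pts3d)
    d = np.linalg.norm(proj[:, None, :] - centerline2d[None, :, :], axis=2)
    return d.min(axis=1)

# Toy pinhole geometry: focal length 1000 px, principal point (256, 256),
# camera 800 mm in front of the origin.
K = np.array([[1000.0, 0.0, 256.0], [0.0, 1000.0, 256.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), [[0.0], [0.0], [800.0]]])

pts3d = np.array([[0.0, 0.0, 200.0], [5.0, 1.0, 210.0], [50.0, 40.0, 150.0]])
centerline2d = np.array([[256.0, 256.0], [260.0, 258.0]])

errors = reprojection_error(P, pts3d, centerline2d)
kept = pts3d[errors < 2.0]                        # discard points above 2 px error
print(errors.round(2), kept.shape)                # the last point is rejected
```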
Subject(s)
Coronary Angiography, Computer-Assisted Image Processing/methods, Rotation, Algorithms, Humans, Imaging Phantoms
ABSTRACT
PURPOSE: Detailed analysis of cardiac motion would be helpful for supporting clinical workflow in the interventional suite. With an angiographic C-arm system, multiple heart phases can be reconstructed using electrocardiogram gating. However, the resulting angular undersampling is highly detrimental to the quality of the reconstructed images, especially in nonideal intraprocedural imaging conditions. Motion-compensated reconstruction has previously been shown to alleviate this problem, but it heavily relies on a preliminary reconstruction suitable for motion estimation. In this work, the authors propose a processing pipeline tailored to augment these initial images for the purpose of motion estimation and assess how it affects the final images after motion compensation. METHODS: The following combination of simple, direct methods inspired by the core ideas of existing approaches proved beneficial: (a) Streak reduction by masking high-intensity components in projection domain after filtering. (b) Streak reduction by subtraction of estimated artifact volumes in reconstruction domain. (c) Denoising in spatial domain using a joint bilateral filter guided by an uncompensated reconstruction. (d) Denoising in temporal domain using an adaptive Gaussian smoothing based on a novel motion detection scheme. RESULTS: Experiments on a numerical heart phantom yield a reduction of the relative root-mean-square error from 89.9% to 3.6% and an increase of correlation with the ground truth from 95.763% to 99.995% for the motion-compensated reconstruction when the authors' processing is applied to the initial images. In three clinical patient data sets, the signal-to-noise ratio measured in an ideally homogeneous region is increased by 37.7% on average. Overall visual appearance is improved notably and some anatomical features are more readily discernible. CONCLUSIONS: The authors' findings suggest that the proposed sequence of steps provides a clear advantage over an arbitrary sequence of individual image enhancement methods and is fit to overcome the issue of lacking image quality in motion-compensated C-arm imaging of the heart. As for future work, the obtained results pave the way for investigating how accurately cardiac functional motion parameters can be determined with this modality.
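The temporal denoising step (d) can be illustrated with a simple motion-adaptive scheme: smooth the gated phases along the temporal axis with a Gaussian and keep the original values where a frame-difference motion measure is high. The sketch below is a hedged toy version of that idea on random data, not the authors' exact scheme.

```python
# Hedged sketch of motion-adaptive temporal smoothing over gated volumes:
# smooth strongly along the phase axis where a simple motion measure is low,
# keep the original values where motion is high. Toy data, simplified scheme.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def adaptive_temporal_smoothing(frames: np.ndarray, sigma: float = 1.5,
                                motion_threshold: float = 0.1) -> np.ndarray:
    """frames: (T, H, W) sequence of reconstructed cardiac phases."""
    smoothed = gaussian_filter1d(frames, sigma=sigma, axis=0)
    # Simple per-voxel motion measure: maximum absolute frame-to-frame change.
    motion = np.abs(np.diff(frames, axis=0)).max(axis=0)
    weight = np.clip(motion / motion_threshold, 0.0, 1.0)    # 0 = static, 1 = moving
    return weight * frames + (1.0 - weight) * smoothed

frames = np.random.default_rng(4).normal(size=(10, 64, 64)).astype(np.float32)
out = adaptive_temporal_smoothing(frames)
print(out.shape)  # (10, 64, 64)
```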