Results 1 - 20 of 23
1.
Comput Methods Programs Biomed ; 248: 108115, 2024 May.
Article in English | MEDLINE | ID: mdl-38503072

ABSTRACT

BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning-based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism. METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To further evaluate our model on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model performs close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is the initial step towards achieving truly physics-based MR image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution.
Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
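
The analytical steady-state solution underpinning such a simulation can be sketched for a spoiled gradient-echo readout; the tissue values and sequence parameters below are illustrative assumptions, not the framework's actual settings:

```python
import numpy as np

# Steady-state spoiled gradient-echo (SPGR) signal from the Bloch
# equations; assigning one signal value per tissue label yields a
# synthetic T1w contrast for a labeled phantom.
def spgr_signal(pd, t1, t2s, tr, te, flip_deg):
    e1 = np.exp(-tr / t1)
    a = np.deg2rad(flip_deg)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

# Hypothetical tissue properties (PD, T1 in ms, T2* in ms).
tissues = {"WM": (0.70, 850.0, 50.0),
           "GM": (0.85, 1300.0, 60.0),
           "CSF": (1.00, 4000.0, 500.0)}

label_map = np.array([[0, 1], [2, 1]])   # toy 2x2 label image (WM/GM/CSF)
lut = np.array([spgr_signal(pd, t1, t2s, tr=10.0, te=3.0, flip_deg=15.0)
                for pd, t1, t2s in tissues.values()])
image = lut[label_map]                   # simulated intensity per voxel
```

With these short-TR settings the lookup table reproduces the familiar T1w ordering (WM brighter than GM, GM brighter than CSF), which is the contrast behavior such a simulator needs to capture.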


Subject(s)
Deep Learning, Humans, Brain/diagnostic imaging, Brain/anatomy & histology, Magnetic Resonance Imaging/methods, Algorithms, Neuroimaging/methods, Image Processing, Computer-Assisted/methods
2.
Comput Med Imaging Graph ; 112: 102332, 2024 03.
Article in English | MEDLINE | ID: mdl-38245925

ABSTRACT

Accurate brain tumor segmentation is critical for diagnosis and treatment planning, for which multi-modal magnetic resonance imaging (MRI) is typically used. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
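
The low-frequency amplitude swap at the heart of FDA can be sketched as follows; the window fraction `beta` and the toy random images are illustrative assumptions, not the study's configuration:

```python
import numpy as np

# Fourier domain adaptation sketch: the synthetic image keeps its
# phase (content) but takes the low-frequency amplitude (style) of
# a real image; beta controls the swapped low-frequency window.
def fda_transfer(src, trg, beta=0.1):
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(trg))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Replace the centred low-frequency band of the source amplitude.
    amp_s[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_t[ch - b:ch + b + 1, cw - b:cw + b + 1]
    out = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s)))
    return np.real(out)

rng = np.random.default_rng(0)
synthetic = rng.random((64, 64))
real = rng.random((64, 64))
adapted = fda_transfer(synthetic, real, beta=0.05)
```

Because the swapped band includes the DC term, the adapted image inherits the real image's mean intensity while its structure comes from the synthetic phase.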


Subject(s)
Brain Neoplasms, Glioma, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Brain Neoplasms/diagnostic imaging, Algorithms, Magnetic Resonance Imaging/methods
3.
Comput Biol Med ; 161: 106973, 2023 07.
Article in English | MEDLINE | ID: mdl-37209615

ABSTRACT

Cardiac magnetic resonance (CMR) image segmentation is an integral step in the analysis of cardiac function and diagnosis of heart-related diseases. While recent deep learning-based approaches in automatic segmentation have shown great promise to alleviate the need for manual segmentation, most of these are not applicable to realistic clinical scenarios. This is largely due to training on mainly homogeneous datasets that lack the variation in acquisition typical of multi-vendor and multi-site settings, as well as pathological data. Such approaches frequently exhibit a degradation in prediction performance, particularly on outlier cases commonly associated with difficult pathologies, artifacts and extensive changes in tissue shape and appearance. In this work, we present a model aimed at segmenting all three cardiac structures in a multi-center, multi-disease and multi-view scenario. We propose a pipeline, addressing different challenges with segmentation of such heterogeneous data, consisting of heart region detection, augmentation through image synthesis and a late-fusion segmentation approach. Extensive experiments and analysis demonstrate the ability of the proposed approach to tackle the presence of outlier cases during both training and testing, allowing for better adaptation to unseen and difficult examples. Overall, we show that the effective reduction of segmentation failures on outlier cases has a positive impact on not only the average segmentation performance, but also on the estimation of clinical parameters, leading to a better consistency in derived metrics.


Subject(s)
Algorithms, Heart Diseases, Humans, Magnetic Resonance Imaging/methods, Heart/diagnostic imaging, Radiography, Image Processing, Computer-Assisted/methods
4.
IEEE Trans Med Imaging ; 42(3): 726-738, 2023 03.
Article in English | MEDLINE | ID: mdl-36260571

ABSTRACT

One of the limiting factors for the development and adoption of novel deep-learning (DL) based medical image analysis methods is the scarcity of labeled medical images. Medical image simulation and synthesis can provide solutions by generating ample training data with corresponding ground truth labels. Despite recent advances, generated images demonstrate limited realism and diversity. In this work, we develop a flexible framework for simulating cardiac magnetic resonance (MR) images with variable anatomical and imaging characteristics for the purpose of creating a diversified virtual population. We advance previous works on both cardiac MR image simulation and anatomical modeling to increase the realism in terms of both image appearance and underlying anatomy. To diversify the generated images, we define parameters 1) to alter the anatomy, 2) to assign MR tissue properties to various tissue types, and 3) to manipulate the image contrast via acquisition parameters. The proposed framework is optimized to generate a substantial number of cardiac MR images with ground truth labels suitable for downstream supervised tasks. A database of virtual subjects is simulated and its usefulness for aiding a DL segmentation method is evaluated. Our experiments show that a model trained entirely on simulated images can perform comparably to a model trained with real images for heart cavity segmentation in mid-ventricular slices. Moreover, such data can be used in addition to classical augmentation for boosting the performance when training data is limited, particularly by increasing the contrast and anatomical variation, leading to better regularization and generalization. The database is publicly available at https://osf.io/bkzhm/ and the simulation code will be available at https://github.com/sinaamirrajab/CMRI.
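
How such a virtual population might be parameterized along the three axes above can be sketched as below; every parameter name and range here is a hypothetical stand-in, not a value from the published framework:

```python
import random

# Hypothetical sketch of drawing one virtual subject: anatomy,
# tissue properties and acquisition settings are sampled
# independently to diversify the simulated population.
def sample_virtual_subject(rng):
    return {
        # 1) anatomical variation: global scaling of the heart model
        "heart_scale": rng.uniform(0.85, 1.15),
        # 2) tissue properties: per-tissue relaxation times in ms
        "myocardium_t1": rng.uniform(900.0, 1100.0),
        "myocardium_t2": rng.uniform(40.0, 60.0),
        # 3) acquisition parameters controlling contrast and noise
        "flip_angle_deg": rng.choice([35, 45, 55]),
        "snr": rng.uniform(10.0, 30.0),
    }

rng = random.Random(42)
population = [sample_virtual_subject(rng) for _ in range(100)]
```

Each sampled dictionary would drive one simulation run, so the size and diversity of the virtual database is controlled entirely by the sampling loop.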


Subject(s)
Heart, Magnetic Resonance Imaging, Humans, Heart/diagnostic imaging, Computer Simulation
5.
Med Image Anal ; 84: 102688, 2023 02.
Article in English | MEDLINE | ID: mdl-36493702

ABSTRACT

Deep learning-based segmentation methods provide an effective and automated way for assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN, to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps belonging to both the heart and the surrounding tissue, reinforcing the synthesis of semantically-consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, whereby the obtained results demonstrate the potential of the proposed method in tackling the presence of the domain shift in medical data.
Finally, we thoroughly analyze the quality of synthetic data and its ability to replace real MR images during training, as well as provide an insight into important aspects of utilizing synthetic images for segmentation.
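
The two reported metrics, Dice score and Hausdorff distance, can be sketched with a plain reference implementation on binary masks; this is a generic implementation, not the study's evaluation code:

```python
import numpy as np

# Dice overlap and symmetric Hausdorff distance between two binary
# segmentation masks, computed directly from foreground voxels.
def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    pa = np.argwhere(a)                  # foreground coordinates of a
    pb = np.argwhere(b)                  # foreground coordinates of b
    # pairwise Euclidean distances between the two point sets
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares, offset by one voxel diagonally.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
```

For these toy masks the Dice score is 0.5625 and the Hausdorff distance is sqrt(2), illustrating why the paper reports both: Dice measures bulk overlap while Hausdorff penalizes the worst boundary error.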


Subject(s)
Deep Learning, Humans, Magnetic Resonance Imaging, Heart/diagnostic imaging, Tomography, X-Ray Computed, Image Processing, Computer-Assisted/methods
6.
Comput Med Imaging Graph ; 101: 102123, 2022 10.
Article in English | MEDLINE | ID: mdl-36174308

ABSTRACT

Synthesis of a large set of high-quality medical images with variability in anatomical representation and image appearance has the potential to provide solutions for tackling the scarcity of properly annotated data in medical image analysis research. In this paper, we propose a novel framework consisting of image segmentation and synthesis based on mask-conditional GANs for generating high-fidelity and diverse Cardiac Magnetic Resonance (CMR) images. The framework consists of two modules: i) a segmentation module trained using a physics-based simulated database of CMR images to provide multi-tissue labels on real CMR images, and ii) a synthesis module trained using pairs of real CMR images and corresponding multi-tissue labels, to translate input segmentation masks to realistic-looking cardiac images. The anatomy of synthesized images is based on labels, whereas the appearance is learned from the training images. We investigate the effects of the number of tissue labels, quantity of training data, and multi-vendor data on the quality of the synthesized images. Furthermore, we evaluate the effectiveness and usability of the synthetic data for a downstream task of training a deep-learning model for cardiac cavity segmentation in the scenarios of data replacement and augmentation. The results of the replacement study indicate that segmentation models trained with only synthetic data can achieve comparable performance to the baseline model trained with real data, indicating that the synthetic data captures the essential characteristics of its real counterpart. Furthermore, we demonstrate that augmenting real with synthetic data during training can significantly improve both the Dice score (maximum increase of 4%) and Hausdorff Distance (maximum reduction of 40%) for cavity segmentation, suggesting a good potential to aid in tackling medical data scarcity.


Subject(s)
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Databases, Factual, Heart/diagnostic imaging, Image Processing, Computer-Assisted/methods
7.
Med Image Anal ; 80: 102469, 2022 08.
Article in English | MEDLINE | ID: mdl-35640385

ABSTRACT

Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
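
The click-to-stop evolution described above can be sketched as a loop; here a toy region-growing update stands in for the learned recurrent network, so only the control flow (single click, stepwise evolution, automatic stop, inspectable history) mirrors the method:

```python
import numpy as np

# Stand-in for the learned "next step": grow the current mask into
# bright 4-neighbours. A trained RNN would predict this update.
def next_step(image, seg, threshold=0.5):
    grown = seg.copy()
    ys, xs = np.where(seg)
    for y, x in zip(ys, xs):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                if image[ny, nx] > threshold:
                    grown[ny, nx] = True
    stop = np.array_equal(grown, seg)   # stand-in for a learned stop signal
    return grown, stop

image = np.zeros((16, 16)); image[4:12, 4:12] = 1.0   # toy bright structure
seg = np.zeros((16, 16), dtype=bool)
seg[8, 8] = True                        # a single user click seeds the model
history = [seg]                         # keep every step for user refinement
for _ in range(20):
    seg, stop = next_step(image, seg)
    history.append(seg)
    if stop:
        break
```

Keeping `history` is the point of the design: the user can pick an earlier or later segmentation from the sequence instead of editing voxels by hand.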


Subject(s)
Heart Defects, Congenital, Neural Networks, Computer, Heart/diagnostic imaging, Heart Defects, Congenital/diagnostic imaging, Humans, Image Processing, Computer-Assisted/methods, Thorax
8.
Article in English | MEDLINE | ID: mdl-31172133

ABSTRACT

We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.

9.
Med Phys ; 44(6): 2281-2292, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28276071

ABSTRACT

PURPOSE: An aortic valve stenosis is an abnormal narrowing of the aortic valve (AV). It impedes blood flow and is often quantified by the geometric orifice area of the AV (AVA) and the pressure drop (PD). Using the Bernoulli equation, a relation between the PD and the effective orifice area (EOA) represented by the area of the vena contracta (VC) downstream of the AV can be derived. We investigate the relation between the AVA and the EOA using patient anatomies derived from cardiac computed tomography (CT) angiography images and computational fluid dynamic (CFD) simulations. METHODS: We developed a shape-constrained deformable model for segmenting the AV, the ascending aorta (AA), and the left ventricle (LV) in cardiac CT images. In particular, we designed a structured AV mesh model, trained the model on CT scans, and integrated it with an available model for heart segmentation. The planimetric AVA was determined from the cross-sectional slice with minimum AV opening area. In addition, the AVA was determined as the nonobstructed area along the AV axis by projecting the AV leaflet rims on a plane perpendicular to the AV axis. The flow rate was derived from the LV volume change. Steady-state CFD simulations were performed on the patient anatomies resulting from segmentation. RESULTS: Heart and valve segmentation was used to retrospectively analyze 22 cardiac CT angiography image sequences of patients with noncalcified and (partially) severely calcified tricuspid AVs. Resulting AVAs were in the range of 1-4.5 cm2, and ejection fractions (EFs) ranged between 20% and 75%. AVA values computed by projection were smaller than those computed by planimetry, and both were strongly correlated (R2 = 0.995). EOA values computed via the Bernoulli equation from CFD-based PD results were strongly correlated with both AVA values (R2 = 0.97). EOA values were ∼10% smaller than planimetric AVA values. For EOA values < 2.0 cm2, the EOA was up to ∼15% larger than the projected AVA.
CONCLUSIONS: The presented segmentation algorithm allowed us to construct detailed AV models for 22 patient cases. Because of the crown-like 3D structure of the AV, the planimetric AVA is larger than the projected AVA formed by the free edges of the AV leaflets. The AVA formed by the free edges of the AV leaflets was smaller than the EOA for EOA values < 2.0 cm2. This contradiction with respect to previous studies that reported the EOA to be always smaller or equal to the geometric AVA is explained by the more detailed AV models used within this study.
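
The simplified Bernoulli relation between pressure drop and effective orifice area can be sketched as follows, neglecting the proximal velocity term; the flow and pressure values are illustrative, not patient data from the study:

```python
import math

# Simplified Bernoulli relation: dP = (rho / 2) * (Q / EOA)^2,
# rearranged to EOA = Q / sqrt(2 * dP / rho), with the proximal
# velocity neglected.
RHO_BLOOD = 1060.0                      # blood density, kg/m^3

def eoa_from_pressure_drop(q_m3s, dp_pa):
    """Effective orifice area (m^2) from flow rate and pressure drop."""
    return q_m3s / math.sqrt(2.0 * dp_pa / RHO_BLOOD)

q = 400e-6                              # illustrative peak flow: 400 mL/s
dp = 20.0 * 133.322                     # illustrative PD: 20 mmHg in Pa
eoa_cm2 = eoa_from_pressure_drop(q, dp) * 1e4   # convert m^2 to cm^2
```

With these illustrative inputs the computed EOA lands near 1.8 cm2, i.e. in the range the paper flags as the regime where EOA and projected AVA diverge.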


Subject(s)
Aortic Valve Stenosis/diagnostic imaging, Algorithms, Aortic Valve, Cross-Sectional Studies, Humans, Tomography, X-Ray Computed
10.
World J Cardiol ; 8(10): 606-614, 2016 Oct 26.
Article in English | MEDLINE | ID: mdl-27847562

ABSTRACT

AIM: To investigate the accuracy of a rotational C-arm CT-based 3D heart model to predict an optimal C-arm configuration during transcatheter aortic valve replacement (TAVR). METHODS: Rotational C-arm CT (RCT) under rapid ventricular pacing was performed in 57 consecutive patients with severe aortic stenosis as part of the pre-procedural cardiac catheterization. With prototype software, each RCT data set was segmented using a 3D heart model. From this segmentation, the line-of-perpendicularity curve, which generates a perpendicular view of the aortic annulus according to the right-cusp rule, was obtained. To evaluate the accuracy of a model-based overlay, we compared model- and expert-derived aortic root diameters. RESULTS: For all 57 patients in the RCT cohort, diameter measurements were obtained from two independent operators and compared to the model-based measurements. The inter-observer variability was in the range of 0°-12.96° of angular C-arm displacement for the two independent operators. The model-to-operator agreement was 0°-13.82°. The model-based and expert measurements of aortic root diameters evaluated at the aortic annulus (r = 0.79, P < 0.01), the aortic sinus (r = 0.93, P < 0.01) and the sino-tubular junction (r = 0.92, P < 0.01) were strongly correlated, and Bland-Altman analysis showed good agreement. The inter-observer measurements did not show a significant bias. CONCLUSION: Automatic segmentation of the aortic root using an anatomical model can accurately predict an optimal C-arm configuration, potentially simplifying current clinical workflows before and during TAVR.

11.
Med Image Anal ; 33: 44-49, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27344939

ABSTRACT

Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications.


Subject(s)
Diagnostic Imaging/methods, Algorithms, Diagnostic Imaging/standards, Humans, Machine Learning, Models, Anatomic, Precision Medicine, Reproducibility of Results
12.
IEEE Trans Med Imaging ; 34(7): 1460-1473, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25667349

ABSTRACT

Knowledge of left atrial (LA) anatomy is important for atrial fibrillation ablation guidance, fibrosis quantification and biophysical modelling. Segmentation of the LA from Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images is a complex problem. This manuscript presents a benchmark to evaluate algorithms that address LA segmentation. The datasets, ground truth and evaluation code have been made publicly available through the http://www.cardiacatlas.org website. This manuscript also reports the results of the Left Atrial Segmentation Challenge (LASC) carried out at the STACOM'13 workshop, in conjunction with MICCAI'13. Thirty CT and 30 MRI datasets were provided to participants for segmentation. Each participant segmented the LA including a short part of the LA appendage trunk and proximal sections of the pulmonary veins (PVs). We present results for nine algorithms for CT and eight algorithms for MRI. Results showed that methodologies combining statistical models with region growing approaches were the most appropriate to handle the proposed task. The ground truth and automatic segmentations were standardised to reduce the influence of inconsistently defined regions (e.g., mitral plane, PVs end points, LA appendage). This standardisation framework, which is a contribution of this work, can be used to label and further analyse anatomical regions of the LA. By performing the standardisation directly on the left atrial surface, we can process multiple input data, including meshes exported from different electroanatomical mapping systems.

13.
Article in English | MEDLINE | ID: mdl-23286025

ABSTRACT

Model-based segmentation approaches have been proven to produce very accurate segmentation results while simultaneously providing an anatomic labeling for the segmented structures. However, variations of the anatomy, such as those often encountered in the drainage pattern of the pulmonary veins to the left atrium, cannot be represented by a single model. Automatic model selection extends the model-based segmentation approach to handle significant anatomical variations without user interaction. Using models for the three most common anatomical variations of the left atrium, we propose a method that uses an estimation of the local fit of different models to select the best fitting model automatically. Our approach employs the support vector machine for the automatic model selection. The method was evaluated on 42 very accurate segmentations of MRI scans using three different models. The correct model was chosen in 88.1% of the cases. In a second experiment, reflecting average segmentation results, the model corresponding to the clinical classification was automatically found in 78.0% of the cases.
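
A minimal sketch of SVM-based model selection, assuming synthetic local-fit features rather than the paper's real mesh residuals; the feature layout and class structure below are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

# Each candidate left-atrium model yields a vector of local-fit
# estimates; an SVM trained on such vectors picks the best-fitting
# anatomical variant. The features here are synthetic stand-ins.
rng = np.random.default_rng(0)
n_per_class, n_feat = 30, 6

# Simulate fit features for three variants: the correct variant
# tends to have lower local residuals in "its" feature slots.
X, y = [], []
for variant in range(3):
    feats = rng.normal(1.0, 0.2, size=(n_per_class, n_feat))
    feats[:, variant * 2:variant * 2 + 2] -= 0.6   # better local fit
    X.append(feats)
    y += [variant] * n_per_class
X = np.vstack(X)
y = np.array(y)

clf = SVC(kernel="rbf").fit(X, y)

# A new case whose residual pattern resembles variant 1.
probe = np.full(n_feat, 1.0)
probe[2:4] -= 0.6
selected = int(clf.predict(probe[None, :])[0])
```

The design choice mirrored here is that the classifier never sees images, only per-model fit quality, which keeps selection cheap once each candidate model has been adapted.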


Subject(s)
Heart Atria/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Models, Cardiovascular, Pattern Recognition, Automated/methods, Support Vector Machine, Algorithms, Computer Simulation, Humans, Image Enhancement/methods, Models, Anatomic, Reproducibility of Results, Sensitivity and Specificity
14.
Med Image Comput Comput Assist Interv ; 14(Pt 3): 463-70, 2011.
Article in English | MEDLINE | ID: mdl-22003732

ABSTRACT

With automated image analysis tools rapidly entering clinical practice, the demands regarding reliability, accuracy, and speed are strongly increasing. Systematic testing approaches to determine optimal parameter settings and to select algorithm design variants become essential in this context. We present an approach to optimize organ localization in a complex segmentation chain consisting of organ localization, parametric organ model adaptation, and deformable adaptation. In particular, we consider the Generalized Hough Transformation (GHT) and 3D heart segmentation in Computed Tomography Angiography (CTA) images. We rate the performance of our GHT variant by the initialization error and by computation time. Systematic parameter testing on a compute cluster makes it possible to identify a parametrization with a good tradeoff between reliability and speed. This is achieved with coarse image sampling, a coarse Hough space resolution and a filtering step that we introduced to remove unspecific edges. Finally, we show that optimization of the GHT parametrization results in a segmentation chain with reduced failure rates.
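
The core GHT mechanics can be sketched in a translation-only form; gradient directions are pre-quantized and the shapes are toy data, so this shows only the R-table/voting idea, not the paper's optimized variant:

```python
import numpy as np
from collections import defaultdict

# Translation-only Generalized Hough Transform: an R-table maps each
# template edge point's gradient direction to its offset from the
# reference point; image edges then vote for candidate positions.
def build_r_table(edges, grads, ref):
    table = defaultdict(list)
    for (y, x), g in zip(edges, grads):
        table[g].append((ref[0] - y, ref[1] - x))
    return table

def ght_vote(edges, grads, table, shape):
    acc = np.zeros(shape, dtype=int)
    for (y, x), g in zip(edges, grads):
        for dy, dx in table.get(g, ()):
            cy, cx = y + dy, x + dx
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return acc

# Template: 4 edge points with quantized gradient directions 0..3.
template_edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
template_grads = [0, 1, 2, 3]
table = build_r_table(template_edges, template_grads, ref=(1, 1))

# The same shape shifted by (+5, +6) inside a 10x10 image; edges are
# listed in template order so gradients correspond one-to-one.
image_edges = [(5, 7), (6, 6), (6, 8), (7, 7)]
acc = ght_vote(image_edges, template_grads, table, (10, 10))
peak = np.unravel_index(acc.argmax(), acc.shape)
```

All four edge points vote for the same accumulator cell, so the peak recovers the shifted reference point; coarse sampling and coarse Hough resolution, as tuned in the paper, simply shrink the edge list and the accumulator.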


Subject(s)
Heart/physiology, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated/methods, Algorithms, Angiography/methods, Artificial Intelligence, Humans, Image Processing, Computer-Assisted/methods, Models, Statistical, Myocardium/pathology, Reproducibility of Results, Tomography, X-Ray Computed/methods
15.
Med Image Anal ; 15(6): 863-76, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21737337

ABSTRACT

Recently, model-based methods for the automatic segmentation of the heart chambers have been proposed. An important application of these methods is the characterization of the heart function. Heart models are, however, increasingly used for interventional guidance, making it necessary to also extract the attached great vessels. It is, for instance, important to extract the left atrium and the proximal part of the pulmonary veins to support guidance of ablation procedures for atrial fibrillation treatment. For cardiac resynchronization therapy, a heart model including the coronary sinus is needed. We present a heart model comprising the four heart chambers and the attached great vessels. By assigning individual linear transformations to the heart chambers and to short tubular segments building the great vessels, variable sizes of the heart chambers and bending of the vessels can be described in a consistent way. A configurable algorithmic framework that we call adaptation engine matches the heart model automatically to cardiac CT angiography images in a multi-stage process. First, the heart is detected using a Generalized Hough Transformation. Subsequently, the heart chambers are adapted. This stage uses parametric as well as deformable mesh adaptation techniques. In the final stage, segments of the large vascular structures are successively activated and adapted. To optimize the computational performance, the adaptation engine can vary the mesh resolution and freeze already adapted mesh parts. The data used for validation were independent from the data used for model-building. Ground truth segmentations were generated for 37 CT data sets reconstructed at several cardiac phases from 17 patients. Segmentation errors were assessed for anatomical sub-structures, resulting in a mean surface-to-surface error ranging from 0.50 to 0.82 mm for the heart chambers and from 0.60 to 1.32 mm for the parts of the great vessels visible in the images.


Subject(s)
Aorta, Thoracic/diagnostic imaging, Aorta, Thoracic/radiation effects, Computer Simulation, Heart/diagnostic imaging, Image Processing, Computer-Assisted, Pulmonary Artery/diagnostic imaging, Tomography, X-Ray Computed, Venae Cavae/diagnostic imaging, Coronary Sinus/diagnostic imaging, Humans, Pulmonary Veins/diagnostic imaging
16.
IEEE Trans Med Imaging ; 29(2): 260-72, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20129843

ABSTRACT

Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and the ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique to intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allow for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and a reduced signal-to-noise ratio. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data.
Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data, and 3) approximately 2.1 mm compared with registered intra-procedural EAM point clouds. The overall experiments indicate that LAPV surfaces can be automatically segmented intra-procedurally from 3-D RA data with comparable quality relative to meshes derived from pre-procedural CT/MR.


Subject(s)
Atrial Fibrillation/therapy; Catheter Ablation/methods; Coronary Angiography/methods; Image Processing, Computer-Assisted/methods; Radiographic Image Enhancement/methods; Radiography, Interventional/methods; Heart Atria/diagnostic imaging; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging/methods; Pulmonary Veins/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Tomography, X-Ray Computed/methods
17.
Article in English | MEDLINE | ID: mdl-18982590

ABSTRACT

Pre-procedural imaging with cardiac CT or MR has become popular for guiding complex electrophysiology procedures such as those used for atrial fibrillation ablation therapy. Electroanatomical mapping and ablation within the left atrium and pulmonary veins (LAPV) is facilitated using such data; however, the pre-procedural anatomy can be quite different from that at the time of intervention. Recently, a method for intra-procedural LAPV imaging has been developed based on contrast-enhanced 3-D rotational X-ray angiography (3-D RA). These intra-procedural data now create a compelling need for rapid and automated extraction of the LAPV geometry for catheter guidance. We present a new approach to automatic intra-procedural generation of LAPV surfaces from 3-D RA volumes. Using model-based segmentation, our technique is robust to imaging noise and artifacts typical of 3-D RA imaging, strongly minimizes the user interaction time required for segmentation, and eliminates inter-subject variability. Our findings in 33 patients indicate that intra-procedural LAPV surface models accurately represent the anatomy at the time of intervention and are comparable to pre-procedural models derived from CTA or MRA.


Subject(s)
Angiography/methods; Heart Atria/diagnostic imaging; Imaging, Three-Dimensional/methods; Models, Anatomic; Models, Cardiovascular; Pulmonary Veins/diagnostic imaging; Radiography, Interventional/methods; Algorithms; Computer Simulation; Heart Atria/surgery; Humans; Pulmonary Veins/surgery; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Rotation; Sensitivity and Specificity
18.
IEEE Trans Med Imaging ; 27(9): 1189-201, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18753041

ABSTRACT

Automatic image processing methods are a prerequisite to efficiently analyze the large amount of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees-of-freedom of the allowed deformations. This improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image making use of a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation shows better interphase and interpatient shape variability characterization than commonly used principal component analysis.
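The mesh initialization described above assigns an affine transformation to each anatomical region of the model and fits it to the image. The paper's fitting procedure is not reproduced here; a generic least-squares sketch of one such per-region affine fit, assuming corresponding model and target landmark points are available, is:

```python
import numpy as np

def fit_affine(model_pts, target_pts):
    """Least-squares affine transform (A, t) mapping model_pts onto
    target_pts, given corresponding (N, 3) point arrays."""
    n = model_pts.shape[0]
    # homogeneous coordinates: solve [model | 1] @ M = target for M (4, 3)
    X = np.hstack([model_pts, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(X, target_pts, rcond=None)
    return M[:3].T, M[3]  # linear part A (3, 3), translation t (3,)
```

In a coarse-to-fine scheme like the one in the abstract, such a fit would be run once per anatomical region before the final free-form deformable adaptation refines individual vertices.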


Subject(s)
Algorithms; Artificial Intelligence; Heart/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Tomography, X-Ray Computed/methods; Computer Simulation; Humans; Image Enhancement/methods; Models, Anatomic; Models, Cardiovascular; Reproducibility of Results; Sensitivity and Specificity
19.
Med Image Comput Comput Assist Interv ; 10(Pt 2): 402-10, 2007.
Article in English | MEDLINE | ID: mdl-18044594

ABSTRACT

We present a fully automatic segmentation algorithm for the whole heart (four chambers, left ventricular myocardium and trunks of the aorta, the pulmonary artery and the pulmonary veins) in cardiac MR image volumes with nearly isotropic voxel resolution, based on shape-constrained deformable models. After automatic model initialization and reorientation to the cardiac axes, we apply a multi-stage adaptation scheme with progressively increasing degrees of freedom. Particular attention is paid to the calibration of the MR image intensities. Detailed evaluation results for the various anatomical heart regions are presented on a database of 42 patients. On calibrated images, we obtain an average segmentation error of 0.76 mm.
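MR intensities are not standardized across scanners, which is why the abstract stresses intensity calibration before model adaptation. The paper's calibration method is not detailed here; a common simple form is a two-point linear normalization that maps robust low/high percentiles onto fixed reference values, sketched below (all parameter choices are illustrative):

```python
import numpy as np

def calibrate_intensities(img, ref_low=0.0, ref_high=1.0, p=(2, 98)):
    """Linearly map the p-th low/high intensity percentiles of img onto
    ref_low/ref_high, a simple robust MR intensity normalization."""
    lo, hi = np.percentile(img, p)
    scaled = (img - lo) / max(hi - lo, 1e-12)  # guard against flat images
    return ref_low + scaled * (ref_high - ref_low)
```

Percentiles rather than the raw min/max make the mapping robust to outlier voxels; more elaborate landmark-based schemes match several histogram percentiles piecewise.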


Subject(s)
Artificial Intelligence; Heart/anatomy &amp; histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity
20.
IEEE Trans Inf Technol Biomed ; 10(2): 385-94, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16617627

ABSTRACT

Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
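An axis-aligned MIP itself is simple: each output pixel keeps the maximum voxel value along the projection axis. The memory-access point of the abstract shows up even in this sketch, which is not taken from the paper: iterating slice-by-slice keeps each pass sequential over contiguous memory instead of striding through the volume per ray.

```python
import numpy as np

def mip(volume, axis=0):
    """Axis-aligned maximum intensity projection of a 3-D volume.
    Processes one slice at a time so each pass streams over a
    contiguous 2-D array (for C-ordered input and axis=0)."""
    proj = None
    for sl in np.moveaxis(volume, axis, 0):
        # running elementwise maximum across slices
        proj = sl.copy() if proj is None else np.maximum(proj, sl)
    return proj
```

Arbitrary (non-axis-aligned) view directions require ray casting with resampling, which is where the paper's blocking and cache-level optimizations matter most.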


Subject(s)
Computer Storage Devices; Database Management Systems/instrumentation; Image Interpretation, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Radiology Information Systems; Signal Processing, Computer-Assisted/instrumentation; User-Computer Interface; Computer Systems; Diagnostic Imaging/instrumentation; Diagnostic Imaging/methods; Equipment Design; Equipment Failure Analysis; Humans; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods