ABSTRACT
OBJECTIVES: To develop a prototype algorithm for automatic spine segmentation in MDCT images and use it to automatically detect osteoporotic vertebral fractures. METHODS: Cross-sectional routine thoracic and abdominal MDCT images of 71 patients (including 8 males and 9 females with 25 osteoporotic vertebral fractures) and longitudinal MDCT images of 9 patients with 18 incidental fractures in the follow-up MDCT were retrospectively selected. The spine segmentation algorithm localised and identified the vertebrae T5-L5. Each vertebra was automatically segmented using corresponding vertebra surface shape models that were adapted to the original images. The anterior, middle, and posterior heights of each vertebra were automatically determined; the anterior-posterior ratio (APR) and middle-posterior ratio (MPR) were computed. As the gold standard, radiologists graded vertebral fractures from T5 to L5 in consensus according to the Genant classification. RESULTS: Using ROC analysis to differentiate vertebrae without versus with prevalent fracture, AUC values of 0.84 and 0.83 were obtained for APR and MPR, respectively (p < 0.001). Longitudinal changes in APR and MPR were significantly different between vertebrae with versus without incidental fracture (ΔAPR: -8.5 % ± 8.6 % versus -1.6 % ± 4.2 %, p = 0.002; ΔMPR: -11.4 % ± 7.7 % versus -1.2 % ± 1.6 %, p < 0.001). CONCLUSIONS: This prototype algorithm may support radiologists in reporting currently underdiagnosed osteoporotic vertebral fractures so that appropriate therapy can be initiated. KEY POINTS: • This spine segmentation algorithm automatically localised, identified, and segmented the vertebrae in MDCT images. • Osteoporotic vertebral fractures could be automatically detected using this prototype algorithm. • The prototype algorithm helps radiologists to report underdiagnosed osteoporotic vertebral fractures.
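The height-ratio computation described above is simple to sketch. The following snippet is illustrative only: the Genant-style height-loss cut-offs used here are the conventional ones and are an assumption, not the study's implementation.

```python
def height_ratios(anterior, middle, posterior):
    """Anterior-posterior (APR) and middle-posterior (MPR) height
    ratios for a single vertebra, from automatically measured heights."""
    return anterior / posterior, middle / posterior

def genant_grade(ratio):
    """Map the smaller of APR/MPR to a Genant-style grade using the
    conventional height-loss thresholds (20/25/40 % loss); these
    cut-offs are assumptions for illustration."""
    loss = 1.0 - ratio
    if loss >= 0.40:
        return 3  # severe
    if loss >= 0.25:
        return 2  # moderate
    if loss >= 0.20:
        return 1  # mild
    return 0      # no fracture

# Example: a wedge-shaped vertebra with reduced anterior height (mm)
apr, mpr = height_ratios(anterior=18.0, middle=24.0, posterior=25.0)
print(apr, genant_grade(min(apr, mpr)))  # 28 % anterior loss -> grade 2
```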
Subject(s)
Lumbar Vertebrae/injuries, Multidetector Computed Tomography, Osteoporotic Fractures/diagnostic imaging, Spinal Fractures/diagnostic imaging, Thoracic Vertebrae/injuries, Aged, Algorithms, Cross-Sectional Studies, Female, Humans, Longitudinal Studies, Male, Middle Aged, Prevalence, ROC Curve, Retrospective Studies
ABSTRACT
BACKGROUND: In the field of medical imaging, high-resolution (HR) magnetic resonance imaging (MRI) is essential for accurate disease diagnosis and analysis. However, HR imaging is prone to artifacts and is not universally available. Consequently, low-resolution (LR) MRI images are typically acquired. Deep learning (DL)-based super-resolution (SR) techniques can transform LR images into HR quality. However, these techniques require paired HR-LR data for training the SR networks. OBJECTIVE: This research aims to investigate the potential of simulated brain MRI data to train DL-based SR networks. METHODS: We simulated a large set of anatomically diverse, voxel-aligned, and artifact-free brain MRI data at different resolutions. We utilized this simulated data to train four distinct DL-based SR networks and augment their training. The trained networks were then evaluated using real data from various sources. RESULTS: With our trained networks, we produced 0.7 mm SR images from standard 1 mm resolution multi-source T1w brain MRI. Our experimental results demonstrate that the trained networks significantly enhance the sharpness of LR input MR images. For single-source images, the performance of networks trained solely on simulated data is slightly inferior to those trained solely on real data, with an average structural similarity index (SSIM) difference of 0.025. However, networks augmented with simulated data outperform those trained on single-source real data when evaluated across datasets from multiple sources. CONCLUSION: Paired HR-LR simulated brain MRI data is suitable for training and augmenting diverse brain MRI SR networks. Augmenting the training data with simulated data can enhance the generalizability of the SR networks across real datasets from multiple sources.
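The SSIM comparison reported above can be illustrated with a simplified, single-window variant of the index. Library implementations (e.g. scikit-image's `structural_similarity`) compute it over a sliding window; this global form is a sketch for intuition only.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window structural similarity between two
    images; library versions apply the same formula per local window."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

rng = np.random.default_rng(0)
hr = rng.random((64, 64))
degraded = 0.5 * (hr + np.roll(hr, 1, axis=0))  # crude degradation
print(global_ssim(hr, hr))        # identical images -> 1.0
print(global_ssim(hr, degraded))  # degraded copy -> below 1.0
```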
ABSTRACT
BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism. METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset with Dice scores of 0.818/0.832/0.828. On OASIS data, our model exhibits performance close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is the initial step towards achieving truly physics-based MRI image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution.
Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
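Dice scores such as those reported above measure the agreement between a predicted segmentation and the ground truth as twice the overlap divided by the sum of the mask sizes. A minimal sketch:

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: two partially overlapping 4x4 square masks
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[3:7, 3:7] = True
print(dice(a, b))  # 2*9 / (16+16) = 0.5625
```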
Subject(s)
Deep Learning, Humans, Brain/diagnostic imaging, Brain/anatomy & histology, Magnetic Resonance Imaging/methods, Algorithms, Neuroimaging/methods, Image Processing, Computer-Assisted/methods
ABSTRACT
Accurate brain tumor segmentation is critical for diagnosis and treatment planning, whereby multi-modal magnetic resonance imaging (MRI) is typically used for analysis. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
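The Fourier domain adaptation step can be sketched as swapping the low-frequency amplitude spectrum of a synthetic image with that of a real one while keeping the synthetic phase. The band-size parameter `beta` and the exact windowing below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fourier_domain_adaptation(src, tgt, beta=0.05):
    """Replace the central (low-frequency) amplitude band of `src`
    (synthetic) with that of `tgt` (real), keeping the phase of `src`.
    `beta` controls the half-width of the swapped band."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(tgt))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    h, w = src.shape
    b = int(min(h, w) * beta)
    cy, cx = h // 2, w // 2
    amp_s[cy - b:cy + b + 1, cx - b:cx + b + 1] = \
        amp_t[cy - b:cy + b + 1, cx - b:cx + b + 1]

    adapted = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * phase_s)))
    return np.real(adapted)
```

When source and target are the same image, the swap is a no-op, which makes the transform easy to sanity-check.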
Subject(s)
Brain Neoplasms, Glioma, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Brain Neoplasms/diagnostic imaging, Algorithms, Magnetic Resonance Imaging/methods
ABSTRACT
PURPOSE: We developed and tested a neural network for automated detection and stability analysis of vertebral body fractures on computed tomography (CT). MATERIALS AND METHODS: 257 patients who underwent CT were included in this Institutional Review Board (IRB) approved study. 463 fractured and 1883 non-fractured vertebral bodies were included, 190 of the fractures being unstable. Two readers identified vertebral body fractures and assessed their stability. A combination of a Hierarchical Convolutional Neural Network (hNet) and a fracture Classification Network (fNet) was used to build a neural network for the automated detection and stability analysis of vertebral body fractures on CT. Two final test settings were chosen: one with vertebral body levels C1/2 included and one where they were excluded. RESULTS: The mean age of the patients was 68 ± 14 years. 140 patients were female. The network showed a slightly higher diagnostic performance when excluding C1/2. Accordingly, the network was able to distinguish fractured and non-fractured vertebral bodies with a sensitivity of 75.8 % and a specificity of 80.3 %. Additionally, the network determined the stability of the vertebral bodies with a sensitivity of 88.4 % and a specificity of 80.3 %. The AUC was 87 % and 91 % for fracture detection and stability analysis, respectively. The sensitivity of our network in indicating the presence of at least one fracture / one unstable fracture within the whole spine achieved values of 78.7 % and 97.2 %, respectively, when excluding C1/2. CONCLUSION: The developed neural network can automatically detect vertebral body fractures and evaluate their stability concurrently with a high diagnostic performance.
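The reported per-vertebra operating point follows directly from confusion counts. The counts below are hypothetical, chosen only to be consistent with the 463 fractured and 1883 non-fractured vertebral bodies and the reported sensitivity/specificity; they are not the study's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Per-vertebra detection metrics from confusion counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the abstract's totals
sens, spec = sensitivity_specificity(tp=351, fn=112, tn=1512, fp=371)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```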
Subject(s)
Spinal Fractures, Vertebral Body, Humans, Female, Middle Aged, Aged, Aged, 80 and over, Male, Retrospective Studies, Spine, Spinal Fractures/diagnostic imaging, Tomography, X-Ray Computed/methods, Artificial Intelligence
ABSTRACT
Deep learning-based segmentation methods provide an effective and automated way for assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN, to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps belonging to both the heart and the surrounding tissue, reinforcing the synthesis of semantically consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, whereby the obtained results demonstrate the potential of the proposed method in tackling the presence of the domain shift in medical data.
Finally, we thoroughly analyze the quality of synthetic data and its ability to replace real MR images during training, as well as provide an insight into important aspects of utilizing synthetic images for segmentation.
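The Hausdorff distance reported alongside Dice above is the largest distance from any point on one contour to the closest point on the other, taken symmetrically. A brute-force sketch over point sets:

```python
import numpy as np

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets, e.g. the
    surface voxels of a predicted and a reference segmentation."""
    # Pairwise distances between every point in A and every point in B
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(a, b))  # farthest mismatch: (4,0) is 3.0 from (1,0)
```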
Subject(s)
Deep Learning, Humans, Magnetic Resonance Imaging, Heart/diagnostic imaging, Tomography, X-Ray Computed, Image Processing, Computer-Assisted/methods
ABSTRACT
Cardiac magnetic resonance (CMR) image segmentation is an integral step in the analysis of cardiac function and diagnosis of heart related diseases. While recent deep learning-based approaches in automatic segmentation have shown great promise to alleviate the need for manual segmentation, most of these are not applicable to realistic clinical scenarios. This is largely due to training on mainly homogeneous datasets, without variation in acquisition, which typically occurs in multi-vendor and multi-site settings, as well as pathological data. Such approaches frequently exhibit a degradation in prediction performance, particularly on outlier cases commonly associated with difficult pathologies, artifacts and extensive changes in tissue shape and appearance. In this work, we present a model aimed at segmenting all three cardiac structures in a multi-center, multi-disease and multi-view scenario. We propose a pipeline, addressing different challenges with segmentation of such heterogeneous data, consisting of heart region detection, augmentation through image synthesis and a late-fusion segmentation approach. Extensive experiments and analysis demonstrate the ability of the proposed approach to tackle the presence of outlier cases during both training and testing, allowing for better adaptation to unseen and difficult examples. Overall, we show that the effective reduction of segmentation failures on outlier cases has a positive impact on not only the average segmentation performance, but also on the estimation of clinical parameters, leading to a better consistency in derived metrics.
Subject(s)
Algorithms, Heart Diseases, Humans, Magnetic Resonance Imaging/methods, Heart/diagnostic imaging, Radiography, Image Processing, Computer-Assisted/methods
ABSTRACT
One of the limiting factors for the development and adoption of novel deep-learning (DL) based medical image analysis methods is the scarcity of labeled medical images. Medical image simulation and synthesis can provide solutions by generating ample training data with corresponding ground truth labels. Despite recent advances, generated images demonstrate limited realism and diversity. In this work, we develop a flexible framework for simulating cardiac magnetic resonance (MR) images with variable anatomical and imaging characteristics for the purpose of creating a diversified virtual population. We advance previous works on both cardiac MR image simulation and anatomical modeling to increase the realism in terms of both image appearance and underlying anatomy. To diversify the generated images, we define parameters 1) to alter the anatomy, 2) to assign MR tissue properties to various tissue types, and 3) to manipulate the image contrast via acquisition parameters. The proposed framework is optimized to generate a substantial number of cardiac MR images with ground truth labels suitable for downstream supervised tasks. A database of virtual subjects is simulated and its usefulness for aiding a DL segmentation method is evaluated. Our experiments show that training completely with simulated images performs comparably to a model trained with real images for heart cavity segmentation in mid-ventricular slices. Moreover, such data can be used in addition to classical augmentation for boosting the performance when training data is limited, particularly by increasing the contrast and anatomical variation, leading to better regularization and generalization. The database is publicly available at https://osf.io/bkzhm/ and the simulation code will be available at https://github.com/sinaamirrajab/CMRI.
Subject(s)
Heart, Magnetic Resonance Imaging, Humans, Heart/diagnostic imaging, Computer Simulation
ABSTRACT
Objective. In the context of primary in-hospital trauma management, timely reading of computed tomography (CT) images is critical. However, assessment of the spine is time consuming, fractures can be very subtle, and the potential for under-diagnosis or delayed diagnosis is relevant. Artificial intelligence is increasingly employed to assist radiologists with the detection of spinal fractures and prioritization of cases. Currently, algorithms focusing on the cervical spine are commercially available. A common approach is vertebra-wise classification. Instead of a classification task, we formulate fracture detection as a segmentation task aiming to find and display all individual fracture locations presented in the image. Approach. Based on 195 CT examinations, 454 cervical spine fractures were identified and annotated by radiologists at a tertiary trauma center. For detection, we trained a U-Net via four-fold cross-validation to segment spine fractures and the spine via a multi-task loss. We further compared the advantages of two image reformation approaches, straightened curved planar reformatted (CPR) around the spine and spinal canal aligned volumes of interest (VOI), to achieve a unified vertebral alignment, in comparison to processing the Cartesian data directly. Main results. Of the three data versions (Cartesian, reformatted, VOI), the VOI approach showed the best detection rate and a reduced computation time. The proposed algorithm was able to detect 87.2% of cervical spine fractures at an average of 3.5 false positives per case. Evaluation of the method on a public spine dataset resulted in 0.9 false positive detections per cervical spine case. Significance. The display of individual fracture locations, as provided with high sensitivity by the proposed voxel-classification-based fracture detection, has the potential to support the trauma CT reading workflow by reducing missed findings.
Subject(s)
Spinal Fractures, Humans, Spinal Fractures/diagnostic imaging, Artificial Intelligence, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Cervical Vertebrae/diagnostic imaging, Retrospective Studies
ABSTRACT
Synthesis of a large set of high-quality medical images with variability in anatomical representation and image appearance has the potential to provide solutions for tackling the scarcity of properly annotated data in medical image analysis research. In this paper, we propose a novel framework consisting of image segmentation and synthesis based on mask-conditional GANs for generating high-fidelity and diverse Cardiac Magnetic Resonance (CMR) images. The framework consists of two modules: i) a segmentation module trained using a physics-based simulated database of CMR images to provide multi-tissue labels on real CMR images, and ii) a synthesis module trained using pairs of real CMR images and corresponding multi-tissue labels, to translate input segmentation masks to realistic-looking cardiac images. The anatomy of synthesized images is based on labels, whereas the appearance is learned from the training images. We investigate the effects of the number of tissue labels, quantity of training data, and multi-vendor data on the quality of the synthesized images. Furthermore, we evaluate the effectiveness and usability of the synthetic data for a downstream task of training a deep-learning model for cardiac cavity segmentation in the scenarios of data replacement and augmentation. The results of the replacement study indicate that segmentation models trained with only synthetic data can achieve comparable performance to the baseline model trained with real data, indicating that the synthetic data captures the essential characteristics of its real counterpart. Furthermore, we demonstrate that augmenting real with synthetic data during training can significantly improve both the Dice score (maximum increase of 4%) and Hausdorff Distance (maximum reduction of 40%) for cavity segmentation, suggesting a good potential to aid in tackling medical data scarcity.
Subject(s)
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Databases, Factual, Heart/diagnostic imaging, Image Processing, Computer-Assisted/methods
ABSTRACT
PURPOSE: A novel pulmonary ventilation imaging technique based on four-dimensional (4D) CT has advantages over existing techniques and could be used for functional avoidance in radiotherapy. There are various deformable image registration (DIR) algorithms and two classes of ventilation metric that can be used for 4D-CT ventilation imaging, each yielding different images. The purpose of this study was to quantify the variability of the 4D-CT ventilation to DIR algorithms and metrics. METHODS: 4D-CT ventilation images were created for 12 patients using different combinations of two DIR algorithms, volumetric (DIR(vol)) and surface-based (DIR(sur)), yielding two displacement vector fields (DVFs) per patient (DVF(vol) and DVF(sur)), and two metrics, Hounsfield unit (HU) change (V(HU)) and Jacobian determinant of deformation (V(Jac)), yielding four ventilation image sets (V(HU)(vol), V(HU)(sur), V(Jac)(vol), and V(Jac)(sur)). First, DVF(vol) and DVF(sur) were compared visually and quantitatively using the length of the 3D displacement vector difference. Second, the four ventilation images were compared based on voxel-based Spearman's rank correlation coefficients and on coefficients of variation as a measure of spatial heterogeneity. V(HU)(vol) was chosen as the reference for the comparison. RESULTS: The mean length of the 3D vector difference between DVF(vol) and DVF(sur) was 2.0 +/- 1.1 mm on average, smaller than the voxel dimension of the image set. Visually, the reference V(HU)(vol) demonstrated regional distributions similar to V(HU)(sur); the reference, however, was markedly different from V(Jac)(vol) and V(Jac)(sur). The correlation coefficients of V(HU)(vol) with V(HU)(sur), V(Jac)(vol), and V(Jac)(sur) were 0.77 +/- 0.06, 0.25 +/- 0.06, and 0.15 +/- 0.07, respectively, indicating that the metric introduced larger variations in the ventilation images than the DIR algorithm.
The spatial heterogeneities for V(HU)(vol), V(HU)(sur), V(Jac)(vol), and V(Jac)(sur) were 1.8 +/- 1.6, 1.8 +/- 1.5 (p = 0.85), 0.6 +/- 0.2 (p = 0.02), and 0.7 +/- 0.2 (p = 0.03), respectively, also demonstrating that the metric introduced larger variations. CONCLUSIONS: 4D-CT pulmonary ventilation images vary widely with DIR algorithms and metrics. Careful physiologic validation to determine the appropriate DIR algorithm and metric is needed prior to its applications.
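The two comparison measures used above, voxel-based Spearman rank correlation and the coefficient of variation as a heterogeneity measure, can be sketched as follows (rank correlation computed as the Pearson correlation of ranks, ignoring ties, which is adequate for continuous ventilation values):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of ranks
    (no tie handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

def coeff_of_variation(v):
    """Spatial heterogeneity of a ventilation image: std / mean."""
    return v.std() / v.mean()

x = np.array([1.0, 2.0, 3.0, 4.0])
print(spearman(x, x ** 3))                 # monotone relation -> 1.0
print(coeff_of_variation(np.array([2.0, 2.0, 2.0])))  # uniform -> 0.0
```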
Subject(s)
Algorithms, Four-Dimensional Computed Tomography/methods, Image Processing, Computer-Assisted/methods, Pulmonary Ventilation, Aged, Aged, 80 and over, Cohort Studies, Female, Humans, Male, Middle Aged, Retrospective Studies
ABSTRACT
OBJECTIVE: The goal of this study was to develop and validate a system for automatic segmentation of the spine, pedicle identification, and screw path suggestion for use with an intraoperative 3D surgical navigation system. METHODS: Cone-beam CT (CBCT) images of the spines of 21 cadavers were obtained. An automated model-based approach was used for segmentation. Using machine learning methodology, the algorithm was trained and validated on the image data sets. For measuring accuracy, surface area errors of the automatic segmentation were compared to the manually outlined reference surface on CBCT. To further test both technical and clinical accuracy, the algorithm was applied to a set of 20 clinical cases. The authors evaluated the system's accuracy in pedicle identification by measuring the distance between the user-defined midpoint of each pedicle and the automatically segmented midpoint. Finally, 2 independent surgeons performed a qualitative evaluation of the segmentation to judge whether it was adequate to guide surgical navigation and whether it would have resulted in a clinically acceptable pedicle screw placement. RESULTS: The clinically relevant pedicle identification and automatic pedicle screw planning accuracy was 86.1%. By excluding patients with severe spinal deformities (i.e., Cobb angle > 75° and severe spinal degeneration) and previous surgeries, a success rate of 95.4% was achieved. The mean time (± SD) for automatic segmentation and screw planning in 5 vertebrae was 11 ± 4 seconds. CONCLUSIONS: The technology investigated has the potential to aid surgeons in navigational planning and improve surgical navigation workflow while maintaining patient safety.
Subject(s)
Cone-Beam Computed Tomography, Imaging, Three-Dimensional/methods, Pedicle Screws, Spine/diagnostic imaging, Spine/surgery, Surgery, Computer-Assisted/methods, Cone-Beam Computed Tomography/methods, Humans, Machine Learning, Pattern Recognition, Automated/methods, Retrospective Studies, Spinal Curvatures/diagnostic imaging, Spinal Curvatures/surgery
ABSTRACT
Proton-density fat fraction (PDFF) of the paraspinal muscles, derived from chemical shift encoding-based water-fat magnetic resonance imaging, has emerged as an important surrogate biomarker in individuals with intervertebral disc disease, osteoporosis, sarcopenia and neuromuscular disorders. However, quantification of paraspinal muscle PDFF is currently limited in clinical routine due to the required time-consuming manual segmentation procedure. The present study aimed to develop an automatic segmentation algorithm of the lumbar paraspinal muscles based on water-fat sequences and compare the performance of this algorithm to ground truth data based on manual segmentation. The algorithm comprised an average shape model, a dual feature model associating each surface point with a fat and a water image appearance feature, and a detection model. Right and left psoas, quadratus lumborum and erector spinae muscles were automatically segmented. Dice coefficients averaged over all six muscle compartments amounted to 0.83 (range 0.75-0.90).
ABSTRACT
Local variations in bone loss may be of great importance to individually predict osteoporotic fractures but are neglected by current densitometry techniques. The purpose of this study was to evaluate regional variations of normal bone loss at the spine among different age groups using voxel-based morphometry. Non-contrast MDCT scans of 16 patients under the age of 40 (mean age 26 years) without spinal pathology were identified as a reference cohort, where each thoracolumbar vertebra was assessed individually. For comparison, 38 patients >40 years were grouped by decades in 4 cohorts of 10 patients each, except the youngest, which included only 8 patients. All spines were automatically detected, segmented and non-rigidly registered for spatially normalized vertebral bodies. Afterwards, statistical and T-score mapping was performed to highlight local density differences in comparison to the reference cohort. The calculated statistical maps of significantly affected density regions (ADR) started to highlight small local changes of volumetric bone mineral density (vBMD) distribution within the L5 vertebra (ADR: 7.9%) in the fifties cohort. Regions near the endplates were most affected. The effect dramatically increased in the sixties cohort, where bone loss was most prominent from T12 to L2. In the seventies cohort, around 50% of voxels in T10 to L5 showed significantly decreased vBMD. In conclusion, ADR and local T-score maps of the spine showed age-related local variations in a healthy population, corresponding to known areas of fracture origination and increased fracture incidence. It thus might provide a powerful tool in diagnosis of osteoporosis.
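T-score mapping as used above normalizes each voxel's vBMD against the young reference cohort. The sketch below is illustrative only: the -2.5 cut-off for flagging affected voxels is borrowed from densitometry and is an assumption, not the study's voxel-wise statistical test, and the density values are hypothetical.

```python
import numpy as np

def tscore_map(vbmd, ref_mean, ref_sd):
    """Voxel-wise T-score: deviation of a cohort's vBMD from the young
    reference cohort, in units of the reference standard deviation."""
    return (vbmd - ref_mean) / ref_sd

def affected_density_fraction(t_map, threshold=-2.5):
    """Fraction of voxels with significantly decreased vBMD
    (ADR-like measure; the cut-off is an assumption)."""
    return np.mean(t_map < threshold)

# Hypothetical 4x4 vBMD maps in mg/cm^3
ref_mean = np.full((4, 4), 200.0)
ref_sd = np.full((4, 4), 20.0)
older = np.full((4, 4), 180.0)
older[:2] = 140.0                      # focal loss near the endplates
t = tscore_map(older, ref_mean, ref_sd)
print(affected_density_fraction(t))    # half the voxels flagged -> 0.5
```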
Subject(s)
Imaging, Three-Dimensional/methods, Osteoporosis/diagnostic imaging, Spine/diagnostic imaging, Tomography, X-Ray Computed/methods, Adult, Aged, Female, Humans, Image Interpretation, Computer-Assisted, Male, Middle Aged, Retrospective Studies
ABSTRACT
Domain knowledge about the geometrical properties of cardiac structures is an important ingredient for the segmentation of these structures in medical images or for the simulation of cardiac physiology. So far, a strong focus was put on the left ventricle due to its importance for the general pumping performance of the heart and related functional indices. However, other cardiac structures are of similar importance, e.g., the coronary arteries with respect to diagnosis and treatment of arteriosclerosis or the left atrium with respect to the treatment of atrial fibrillation. In this paper we describe the generation of a geometric cardiac model including the four cardiac chambers and the trunks of the connected vasculature, as well as the coronary arteries and a set of cardiac landmarks. A mean geometric model for the end-diastolic heart has been built based on 27 cardiac CT datasets and has been evaluated with respect to its capability to estimate the position of cardiac structures. Allowing a similarity transformation to adapt the model to image data, cardiac surface positions can be predicted with an accuracy of below 5 mm.
Subject(s)
Heart/anatomy & histology, Heart/diagnostic imaging, Models, Anatomic, Models, Cardiovascular, Radiographic Image Interpretation, Computer-Assisted/methods, Computer Simulation, Humans, Imaging, Three-Dimensional
ABSTRACT
PURPOSE: Many medical imaging tasks require the detection and localization of anatomical landmarks, for example for the initialization of model-based segmentation or to detect anatomical regions present in an image. A large number of landmark and object localization methods have been described in the literature. The detection of single landmarks may be insufficient to achieve robust localization across a variety of imaging settings and subjects. Furthermore, methods like the generalized Hough transform yield the most likely location of an object, but not an indication whether or not the landmark was actually present in the image. METHODS: For these reasons, we developed a simple and computationally efficient method combining localization results from multiple landmarks to achieve robust localization and to compute a localization confidence measure. For each anatomical region, we train a constellation model indicating the mean relative locations and location variability of a set of landmarks. This model is registered to the landmarks detected in a test image via point-based registration, using closed-form solutions. Three different outlier suppression schemes are compared, two using iterative re-weighting based on the residual landmark registration errors and the third being a variant of RANSAC. The mean weighted residual registration error serves as a confidence measure to distinguish true from false localization results. The method is optimized and evaluated on synthetic data, evaluating both the localization accuracy and the ability to classify good from bad registration results based on the residual registration error. RESULTS: Two application examples are presented: the identification of the imaged anatomical region in trauma CT scans and the initialization of model-based segmentation for C-arm CT scans with different target regions. The identification of the target region with the presented method was correct in 96 % of the cases.
CONCLUSION: The presented method is a simple solution for combining multiple landmark localization results. With appropriate parameters, outlier suppression clearly improves the localization performance over model registration without outlier suppression. The optimum choice of method and parameters depends on the expected level of noise and outliers in the application at hand, as well as on whether the focus is on localization, classification, or both. The method detects and localizes anatomical fields of view in medical images and is well suited to support a wide range of applications, including image content identification, anatomical navigation and visualization, and initialization of the pose of organ shape models.
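As a concrete illustration of the approach described above, the following is a minimal sketch of point-based constellation registration with iterative residual re-weighting for outlier suppression. It simplifies the paper's method to a translation-only closed-form fit with a Gaussian re-weighting kernel; the function name, the kernel, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_constellation(model_pts, detected_pts, n_iter=5, sigma=10.0):
    """Register a landmark constellation model to detected landmarks.

    Translation-only closed-form fit with iterative residual
    re-weighting (a simplification of the paper's point-based
    registration and outlier suppression schemes).
    """
    model_pts = np.asarray(model_pts, float)
    detected_pts = np.asarray(detected_pts, float)
    w = np.ones(len(model_pts))
    for _ in range(n_iter):
        # Closed-form weighted translation estimate.
        t = np.average(detected_pts - model_pts, axis=0, weights=w)
        residuals = np.linalg.norm(model_pts + t - detected_pts, axis=1)
        # Down-weight landmarks with large residuals (outliers).
        w = np.exp(-(residuals / sigma) ** 2)
    # Mean weighted residual error: the confidence measure used to
    # distinguish true from false localization results.
    confidence = np.average(residuals, weights=w)
    return t, w, confidence
```

With one grossly displaced landmark among five, the re-weighting drives that landmark's weight toward zero, so the recovered translation and the confidence measure are dominated by the consistent landmarks.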
Subject(s)
Algorithms , Anatomic Landmarks/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Wounds and Injuries/diagnostic imaging , Humans
ABSTRACT
Today's medical imaging systems produce a huge number of images containing a wealth of information. However, this information is hidden in the data, and image analysis algorithms are needed to extract it, to make it readily available for medical decisions, and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean that many algorithms and ideas are now available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability, and speed. At the same time, new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted to a specific clinical task. Secondly, efficient approaches for ground-truth generation are needed to match the increasing demands of validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the ongoing need for more accurate, more reliable, and faster algorithms, and for dedicated algorithmic solutions for specific applications.
Subject(s)
Diagnostic Imaging/methods , Algorithms , Diagnostic Imaging/standards , Humans , Machine Learning , Models, Anatomic , Precision Medicine , Reproducibility of Results
ABSTRACT
Prone-to-supine breast image registration has potential application in surgical and radiotherapy planning, image-guided interventions, and multi-modal cancer diagnosis, staging, and therapy response prediction. However, registration of three-dimensional breast images acquired in different patient positions is a challenging problem, due to the large deformations of the soft breast tissue caused by the change in gravity loading. We present a symmetric, biomechanical-simulation-based registration framework which aligns the images in a central, virtually unloaded configuration. The breast tissue is modelled as a neo-Hookean material, and gravity is considered the main source of deformation in the original images. In addition to gravity, our framework successively applies image-derived forces directly within the unloading simulation in place of a subsequent image registration step. This results in a biomechanically constrained deformation. Using a finite difference scheme avoids an explicit meshing step and enables simulations to be performed directly in the image space. The explicit time integration scheme allows the motion at the interface between chest and breast to be constrained along the chest wall. The feasibility and accuracy of the approach were assessed by measuring the target registration error (TRE) using a numerical phantom with known ground-truth deformations, nine clinical prone MRI and supine CT image pairs, one clinical prone-supine CT image pair, and four prone-supine MRI image pairs. The registration reduced the mean TRE for the numerical phantom experiment from an initial 19.3 mm to 0.9 mm, and the combined mean TRE for all fourteen clinical data sets from 69.7 mm to 5.6 mm.
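The target registration error used for evaluation above is simply the Euclidean distance between registered landmark positions and their ground-truth targets, averaged over all landmarks. A minimal sketch (the helper name and the row-per-landmark array layout are assumptions):

```python
import numpy as np

def mean_tre(warped_landmarks, target_landmarks):
    """Mean target registration error (TRE): average Euclidean distance
    between registered landmark positions and their ground-truth
    targets, one landmark per row."""
    warped = np.asarray(warped_landmarks, float)
    target = np.asarray(target_landmarks, float)
    return np.linalg.norm(warped - target, axis=1).mean()
```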
Subject(s)
Breast , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Mammography , Tomography, X-Ray Computed , Female , Humans , Prone Position , Supine Position
ABSTRACT
PURPOSE: 4-dimensional computed tomography (4D-CT)-based pulmonary ventilation imaging is an emerging functional imaging modality. The purpose of this study was to investigate the physiological significance of 4D-CT ventilation imaging by comparison with pulmonary function test (PFT) measurements and single-photon emission CT (SPECT) ventilation images, which are the clinical references for global and regional lung function, respectively. METHODS AND MATERIALS: In an institutional review board-approved prospective clinical trial, 4D-CT imaging and PFT and/or SPECT ventilation imaging were performed in thoracic cancer patients. Regional ventilation (V4DCT) was calculated by deformable image registration of 4D-CT images and quantitative analysis of regional volume change. V4DCT defect parameters were compared with the PFT measurements: forced expiratory volume in 1 second (FEV1; % predicted) and the FEV1/forced vital capacity ratio (FEV1/FVC; %). V4DCT was also compared with SPECT ventilation (VSPECT) to (1) test whether V4DCT in VSPECT defect regions is significantly lower than in nondefect regions, using the 2-tailed t test; (2) quantify the spatial overlap between V4DCT and VSPECT defect regions with the Dice similarity coefficient (DSC); and (3) test for ventral-to-dorsal gradients, using the 2-tailed t test. RESULTS: Of 21 patients enrolled in the study, the 18 patients for whom 4D-CT and either PFT or SPECT were acquired were included in the analysis. V4DCT defect parameters were found to have significant, moderate correlations with PFT measurements. For example, V4DCT(HU) defect volume increased significantly with decreasing FEV1/FVC (R = -0.65, P < .01). V4DCT in VSPECT defect regions was significantly lower than in nondefect regions (mean V4DCT(HU) 0.049 vs 0.076, P < .01). The average DSCs for the spatial overlap with SPECT ventilation defect regions were only moderate (V4DCT(HU): 0.39 ± 0.11). Furthermore, ventral-to-dorsal gradients of V4DCT were strong (V4DCT(HU) R² = 0.69, P = .08), which was similar to VSPECT (R² = 0.96, P < .01). CONCLUSIONS: This 18-patient study demonstrated significant correlations between 4D-CT ventilation and both PFT measurements and SPECT ventilation, providing evidence toward the validation of 4D-CT ventilation imaging.
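The Dice similarity coefficient used above to score the spatial overlap of defect regions can be computed from binary masks as 2|A∩B| / (|A| + |B|). A minimal sketch (the function name and the convention of returning 1.0 for two empty masks are assumptions):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    # Two empty masks overlap trivially; define DSC = 1 in that case.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```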
Subject(s)
Four-Dimensional Computed Tomography , Lung Neoplasms/physiopathology , Pulmonary Ventilation/physiology , Respiratory Function Tests , Tomography, Emission-Computed, Single-Photon , Aged , Female , Humans , Male , Prospective Studies , Radiopharmaceuticals , Technetium Tc 99m Pentetate
ABSTRACT
PURPOSE: This paper proposes the discriminative generalized Hough transform (DGHT) as an efficient and reliable means for object localization in medical images. It is meant to give a deeper insight into the underlying theory and a comprehensive overview of the methodology and the scope of applications. METHODS: The DGHT combines the generalized Hough transform (GHT) with a discriminative training technique for the GHT models to obtain more efficient and robust localization results. To this end, the model points are equipped with individual weights, which are trained discriminatively with respect to a minimal localization error. Through this weighting, the models become more robust since the training focuses on common features of the target object over a set of training images. Unlike other weighting strategies, our training algorithm focuses on the error rate and allows for negative weights, which can be employed to encode rivaling structures into the model. The basic algorithm is presented here in conjunction with several extensions for fully automatic and faster processing. These include: (1) the automatic generation of models from training images and their iterative refinement, (2) the training of joint models for similar objects, and (3) a multi-level approach. RESULTS: The algorithm is tested successfully for the knee in long-leg radiographs (97.6 % success rate), the vertebrae in C-arm CT (95.5 % success rate), and the femoral head in whole-body MR (100 % success rate). In addition, it is compared to Hough forests (Gall et al. in IEEE Trans Pattern Anal Mach Intell 33(11):2188-2202, 2011) for the task of knee localization (97.8 % success rate). CONCLUSION: The DGHT has proven to be a general procedure, which can be easily applied to various tasks with high success rates.
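A minimal sketch of the weighted GHT voting step that underlies the DGHT: each edge point casts votes for candidate reference positions, and per-model-point weights scale the votes (in the DGHT these weights are trained discriminatively and may be negative, to penalise rivaling structures). This is a toy 2-D illustration with assumed names and a dense accumulator, not the authors' implementation, which adds discriminative training, automatic model generation, and multi-level processing.

```python
import numpy as np

def weighted_ght_vote(edge_points, model_offsets, weights, shape):
    """Weighted generalized Hough transform voting.

    Each edge point (ex, ey) votes for the reference position
    (ex - ox, ey - oy) for every model point offset (ox, oy); each
    vote is scaled by that model point's weight. Returns the
    accumulator maximum as the most likely object position.
    """
    acc = np.zeros(shape)
    for ex, ey in edge_points:
        for (ox, oy), w in zip(model_offsets, weights):
            rx, ry = ex - ox, ey - oy
            if 0 <= rx < shape[0] and 0 <= ry < shape[1]:
                acc[rx, ry] += w  # weight may be negative in the DGHT
    return np.unravel_index(np.argmax(acc), shape)
```

For a model whose points sit one pixel around the reference point, the four matching edge pixels of an object centered at (5, 5) all vote for that cell, so it dominates the accumulator.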