Results 1 - 20 of 35
1.
Radiol Artif Intell ; 6(5): e230521, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166972

ABSTRACT

Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS score of 3 or greater and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value). Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value Supplemental material is available for this article. © RSNA, 2024.


Subject(s)
Deep Learning; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Retrospective Studies; Middle Aged; Aged; Image Interpretation, Computer-Assisted/methods; Multiparametric Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/methods; Prostate/diagnostic imaging; Prostate/pathology; Magnetic Resonance Imaging/methods
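As background to how DW images acquired at different b values relate, the sketch below shows the classical mono-exponential signal model S(b) = S0 · exp(-b · ADC), which is often used to extrapolate a DW image to a different (e.g., PI-RADS-recommended) b value. This is only an illustration under that standard assumption; the study itself uses a learned generative model rather than this closed form, and all array names are hypothetical.

```python
# Hedged illustration: extrapolate a DW image to a target b value from an ADC map
# using the mono-exponential model (NOT the paper's learned generative approach).
import numpy as np

def extrapolate_dwi(dwi_src: np.ndarray, b_src: float, adc: np.ndarray,
                    b_target: float = 1400.0) -> np.ndarray:
    """Synthesize a DW image at b_target from an acquisition at b_src and an ADC map.

    ADC is assumed to be in mm^2/s (typical prostate values ~0.5e-3 to 2e-3),
    b values in s/mm^2.
    """
    # S(b_target) = S(b_src) * exp(-(b_target - b_src) * ADC)
    return dwi_src * np.exp(-(b_target - b_src) * adc)

# Toy usage with random data standing in for a real bpMRI slice.
rng = np.random.default_rng(0)
adc = rng.uniform(0.5e-3, 2.0e-3, size=(128, 128))   # mm^2/s
dwi_b800 = rng.uniform(50, 500, size=(128, 128))      # arbitrary signal units
dwi_b1400 = extrapolate_dwi(dwi_b800, b_src=800.0, adc=adc, b_target=1400.0)
print(dwi_b1400.mean())
```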
2.
Article in English | MEDLINE | ID: mdl-39059508

ABSTRACT

PURPOSE: The purpose of this study was to investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on magnetic resonance imaging (MRI). METHODS AND MATERIALS: Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss, or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥2 mm on 3-dimensional (3D) post-Gd T1-weighted MRI volumes using 2092 patients from 7 institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes of BM delineated by physicians for stereotactic radiosurgery were collected retrospectively and curated at each institution. Additional centralized data curation, in which 2 radiologists contoured previously uncontoured BM, was carried out to improve the accuracy of the ground truth. The training data set was augmented with 1025 MRI volumes of synthetic BM created using a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient, 95th-percentile Hausdorff distance, and average Hausdorff distance (HD). Performance was assessed across different BM sizes. Additional testing was performed using a second data set of 206 patients. RESULTS: Of the 6 nnU-Net systems, the nnU-Net with adaptive Dice loss achieved the best detection and segmentation performance on the first testing data set. At an FP rate of 0.65 ± 1.17, overall sensitivity was 0.904 for all sizes of BM, 0.966 for BM ≥0.1 cm3, and 0.824 for BM <0.1 cm3. Mean values of the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average HD of all detected BM were 0.758, 1.45, and 0.23 mm, respectively. Performance on the second testing data set reached a sensitivity of 0.907 at an FP rate of 0.57 ± 0.85 for all BM sizes, and an average HD of 0.33 mm for all detected BM. CONCLUSIONS: Our proposed extension of the self-configuring nnU-Net framework substantially improved small BM detection sensitivity while maintaining a controlled FP rate. The clinical utility of the extended nnU-Net model for assisting early BM detection and stereotactic radiosurgery planning will be investigated.
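To make the optimization target concrete, the sketch below shows a plain soft Dice loss of the kind nnU-Net optimizes for segmentation; the study's adaptive Dice variant is not reproduced here, and all names and values are illustrative.

```python
# Minimal sketch of a soft Dice loss; the paper's "adaptive" variant is not shown.
import numpy as np

def soft_dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """pred: predicted foreground probabilities in [0, 1]; target: binary mask."""
    intersection = np.sum(pred * target)
    denominator = np.sum(pred) + np.sum(target)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice

pred = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[1, 0], [1, 0]])
print(soft_dice_loss(pred, mask))  # close to 0 for a good prediction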

3.
J Med Imaging (Bellingham) ; 9(6): 064503, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36466078

ABSTRACT

Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. Results: We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) a significant increase in accuracy compared to the state of the art (e.g., an area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increased robustness to various image augmentations, such as intensity variations, rotations, or scaling reflective of data variation seen in the field. Conclusions: The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).
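As a rough sketch of the contrastive ingredient mentioned above, the snippet below computes an InfoNCE-style loss between two augmented "views" of the same batch of images; the paper additionally uses online feature clustering, which is not shown, and all shapes and names are hypothetical.

```python
# Illustrative InfoNCE-style contrastive loss between two views of the same images.
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1) -> float:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: the two views of image i should match.
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
print(info_nce_loss(z1, z2))
```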

4.
Invest Radiol ; 56(10): 605-613, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33787537

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate the effect of a deep learning based computer-aided diagnosis (DL-CAD) system on radiologists' interpretation accuracy and efficiency in reading biparametric prostate magnetic resonance imaging scans. MATERIALS AND METHODS: We selected 100 consecutive prostate magnetic resonance imaging cases from a publicly available data set (PROSTATEx Challenge) with and without histopathologically confirmed prostate cancer. Seven board-certified radiologists were tasked to read each case twice in 2 reading blocks (with and without the assistance of a DL-CAD), with a separation between the 2 reading sessions of at least 2 weeks. Reading tasks were to localize and classify lesions according to Prostate Imaging Reporting and Data System (PI-RADS) v2.0 and to assign a radiologist's level of suspicion score (scale from 1-5 in 0.5 increments; 1, benign; 5, malignant). Ground truth was established by consensus readings of 3 experienced radiologists. The detection performance (receiver operating characteristic curves), variability (Fleiss κ), and average reading time without DL-CAD assistance were evaluated. RESULTS: The average accuracy of radiologists in terms of area under the curve in detecting clinically significant cases (PI-RADS ≥4) was 0.84 (95% confidence interval [CI], 0.79-0.89), whereas the same using DL-CAD was 0.88 (95% CI, 0.83-0.94) with an improvement of 4.4% (95% CI, 1.1%-7.7%; P = 0.010). Interreader concordance (in terms of Fleiss κ) increased from 0.22 to 0.36 (P = 0.003). Accuracy of radiologists in detecting cases with PI-RADS ≥3 was improved by 2.9% (P = 0.10). The median reading time in the unaided/aided scenario was reduced by 21% from 103 to 81 seconds (P < 0.001). CONCLUSIONS: Using a DL-CAD system increased the diagnostic accuracy in detecting highly suspicious prostate lesions and reduced both the interreader variability and the reading time.


Subject(s)
Deep Learning; Prostatic Neoplasms; Computers; Humans; Magnetic Resonance Imaging; Male; Prostatic Neoplasms/diagnostic imaging; Radiologists; Retrospective Studies
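For readers unfamiliar with the agreement statistic reported above, the sketch below computes Fleiss' κ from a rating matrix; the matrix here is made up purely for illustration and does not reproduce the study's data.

```python
# Hedged sketch of Fleiss' kappa, the inter-reader agreement measure used in the study.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts: (subjects, categories) matrix; counts[i, j] = number of raters assigning
    subject i to category j. Every row must sum to the same number of raters n."""
    n = counts.sum(axis=1)[0]                   # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()     # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return float((P_bar - P_e) / (1 - P_e))

# 5 cases rated by 7 readers into 3 suspicion bins (toy data).
ratings = np.array([[7, 0, 0], [5, 2, 0], [1, 4, 2], [0, 3, 4], [0, 1, 6]])
print(fleiss_kappa(ratings))
```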
5.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage can allow medical interventions that may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used convolutional neural networks (CNNs) to detect nodule candidates. Although such approaches have been shown to outperform conventional image-processing-based methods in terms of detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust both to under-represented nodules and to noise perturbations.


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Early Detection of Cancer; Humans; Image Processing, Computer-Assisted; Lung; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
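The snippet below illustrates the generic PGD loop referenced above on a toy differentiable "detector" (logistic regression on a flattened patch) with an analytic gradient. The study applies PGD both in a nodule synthesizer's latent space and to noise patterns in image space; this sketch only demonstrates the projected-gradient idea, and every model, parameter, and value is assumed for illustration.

```python
# Illustrative PGD attack that reduces the score of a toy logistic detector.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pgd_attack(x: np.ndarray, w: np.ndarray, b: float,
               eps: float = 0.03, step: float = 0.01, iters: int = 20) -> np.ndarray:
    """Perturb x within an L-infinity ball of radius eps to *reduce* the detector
    score sigmoid(w.x + b), mimicking a missed-detection adversarial example."""
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(float(w @ x_adv) + b)
        grad = p * (1.0 - p) * w                    # d(score)/dx for the logistic model
        x_adv = x_adv - step * np.sign(grad)        # gradient descent on the score
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep a valid intensity range
    return x_adv

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=256)                     # toy flattened CT patch
w, b = rng.normal(size=256), 0.0
print(sigmoid(float(w @ x) + b), sigmoid(float(w @ pgd_attack(x, w, b)) + b))
```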
6.
Diagnostics (Basel) ; 10(11), 2020 Nov 14.
Article in English | MEDLINE | ID: mdl-33202680

ABSTRACT

BACKGROUND: Opportunistic prostate cancer (PCa) screening is a controversial topic. Magnetic resonance imaging (MRI) has proven to detect prostate cancer with high sensitivity and specificity, leading to the idea of performing an image-guided PCa screening. METHODS: We evaluated a prospectively enrolled cohort of 49 healthy men participating in a dedicated image-guided PCa screening trial employing a biparametric MRI (bpMRI) protocol consisting of T2-weighted (T2w) and diffusion-weighted imaging (DWI) sequences. Datasets were analyzed both by human readers and by a fully automated artificial intelligence (AI) software using deep learning (DL). Agreement between the algorithm and the reports, which served as the ground truth, was compared on a per-case and per-lesion level using metrics of diagnostic accuracy and κ statistics. RESULTS: The DL method yielded an 87% sensitivity (33/38) and 50% specificity (5/10) with a κ of 0.42. 12/28 (43%) Prostate Imaging Reporting and Data System (PI-RADS) 3, 16/22 (73%) PI-RADS 4, and 5/5 (100%) PI-RADS 5 lesions were detected compared with the ground truth. Targeted biopsy revealed PCa in six participants, all correctly diagnosed by both the human readers and the AI. CONCLUSIONS: The results of our study show that in our AI-assisted, image-guided prostate cancer screening the software solution was able to identify highly suspicious lesions and has the potential to effectively guide the targeted-biopsy workflow.
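As a quick arithmetic check, the per-case accuracy figures quoted above follow directly from the reported counts; the κ of 0.42 is not recomputed here since the underlying contingency table is not given in the abstract.

```python
# Sensitivity 33/38 and specificity 5/10 as reported in the abstract.
tp, fn = 33, 38 - 33     # positive cases found / missed by the DL software
tn, fp = 5, 10 - 5       # negative cases correctly / incorrectly called
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 87%, 50%
```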

7.
J Thorac Imaging ; 35 Suppl 1: S11-S16, 2020 May.
Article in English | MEDLINE | ID: mdl-32205816

ABSTRACT

In this review article, the current and future impact of artificial intelligence (AI) technologies on diagnostic imaging is discussed, with a focus on cardio-thoracic applications. The processing of imaging data is described at 4 levels of increasing complexity and wider implications. At the examination level, AI aims at improving, simplifying, and standardizing image acquisition and processing. Systems for AI-driven automatic patient iso-centering before a computed tomography (CT) scan, patient-specific adaptation of image acquisition parameters, and creation of optimized and standardized visualizations, for example, automatic rib-unfolding, are discussed. At the reading and reporting levels, AI focuses on automatic detection and characterization of features and on automatic measurements in the images. A recently introduced AI system for chest CT imaging is presented that reports specific findings such as nodules, low-attenuation parenchyma, and coronary calcifications, including automatic measurements of, for example, aortic diameters. At the prediction and prescription levels, AI focuses on risk prediction and stratification, as opposed to merely detecting, measuring, and quantifying images. An AI-based approach for individualizing radiation dose in lung stereotactic body radiotherapy is discussed. The digital twin is presented as a concept of individualized computational modeling of human physiology, with AI-based CT-fractional flow reserve modeling as a first example. Finally, at the cohort and population analysis levels, the focus of AI shifts from clinical decision-making to operational decisions.


Subject(s)
Artificial Intelligence; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Humans; Tomography, X-Ray Computed/trends; Workload
8.
Comput Med Imaging Graph ; 75: 24-33, 2019 07.
Article in English | MEDLINE | ID: mdl-31129477

ABSTRACT

Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task, as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. To tackle both types of imbalance during training and inference, we introduce a new curriculum-learning-based loss function. Specifically, we leverage the Dice similarity coefficient to keep model parameters from settling in poor local minima, while gradually learning better model parameters by penalizing false positives/negatives using a cross-entropy term. We evaluated the proposed loss function on three datasets: whole-body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods, and that results of the competing methods can be improved when our proposed loss is used.


Subject(s)
Image Interpretation, Computer-Assisted; Image Processing, Computer-Assisted/methods; Algorithms; Curriculum; Deep Learning; Education, Medical; Electrocardiography; Humans; Neural Networks, Computer; Positron-Emission Tomography; Tomography, X-Ray Computed; Ultrasonography
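To make the idea of blending a Dice term with a cross-entropy term under a curriculum concrete, the sketch below uses a simple linear schedule as the curriculum weight; the paper's exact schedule and weighting are not reproduced, and all values are toy.

```python
# Hedged sketch of a combined Dice + cross-entropy loss with a curriculum weight
# that gradually shifts emphasis toward the cross-entropy (false positive/negative) term.
import numpy as np

def dice_ce_loss(pred: np.ndarray, target: np.ndarray, epoch: int,
                 max_epoch: int = 100, eps: float = 1e-6) -> float:
    """pred: foreground probabilities; target: binary mask; epoch drives the curriculum."""
    dice = (2 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    ce = -np.mean(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    lam = min(1.0, epoch / max_epoch)           # curriculum weight: 0 -> 1 over training
    return float((1 - lam) * (1 - dice) + lam * ce)

pred = np.array([0.9, 0.8, 0.2, 0.1])
mask = np.array([1, 1, 0, 0])
print(dice_ce_loss(pred, mask, epoch=10), dice_ce_loss(pred, mask, epoch=90))
```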
9.
Int J Comput Assist Radiol Surg ; 12(9): 1543-1559, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28097603

ABSTRACT

PURPOSE: We aim to develop a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). METHODS: The RFA computation becomes subject-specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer, and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as the interventional monitoring of intra-operative signals: the temperature and delivered power. RESULTS: To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution, and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between predicted and actual ablation extent of 5.3 ± 3.6 mm is achieved. CONCLUSION: This enables an end-to-end preclinical validation framework that considers the available dataset.


Subject(s)
Catheter Ablation/methods; Liver Neoplasms/surgery; Liver/surgery; Animals; Hemodynamics; Models, Animal; Necrosis/surgery; Swine
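The point-to-mesh error reported above is a surface-to-surface distance; the sketch below approximates it as a point-to-nearest-vertex distance with a KD-tree (a true point-to-triangle distance would be slightly tighter). All data are synthetic stand-ins.

```python
# Hedged sketch of a mean surface distance between a predicted and an actual
# ablation extent, approximated as point-to-nearest-vertex distance.
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(pred_points: np.ndarray, gt_vertices: np.ndarray) -> float:
    """pred_points: (N, 3) points sampled on the predicted surface; gt_vertices: (M, 3)."""
    tree = cKDTree(gt_vertices)
    dists, _ = tree.query(pred_points)
    return float(dists.mean())

rng = np.random.default_rng(0)
pred = rng.normal(size=(500, 3)) * 10.0   # toy predicted ablation surface (mm)
gt = rng.normal(size=(800, 3)) * 10.0     # toy ground-truth surface (mm)
print(mean_surface_distance(pred, gt))
```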
10.
Med Image Anal ; 33: 19-26, 2016 10.
Article in English | MEDLINE | ID: mdl-27349829

ABSTRACT

Medical images constitute a source of information essential for disease diagnosis, treatment and follow-up. In addition, due to its patient-specific nature, imaging information represents a critical component required for advancing precision medicine into clinical practice. This manuscript describes recently developed technologies for better handling of image information: photorealistic visualization of medical images with Cinematic Rendering, artificial agents for in-depth image understanding, support for minimally invasive procedures, and patient-specific computational models with enhanced predictive power. Throughout the manuscript we will analyze the capabilities of such technologies and extrapolate on their potential impact to advance the quality of medical care, while reducing its cost.


Subject(s)
Diagnostic Imaging/trends; Precision Medicine/trends; Algorithms; Artificial Intelligence; Decision Support Systems, Clinical; Diagnostic Imaging/economics; Humans; Minimally Invasive Surgical Procedures
11.
IEEE Trans Med Imaging ; 34(7): 1576-1589, 2015 Jul.
Article in English | MEDLINE | ID: mdl-30132760

ABSTRACT

Radiofrequency ablation (RFA) is an established treatment for liver cancer when resection is not possible. Yet, its optimal delivery is challenged by the presence of large blood vessels and the time-varying thermal conductivity of biological tissue. Incomplete treatment and an increased risk of recurrence are therefore common. A tool that would enable the accurate planning of RFA is hence necessary. This manuscript describes a new method to compute the extent of ablation required based on the Lattice Boltzmann Method (LBM) and patient-specific, pre-operative images. A detailed anatomical model of the liver is obtained from volumetric images. Then a computational model of heat diffusion, cellular necrosis, and blood flow through the vessels and liver is employed to compute the extent of ablated tissue given the probe location, ablation duration and biological parameters. The model was verified against an analytical solution, showing good fidelity. We also evaluated the predictive power of the proposed framework on ten patients who underwent RFA, for whom pre- and post-operative images were available. Comparisons between the computed ablation extent and ground truth, as observed in postoperative images, were promising (DICE index: 42%, sensitivity: 67%, positive predictive value: 38%). The importance of considering liver perfusion while simulating electrical-heating ablation was also highlighted. Implemented on graphics processing units (GPU), our method simulates 1 minute of ablation in 1.14 minutes, allowing near real-time computation.
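The paper solves heat diffusion (together with cellular necrosis and perfusion) with a GPU Lattice Boltzmann solver; as a far simpler stand-in, the sketch below advances an explicit finite-difference heat-diffusion step on a 2-D grid around a heated probe and thresholds cumulative heating to flag "ablated" tissue. All parameters and the dose criterion are toy assumptions, not the paper's model.

```python
# Toy explicit finite-difference heat diffusion with a crude thermal-dose surrogate.
import numpy as np

def diffuse_step(T: np.ndarray, alpha: float, dt: float, dx: float) -> np.ndarray:
    """One explicit step of dT/dt = alpha * laplacian(T) with fixed boundaries."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T_new = T + alpha * dt * lap
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = 37.0  # body temperature
    return T_new

n, dx, dt, alpha = 101, 1e-3, 0.05, 1.4e-7        # grid size, 1 mm spacing, s, m^2/s
T = np.full((n, n), 37.0)
ablated_time = np.zeros((n, n))
for _ in range(1200):                              # ~1 minute of simulated heating
    T[n // 2, n // 2] = 95.0                       # probe tip held at a fixed temperature
    T = diffuse_step(T, alpha, dt, dx)
    ablated_time += dt * (T > 50.0)                # time spent above 50 °C per voxel
print("ablated voxels:", int((ablated_time > 10.0).sum()))
```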

12.
IEEE Trans Med Imaging ; 33(2): 318-31, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24108749

ABSTRACT

As a minimally invasive surgery to treat atrial fibrillation (AF), catheter based ablation uses high radio-frequency energy to eliminate potential sources of abnormal electrical events, especially around the ostia of pulmonary veins (PV). Fusing a patient-specific left atrium (LA) model (including LA chamber, appendage, and PVs) with electro-anatomical maps or overlaying the model onto 2-D real-time fluoroscopic images provides valuable visual guidance during the intervention. In this work, we present a fully automatic LA segmentation system on nongated C-arm computed tomography (C-arm CT) data, where thin boundaries between the LA and surrounding tissues are often blurred due to the cardiac motion artifacts. To avoid segmentation leakage, the shape prior should be exploited to guide the segmentation. A single holistic shape model is often not accurate enough to represent the whole LA shape population under anatomical variations, e.g., the left common PVs vs. separate left PVs. Instead, a part based LA model is proposed, which includes the chamber, appendage, four major PVs, and right middle PVs. Each part is a much simpler anatomical structure compared to the holistic one and can be segmented using a model-based approach (except the right middle PVs). After segmenting the LA parts, the gaps and overlaps among the parts are resolved and segmentation of the ostia region is further refined. As a common anatomical variation, some patients may contain extra right middle PVs, which are segmented using a graph cuts algorithm under the constraints from the already extracted major right PVs. Our approach is computationally efficient, taking about 2.6 s to process a volume with 256 × 256 × 245 voxels. Experiments on 687 C-arm CT datasets demonstrate its robustness and state-of-the-art segmentation accuracy.


Subject(s)
Atrial Fibrillation; Heart Atria; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Atrial Fibrillation/diagnostic imaging; Atrial Fibrillation/surgery; Fluoroscopy/methods; Heart Atria/diagnostic imaging; Heart Atria/surgery; Humans; Models, Cardiovascular; Pulmonary Veins/diagnostic imaging
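To illustrate the graph-cuts component mentioned for the right middle PVs, the toy example below sets up an s-t min-cut over a 1-D "image": unary terms link pixels to source/sink, pairwise terms link neighbours, and the minimum cut yields a binary labeling. The energies, the data, and the use of networkx are all assumptions for illustration, not the paper's formulation.

```python
# Toy s-t graph-cut segmentation of a 1-D intensity profile.
import networkx as nx

intensities = [0.9, 0.85, 0.8, 0.3, 0.2, 0.15]    # bright lumen vs. dark background
G = nx.DiGraph()
for i, v in enumerate(intensities):
    G.add_edge("FG", i, capacity=v)                # penalty for labeling pixel i background
    G.add_edge(i, "BG", capacity=1.0 - v)          # penalty for labeling pixel i foreground
for i in range(len(intensities) - 1):              # smoothness between neighbours
    G.add_edge(i, i + 1, capacity=0.3)
    G.add_edge(i + 1, i, capacity=0.3)

cut_value, (fg_side, bg_side) = nx.minimum_cut(G, "FG", "BG")
labels = [1 if i in fg_side else 0 for i in range(len(intensities))]
print(cut_value, labels)    # expected: first three pixels labeled foreground
```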
13.
Med Image Anal ; 17(2): 254-70, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23246185

ABSTRACT

Lymph nodes have high clinical relevance and routinely need to be considered in clinical practice. Automatic detection is, however, challenging due to clutter and low contrast. In this paper, a method is presented that fully automatically detects and segments lymph nodes in 3-D computed tomography images of the chest. Lymph nodes can easily be confused with other structures; it is therefore vital to incorporate as much anatomical prior knowledge as possible in order to achieve a good detection performance. Here, a learned prior of the spatial distribution is used to model this knowledge. Different prior types with increasing complexity are proposed and compared to each other. This is combined with a powerful discriminative model that detects lymph nodes from their appearance. It first generates a number of candidates of possible lymph node center positions. Then, a segmentation method is initialized with a detected candidate. The graph cuts method is adapted to the problem of lymph node segmentation. We propose a setting that requires only a single positive seed and at the same time solves the small-cut problem of graph cuts. Furthermore, we propose a feature set that is extracted from the segmentation. A classifier is trained on this feature set and used to reject false alarms. Cross-validation on 54 CT datasets showed that, for a fixed number of four false alarms per volume image, the detection rate is more than doubled when using the spatial prior. In total, our proposed method detects mediastinal lymph nodes with a true positive rate of 52.0% at the cost of only 3.1 false alarms per volume image, and a true positive rate of 60.9% with 6.1 false alarms per volume image, which compares favorably to prior work on mediastinal lymph node detection.


Subject(s)
Artificial Intelligence; Lymph Nodes/diagnostic imaging; Lymphatic Metastasis/diagnostic imaging; Pattern Recognition, Automated/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/methods; Tomography, X-Ray Computed/methods; Algorithms; Discriminant Analysis; Humans; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
14.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 323-30, 2013.
Article in English | MEDLINE | ID: mdl-24505777

ABSTRACT

Radio-frequency ablation (RFA), the most widely used minimally invasive ablative therapy of liver cancer, is challenged by a lack of patient-specific planning. In particular, the presence of blood vessels and time-varying thermal diffusivity makes the prediction of the extent of the ablated tissue difficult. This may result in incomplete treatments and increased risk of recurrence. We propose a new model of the physical mechanisms involved in RFA of abdominal tumors based on Lattice Boltzmann Method to predict the extent of ablation given the probe location and the biological parameters. Our method relies on patient images, from which level set representations of liver geometry, tumor shape and vessels are extracted. Then a computational model of heat diffusion, cellular necrosis and blood flow through vessels and liver is solved to estimate the extent of ablated tissue. After quantitative verifications against an analytical solution, we apply our framework to 5 patients datasets which include pre- and post-operative CT images, yielding promising correlation between predicted and actual ablation extent (mean point to mesh errors of 8.7 mm). Implemented on graphics processing units, our method may enable RFA planning in clinical settings as it leads to near real-time computation: 1 minute of ablation is simulated in 1.14 minutes, which is almost 60x faster than standard finite element method.


Subject(s)
Catheter Ablation/methods; Liver Neoplasms/physiopathology; Liver Neoplasms/surgery; Models, Biological; Radiographic Image Interpretation, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Computer Simulation; Humans; Liver Neoplasms/diagnostic imaging; Patient-Centered Care/methods; Reproducibility of Results; Sensitivity and Specificity; Treatment Outcome
15.
Med Image Comput Comput Assist Interv ; 16(Pt 2): 395-402, 2013.
Article in English | MEDLINE | ID: mdl-24579165

ABSTRACT

Transcatheter aortic valve implantation (TAVI) is becoming the standard choice of care for non-operable patients suffering from severe aortic valve stenosis. As there is no direct view of or access to the affected anatomy, accurate preoperative planning is crucial for a successful outcome. The most important decision during planning is selecting the proper implant type and size. Due to the wide variety of device sizes and types and the non-circular annulus shapes, there is often no obvious choice for the specific patient. Most clinicians base their final decision on their previous experience. As a first step towards more predictive planning, we propose an integrated method to estimate the aortic apparatus from CT images and compute implant deployment. Aortic anatomy, which includes the aortic root, leaflets, and calcifications, is automatically extracted using robust modeling and machine learning algorithms. Then, the finite element method is employed to calculate the deployment of a TAVI implant inside the patient-specific aortic anatomy. The anatomical model was evaluated on 198 CT images, yielding an accuracy of 1.30 ± 0.23 mm. In eleven subjects, pre- and post-TAVI CT images were available. Errors in predicted implant deployment were 1.74 ± 0.40 mm on average and 1.32 mm in the aortic valve annulus region, which is almost three times lower than the average gap of 3 mm between consecutive implant sizes. Our framework may thus constitute a surrogate tool for TAVI planning.


Subject(s)
Aortic Valve Stenosis/diagnostic imaging; Aortic Valve Stenosis/surgery; Heart Valve Prosthesis Implantation/methods; Models, Cardiovascular; Preoperative Care/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; Computer Simulation; Humans; Prosthesis Implantation/methods; Reproducibility of Results; Sensitivity and Specificity
16.
Med Image Anal ; 17(8): 1283-92, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23265800

ABSTRACT

Examinations of the spinal column with both Magnetic Resonance (MR) imaging and Computed Tomography (CT) often require a precise three-dimensional positioning, angulation, and labeling of the spinal disks and the vertebrae. A fully automatic and robust approach is a prerequisite for an automated scan alignment as well as for the segmentation and analysis of spinal disks and vertebral bodies in Computer Aided Diagnosis (CAD) applications. In this article, we present a novel method that combines Marginal Space Learning (MSL), a recently introduced concept for efficient discriminative object detection, with a generative anatomical network that incorporates relative pose information for the detection of multiple objects. It is used to simultaneously detect and label the spinal disks. While a novel iterative version of MSL is used to quickly generate candidate detections comprising position, orientation, and scale of the disks with high sensitivity, the anatomical network selects the most likely candidates using a learned prior on the individual nine-dimensional transformation spaces. Finally, we propose an optional case-adaptive segmentation approach that allows the spinal disks and vertebrae to be segmented in MR and CT, respectively. Since the proposed approaches are learning-based, they can be trained for MR or CT alike. Experimental results based on 42 MR and 30 CT volumes show that our system not only achieves superior accuracy but also is among the fastest systems of its kind in the literature. On the MR data set, the spinal disks of a whole spine are detected in 11.5 s on average with 98.6% sensitivity and 0.073 false positive detections per volume. On the CT data, a comparable sensitivity of 98.0% with 0.267 false positives is achieved. Detected disks are localized with an average position error of 2.4 mm/3.2 mm and an angular error of 3.9°/4.5° in MR/CT, which is close to the employed hypothesis resolution of 2.1 mm and 3.3°.


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Spinal Neoplasms/diagnosis; Spine/diagnostic imaging; Spine/pathology; Tomography, X-Ray Computed/methods; Algorithms; Humans; Image Enhancement/methods; Regression Analysis; Reproducibility of Results; Sensitivity and Specificity
17.
IEEE Trans Med Imaging ; 31(12): 2307-21, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22955891

ABSTRACT

Transcatheter aortic valve implantation (TAVI) is a minimally invasive procedure to treat severe aortic valve stenosis. As an emerging imaging technique, C-arm computed tomography (CT) plays an increasingly important role in TAVI, both for pre-operative surgical planning (e.g., providing 3-D valve measurements) and for intra-operative guidance (e.g., determining a proper C-arm angulation). Automatic aorta segmentation and aortic valve landmark detection in a C-arm CT volume facilitate the seamless integration of C-arm CT into the TAVI workflow and improve patient care. In this paper, we present a part-based aorta segmentation approach, which can handle structural variation of the aorta in cases where the aortic arch and descending aorta are missing from the volume. The whole aorta model is split into four parts: aortic root, ascending aorta, aortic arch, and descending aorta. Discriminative learning is applied to train a detector for each part separately to exploit the rich domain knowledge embedded in an expert-annotated dataset. Eight important aortic valve landmarks (three hinges, three commissures, and two coronary ostia) are also detected automatically with an efficient hierarchical approach. Our approach is robust under all kinds of variations observed in a real clinical setting, including changes in the field of view, contrast agent injection, scan timing, and aortic valve regurgitation. Taking about 1.1 s to process a volume, it is also computationally efficient. Under the guidance of the automatically extracted patient-specific aorta model, the physicians can properly determine the C-arm angulation and deploy the prosthetic valve. Promising outcomes have been achieved in real clinical applications.


Subject(s)
Aortic Valve/diagnostic imaging; Aortic Valve/surgery; Aortography/methods; Heart Valve Prosthesis Implantation/methods; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Humans; Reproducibility of Results; Surgery, Computer-Assisted/methods
18.
Med Image Anal ; 16(7): 1330-46, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22766456

ABSTRACT

Treatment of mitral valve (MV) diseases requires comprehensive clinical evaluation and therapy personalization to optimize outcomes. Finite-element models (FEMs) of MV physiology have been proposed to study the biomechanical impact of MV repair, but their translation into the clinics remains challenging. As a step towards this goal, we present an integrated framework for finite-element modeling of the MV closure based on patient-specific anatomies and boundary conditions. Starting from temporal medical images, we estimate a comprehensive model of the MV apparatus dynamics, including papillary tips, using a machine-learning approach. A detailed model of the open MV at end-diastole is then computed, which is finally closed according to a FEM of MV biomechanics. The motion of the mitral annulus and papillary tips are constrained from the image data for increased accuracy. A sensitivity analysis of our system shows that chordae rest length and boundary conditions have a significant influence upon the simulation results. We quantitatively test the generalization of our framework on 25 consecutive patients. Comparisons between the simulated closed valve and ground truth show encouraging results (average point-to-mesh distance: 1.49 ± 0.62 mm) but also the need for personalization of tissue properties, as illustrated in three patients. Finally, the predictive power of our model is tested on one patient who underwent MitralClip by comparing the simulated intervention with the real outcome in terms of MV closure, yielding promising prediction. By providing an integrated way to perform MV simulation, our framework may constitute a surrogate tool for model validation and therapy planning.


Subject(s)
Mitral Valve Annuloplasty/instrumentation; Mitral Valve Insufficiency/physiopathology; Mitral Valve Insufficiency/surgery; Mitral Valve/physiopathology; Mitral Valve/surgery; Models, Cardiovascular; Surgical Instruments; Cardiac Catheters; Computer Simulation; Equipment Failure Analysis; Finite Element Analysis; Humans; Mitral Valve Insufficiency/diagnosis; Prosthesis Design; Prosthesis Fitting; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Systems Integration; Treatment Outcome
19.
Med Image Comput Comput Assist Interv ; 15(Pt 1): 405-13, 2012.
Article in English | MEDLINE | ID: mdl-23285577

ABSTRACT

Detailed visualization of stents during their positioning and deployment is critical for the success of an interventional procedure. This paper presents a novel method that relies on balloon markers to enable real-time enhanced visualization and assessment of the stent positioning and expansion, together with the blood flow over the lesion area. The key novelty is an automatic tracking framework that includes a self-initialization phase based on the Viterbi algorithm and an online tracking phase implementing the Bayesian fusion of multiple cues. The resulting motion compensation stabilizes the image of the stent and by compounding multiple frames we obtain a much better stent contrast. Robust results are obtained from more than 350 clinical data sets.


Subject(s)
Catheterization/methods; Percutaneous Coronary Intervention/methods; Stents; Algorithms; Bayes Theorem; Fluoroscopy/methods; Humans; Models, Statistical; Motion; Probability; Reproducibility of Results; Software; Surgery, Computer-Assisted; Time Factors
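The sketch below illustrates the Viterbi-style initialization idea: each frame offers several balloon-marker candidates with detection scores, and a dynamic program picks the candidate sequence that maximizes the detection score while penalizing large inter-frame jumps. The Gaussian motion penalty, the scores, and all names are assumptions for illustration, not the paper's model.

```python
# Hedged Viterbi sketch for selecting one marker candidate per fluoroscopy frame.
import numpy as np

def viterbi_track(candidates, scores, motion_sigma=10.0):
    """candidates: list over frames of (K_t, 2) positions; scores: list of (K_t,) arrays."""
    T = len(candidates)
    log_prob = [np.log(scores[0])]
    back = []
    for t in range(1, T):
        # Pairwise distances between candidates at frame t and frame t-1.
        d = np.linalg.norm(candidates[t][:, None, :] - candidates[t - 1][None, :, :], axis=2)
        trans = -0.5 * (d / motion_sigma) ** 2             # Gaussian motion penalty
        total = log_prob[t - 1][None, :] + trans            # (K_t, K_{t-1})
        back.append(total.argmax(axis=1))                   # best predecessor per candidate
        log_prob.append(np.log(scores[t]) + total.max(axis=1))
    path = [int(log_prob[-1].argmax())]
    for t in range(T - 2, -1, -1):                          # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
cands = [rng.uniform(0, 100, size=(4, 2)) for _ in range(6)]
scrs = [rng.uniform(0.1, 1.0, size=4) for _ in range(6)]
print(viterbi_track(cands, scrs))
```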
20.
Med Image Comput Comput Assist Interv ; 15(Pt 1): 438-46, 2012.
Article in English | MEDLINE | ID: mdl-23285581

ABSTRACT

Digital breast tomosynthesis (DBT) is emerging as a new 3D modality for breast cancer screening and diagnosis. As in conventional 2D mammography, the breast is scanned in a compressed state. For orientation during surgical planning, e.g., during presurgical ultrasound-guided anchor-wire marking, as well as for improving communication between radiologists and surgeons, it is desirable to estimate an uncompressed model of the acquired breast along with a spatial mapping that allows lesions marked in DBT to be localized in the uncompressed model. We therefore propose a method for 3D breast decompression and associated lesion mapping from 3D DBT data. The method is entirely data-driven and employs machine learning methods to predict the shape of the uncompressed breast from a DBT input volume. For this purpose, a shape space has been constructed from manually annotated uncompressed breast surfaces, and shape parameters are predicted by multiple multivariate Random Forest regressions. By exploiting point correspondences between the compressed and uncompressed breasts, lesions identified in DBT can be mapped to approximately corresponding locations in the uncompressed breast model. To this end, a thin-plate spline mapping is employed. Our method features a novel, completely data-driven approach to breast shape prediction that does not necessitate prior knowledge about biomechanical properties and parameters of the breast tissue. Instead, a particular deformation behavior (decompression) is learned from annotated shape pairs, compressed and uncompressed, which are obtained from DBT and magnetic resonance image volumes, respectively. On average, shape prediction takes 26 s and achieves a surface distance of 15.80 ± 4.70 mm. The mean localization error for lesion mapping is 22.48 ± 8.67 mm.


Subject(s)
Breast Neoplasms/diagnosis; Breast/pathology; Imaging, Three-Dimensional/methods; Algorithms; Artificial Intelligence; Biomechanical Phenomena; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Humans; Magnetic Resonance Imaging/methods; Mammography/methods; Models, Statistical; Radiographic Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Tomography, X-Ray/methods
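The sketch below illustrates the two learning components described above on purely synthetic data: (1) a multivariate Random Forest regression from DBT-derived features to shape-space coefficients, and (2) a thin-plate-spline mapping carrying a lesion position from the compressed breast to the uncompressed model via point correspondences. Feature dimensions, shape-space size, and correspondences are all made up; this is not the paper's pipeline.

```python
# Hedged sketch: shape-parameter regression plus thin-plate-spline lesion mapping.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# (1) Toy shape regression: 40-D features -> 10 shape-space coefficients.
X_train = rng.normal(size=(200, 40))
Y_train = rng.normal(size=(200, 10))
shape_regressor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, Y_train)
predicted_shape_params = shape_regressor.predict(rng.normal(size=(1, 40)))
print(predicted_shape_params.shape)          # (1, 10)

# (2) Toy lesion mapping via thin-plate-spline interpolation of point correspondences.
compressed_pts = rng.uniform(0, 100, size=(60, 3))
uncompressed_pts = compressed_pts + rng.normal(scale=5.0, size=(60, 3))
tps = RBFInterpolator(compressed_pts, uncompressed_pts, kernel="thin_plate_spline")
lesion_in_dbt = np.array([[45.0, 52.0, 30.0]])
print(tps(lesion_in_dbt))                    # approximate location in the uncompressed model
```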