Results 1 - 13 of 13
1.
Neuroimage; 244: 118589, 2021 Dec 1.
Article in English | MEDLINE | ID: mdl-34563682

ABSTRACT

MRI plays a crucial role in multiple sclerosis diagnosis and patient follow-up. In particular, the delineation of T2-FLAIR hyperintense lesions is essential, yet it is mostly performed manually, a tedious task. Many methods have thus been proposed to automate it. However, sufficiently large datasets with thorough expert manual segmentations are still lacking for evaluating these methods. We present a unique dataset for MS lesion segmentation evaluation. It consists of 53 patients acquired on 4 different scanners with a harmonized protocol. Hyperintense lesions on FLAIR were manually delineated for each patient by 7 experts, with the T2 sequence used for control, and merged into a consensus segmentation for evaluation. We provide raw and preprocessed data and a split of the dataset into training and testing sets, the latter including data from a scanner not present in the training set. We believe this dataset will become a reference for MS lesion segmentation evaluation, enabling assessment of many aspects: performance on an unseen scanner, comparison with individual experts' performance, comparison with other challengers who have already used this dataset, etc.
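The consensus construction described above (merging several experts' binary delineations into a single reference) can be sketched with a simple majority vote; the voting rule, the toy volumes, and the Dice metric below are illustrative assumptions, not the exact procedure used for the dataset:

```python
import numpy as np

def consensus_mask(expert_masks, threshold=0.5):
    """Majority-vote consensus: a voxel is labeled lesion when more than
    `threshold` of the experts marked it (an assumed rule for illustration)."""
    stack = np.stack(expert_masks).astype(float)  # shape: (n_experts, ...)
    return stack.mean(axis=0) > threshold

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Hypothetical 7-expert toy example on a tiny random volume.
rng = np.random.default_rng(0)
experts = [rng.random((4, 4, 4)) > 0.5 for _ in range(7)]
reference = consensus_mask(experts)
agreement = dice(experts[0], reference)  # one expert vs. the consensus
```

Each method or expert can then be scored against the consensus with `dice` (or detection-level metrics) voxel by voxel.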


Subject(s)
Magnetic Resonance Imaging/methods; Multiple Sclerosis/diagnostic imaging; Adult; Datasets as Topic; Female; Humans; Male; Middle Aged; Young Adult
2.
Magn Reson Med; 78(5): 1981-1990, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28019027

ABSTRACT

PURPOSE: The robustness of a recently introduced globally convergent deconvolution algorithm with temporal and edge-preserving spatial regularization for the deconvolution of dynamic susceptibility contrast perfusion magnetic resonance imaging is assessed in the context of ischemic stroke. THEORY AND METHODS: Ischemic tissues are not randomly distributed in the brain but form a spatially organized entity. Adding a spatial regularization term makes it possible to account for this spatial organization, in contrast to the purely temporal regularization approach, which processes each voxel independently. The robustness of the spatial regularization with respect to shape variability, hemodynamic variability in tissues, noise in the magnetic resonance imaging apparatus, and uncertainty in the arterial input function selected for the deconvolution is assessed via an original in silico validation approach. RESULTS: The deconvolution algorithm proved robust to the different sources of variability, outperforming temporal Tikhonov regularization in most realistic conditions considered. The limiting factor is the proper estimation of the arterial input function. CONCLUSION: This study quantified the robustness of a spatio-temporal approach for dynamic susceptibility contrast magnetic resonance imaging deconvolution via a new simulator. This simulator, now accessible online, is of wide applicability for the validation of any deconvolution algorithm. Magn Reson Med 78:1981-1990, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
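The temporal Tikhonov baseline that the spatio-temporal method is compared against can be sketched as follows; the discretization and the hypothetical AIF and residue-function shapes are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def tikhonov_deconvolve(aif, conc, lam=0.1, dt=1.0):
    """Temporal Tikhonov deconvolution of a tissue concentration curve:
    solve min ||A k - c||^2 + lam^2 ||k||^2, where A is the lower-triangular
    Toeplitz convolution matrix built from the arterial input function."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    # Normal equations of the regularized least-squares problem.
    k = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ conc)
    return k  # flow-scaled residue function; CBF is proportional to k.max()

# Hypothetical noise-free example: recover an assumed exponential residue
# function from the convolved concentration curve.
t = np.arange(30.0)
aif = np.exp(-t / 3.0)                 # assumed AIF shape
k_true = np.exp(-t / 8.0)              # assumed residue function
A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(30)]
              for i in range(30)])
conc = A @ k_true
k_est = tikhonov_deconvolve(aif, conc, lam=1e-3)
```

With noisy data the regularization weight `lam` trades fidelity for stability; the spatial term discussed in the abstract would add a coupling between neighboring voxels on top of this per-voxel solve.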


Subject(s)
Algorithms; Brain Ischemia/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Stroke/diagnostic imaging; Brain/diagnostic imaging; Computer Simulation; Contrast Media; Humans; Perfusion Imaging; Phantoms, Imaging
3.
J Biomed Inform; 52: 279-92, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25038553

ABSTRACT

This paper describes the creation of a comprehensive conceptualization of the object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context, we chose to rely on a common integration framework provided by a foundational ontology, which facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e., FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is placed on the methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially its organization in model layers, as well as its use to browse and query the model repository.


Subject(s)
Diagnostic Imaging; Image Processing, Computer-Assisted/methods; Internet; Semantics; Vocabulary, Controlled; Brain/pathology; Computer Simulation; Humans; Models, Theoretical; Software
4.
J Imaging Inform Med; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689149

ABSTRACT

Precision medicine research benefits from machine learning in the creation of robust models adapted to the processing of patient data. This applies both to pathology identification in images, i.e., annotation or segmentation, and to computer-aided diagnosis for classification or prediction. It comes with a strong need to exploit and visualize large volumes of images and associated medical data. The work presented in this paper follows on from a main case study piloted in a cancer center. It proposes an analysis pipeline for patients with osteosarcoma comprising segmentation, feature extraction and application of a deep learning model to predict response to treatment. The main aim of the AWESOMME project is to leverage this work and implement the pipeline on an easy-to-access, secure web platform. The proposed web application is based on a three-component architecture: a data server, a heavy-computation and authentication server, and a medical imaging web framework with a user interface. These existing components have been enhanced to meet the needs of security and traceability for the continuous production of expert data. The platform innovates by covering all steps of medical image processing (visualization and segmentation, feature extraction, computer-aided diagnosis) and enables the testing and use of machine learning models. The infrastructure is operational, deployed in internal production, and currently being installed in the hospital environment. The extension of the case study and user feedback enabled us to fine-tune functionalities and showed that AWESOMME is a modular solution capable of analyzing medical data and sharing research algorithms with in-house clinicians.

5.
Article in English | MEDLINE | ID: mdl-32746187

ABSTRACT

Segmentation of cardiac structures is one of the fundamental steps in estimating volumetric indices of the heart. This step is still performed semi-automatically in clinical routine and is thus prone to inter- and intra-observer variability. Recent studies have shown that deep learning has the potential to perform fully automatic segmentation. However, the current best solutions still suffer from a lack of robustness in terms of accuracy and number of outliers. The goal of this work is to introduce a novel network designed to improve the overall segmentation accuracy of left ventricular structures (endocardial and epicardial borders) while enhancing the estimation of the corresponding clinical indices and reducing the number of outliers. This network is based on a multistage framework in which the localization and segmentation steps are optimized jointly through an end-to-end scheme. Results obtained on a large open-access dataset show that our method outperforms the current best-performing deep learning solution with a lighter architecture, and achieved an overall segmentation accuracy lower than the intra-observer variability for the epicardial border (on average, a mean absolute error of 1.5 mm and a Hausdorff distance of 5.1 mm) with 11% of outliers. Moreover, we demonstrate that our method can closely reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.96 and a mean absolute error of 7.6 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.83 and an absolute mean error of 5.0%, producing scores slightly below the intra-observer margin. Based on this observation, areas for improvement are suggested.
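The two contour metrics reported above (mean absolute error and Hausdorff distance) can be computed from point sets as sketched below; this is a brute-force pairwise-distance sketch, assuming contours are given as (n, 2) arrays of points in mm, not the paper's exact evaluation code:

```python
import numpy as np

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two contours given as
    (n, 2) point arrays: the worst-case nearest-point distance."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mean_absolute_distance(pts_a, pts_b):
    """Mean nearest-point contour-to-contour distance, averaged both ways."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

For example, a contour shifted rigidly by 1 mm has a Hausdorff distance of 1 mm against the original, while the mean distance may be smaller when some points coincide.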


Subject(s)
Deep Learning; Echocardiography/methods; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted/methods; Humans
6.
IEEE Trans Med Imaging; 38(9): 2198-2210, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30802851

ABSTRACT

Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset specifically designed to answer this question. We therefore introduce the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation dataset, the largest publicly available and fully annotated dataset for the purpose of echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep-learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted, with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer ones. Based on this observation, areas for improvement are defined, which open the door to accurate and fully automatic analysis of 2D echocardiographic images.
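The clinical indices discussed in this entry derive directly from the segmented volumes; below is a minimal sketch of the ejection-fraction formula and the agreement statistics (Pearson correlation, mean absolute error), with hypothetical paired measurements standing in for real data:

```python
import numpy as np

def ejection_fraction(edv, esv):
    """LV ejection fraction (%) from end-diastolic and end-systolic volumes (ml)."""
    return 100.0 * (edv - esv) / edv

# Hypothetical paired EF values: automatic method vs. expert reference.
auto_ef = np.array([55.0, 48.0, 62.0, 35.0, 58.0])
expert_ef = np.array([57.0, 45.0, 60.0, 38.0, 59.0])
corr = np.corrcoef(auto_ef, expert_ef)[0, 1]   # Pearson correlation
mae = np.abs(auto_ef - expert_ef).mean()       # mean absolute error (%)
```

The abstract's "mean correlation of 0.95 / absolute mean error of 9.5 ml" for volumes and "0.80 / 5.6%" for EF are exactly these two statistics computed over the test patients.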


Subject(s)
Deep Learning; Echocardiography/methods; Image Processing, Computer-Assisted/methods; Algorithms; Databases, Factual; Heart/diagnostic imaging; Humans
8.
Med Image Anal; 44: 177-195, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29268169

ABSTRACT

INTRODUCTION: Automatic functional volume segmentation in PET images is a challenge that has been addressed with a large array of methods. A major limitation for the field has been the lack of a benchmark dataset allowing direct comparison of the results across publications. In the present work, we describe a comparison of recent methods on a large dataset following the recommendations of the American Association of Physicists in Medicine (AAPM) task group (TG) 211, carried out within a MICCAI (Medical Image Computing and Computer Assisted Intervention) challenge. MATERIALS AND METHODS: Organization and funding were provided by France Life Imaging (FLI). A dataset of 176 images combining simulated, phantom and clinical images was assembled. A website allowed the participants to register and download training data (n = 19). Challengers then submitted encapsulated pipelines to an online platform that autonomously ran the algorithms on the testing data (n = 157) and evaluated the results. The methods were ranked according to the arithmetic mean of sensitivity and positive predictive value. RESULTS: Sixteen teams registered but only four provided manuscripts and pipelines, for a total of 10 methods. In addition, results using two thresholds and the Fuzzy Locally Adaptive Bayesian (FLAB) method were generated. All competing methods except one performed with median accuracy above 0.8. The method with the highest score was the convolutional neural network-based segmentation, which significantly outperformed 9 of the 12 other methods, but not the improved K-Means, Gaussian Mixture Model and Fuzzy C-Means methods. CONCLUSION: The most rigorous comparative study of PET segmentation algorithms to date was carried out using the largest dataset used in such studies so far. The hierarchy among the methods in terms of accuracy did not depend strongly on the subset of datasets or on the metrics (or combination of metrics).
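The ranking score used in this challenge (arithmetic mean of sensitivity and positive predictive value) can be sketched as follows; the voxel-wise binary-mask formulation is an assumption for illustration, since the challenge's exact scoring granularity is not detailed here:

```python
import numpy as np

def sensitivity_ppv_score(pred, truth):
    """Arithmetic mean of sensitivity (recall) and positive predictive
    value (precision) for a predicted vs. ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # true positives
    sens = tp / truth.sum() if truth.sum() else 0.0
    ppv = tp / pred.sum() if pred.sum() else 0.0
    return 0.5 * (sens + ppv)
```

A perfect segmentation scores 1.0; over-segmentation lowers PPV and under-segmentation lowers sensitivity, so the mean penalizes both failure modes symmetrically.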


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Positron-Emission Tomography/methods; Bayes Theorem; Fuzzy Logic; Humans; Machine Learning; Neural Networks, Computer; Phantoms, Imaging; Predictive Value of Tests; Sensitivity and Specificity
9.
Gigascience; 7(5), 2018 May 1.
Article in English | MEDLINE | ID: mdl-29718199

ABSTRACT

We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.
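The descriptor idea can be illustrated schematically: a JSON dictionary maps placeholder keys in a command-line template to typed inputs, and the platform substitutes concrete values at execution time. The tool name, file names, and the field subset below are a simplified sketch, not the complete Boutiques schema:

```python
# Hypothetical, simplified descriptor (the real Boutiques schema has more
# required fields, e.g. tool version and container image information).
descriptor = {
    "name": "bet",                        # hypothetical tool name
    "command-line": "bet [INPUT] [OUTPUT]",
    "inputs": [
        {"id": "infile", "value-key": "[INPUT]"},
        {"id": "outfile", "value-key": "[OUTPUT]"},
    ],
}

def render(desc, values):
    """Substitute concrete input values into the command-line template."""
    cmd = desc["command-line"]
    for inp in desc["inputs"]:
        cmd = cmd.replace(inp["value-key"], values[inp["id"]])
    return cmd

cmd = render(descriptor, {"infile": "t1.nii.gz", "outfile": "brain.nii.gz"})
# cmd is now the concrete invocation string for the containerized tool.
```

The core tools mentioned in the abstract validate such descriptors against the schema and handle container execution, so platforms only need the JSON file to integrate a new application.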


Subject(s)
Computational Biology/methods; Software; Brain/diagnostic imaging; Humans; Neuroimaging; Reproducibility of Results
10.
Sci Rep; 8(1): 13650, 2018 Sep 12.
Article in English | MEDLINE | ID: mdl-30209345

ABSTRACT

We present a study of multiple sclerosis segmentation algorithms conducted at the international MICCAI 2016 challenge. This challenge was operated using a new open-science computing infrastructure, which allowed for the independent evaluation of a large range of algorithms in a fair and completely automatic manner. The infrastructure was used to evaluate thirteen MS lesion segmentation methods, spanning a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases from four centers following a common acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results of the challenge highlighted that automatic algorithms, including recent machine learning methods (random forests, deep learning, ...), still trail human expertise on both detection and delineation criteria. In addition, we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation), although it still trails on detection scores.


Subject(s)
Algorithms; Magnetic Resonance Imaging/methods; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/diagnosis; Parenchymal Tissue/diagnostic imaging; Female; Humans; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Male; Multiple Sclerosis/pathology; Neural Networks, Computer; Parenchymal Tissue/pathology; Retrospective Studies
11.
IEEE Trans Med Imaging; 35(4): 967-77, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26625409

ABSTRACT

Real-time 3D echocardiography (RT3DE) has proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms developed to segment the LV border in RT3DE. A database of 45 multivendor cardiac ultrasound recordings acquired at different centers, with corresponding reference measurements from three experts, is made available. Algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results show that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions.


Subject(s)
Algorithms; Echocardiography, Three-Dimensional/methods; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted/methods; Humans
12.
IEEE Trans Med Imaging; 32(1): 110-8, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23014715

ABSTRACT

This paper presents the Virtual Imaging Platform (VIP), a platform accessible at http://vip.creatis.insa-lyon.fr to facilitate the sharing of object models and medical image simulators, and to provide access to distributed computing and storage resources. A complete overview is presented, describing the ontologies designed to share models in a common repository, the workflow template used to integrate simulators, and the tools and strategies used to exploit computing and storage resources. Simulation results obtained in four image modalities and with different models show that VIP is versatile and robust enough to support large simulations. The platform currently has 200 registered users who consumed 33 years of CPU time in 2011.


Subject(s)
Database Management Systems; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Software; Computer Simulation; Databases, Factual; Humans; Medical Informatics Applications; Models, Biological; Reproducibility of Results
13.
Philos Trans A Math Phys Eng Sci; 368(1925): 3925-36, 2010 Aug 28.
Article in English | MEDLINE | ID: mdl-20643685

ABSTRACT

The Virtual Physiological Human (VPH) is a major European e-Science initiative intended to support the development of patient-specific computer models and their application in personalized and predictive healthcare. The VPH Network of Excellence (VPH-NoE) project is tasked with facilitating interaction between the various VPH projects and addressing issues of common concern. A key deliverable is the 'VPH ToolKit': a collection of tools, methodologies and services to support and enable VPH research, integrating and extending existing work across Europe towards greater interoperability and sustainability. Owing to the diverse nature of the field, a single monolithic 'toolkit' is incapable of addressing the needs of the VPH. Rather, the VPH ToolKit should be considered more as a 'toolbox' of relevant technologies interacting around a common set of standards. The latter apply to the information used by tools, including any data and the VPH models themselves, and also to the naming and categorizing of the entities and concepts involved. Furthermore, the technologies and methodologies available need to be widely disseminated, and relevant tools and services easily found by researchers. The VPH-NoE has thus created an online resource for the VPH community to meet this need. It consists of a database of tools, methods and services for VPH research, with a Web front-end offering facilities for searching the database, adding or updating entries, and providing user feedback on entries. Anyone is welcome to contribute.


Subject(s)
Internet; Physiology/methods; User-Computer Interface; Databases, Factual; Forecasting; Humans; Models, Biological; Research/trends