Results 1 - 20 of 26
1.
Eur Radiol ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38861161

ABSTRACT

PURPOSE: This work aims to assess standard evaluation practices used by the research community for evaluating medical imaging classifiers, with a specific focus on the implications of class imbalance. The analysis is performed on chest X-rays as a case study and encompasses a comprehensive model performance definition, considering both discriminative capabilities and model calibration. MATERIALS AND METHODS: We conduct a concise literature review to examine prevailing scientific practices used when evaluating X-ray classifiers. Then, we perform a systematic experiment on two major chest X-ray datasets to showcase a didactic example of the behavior of several performance metrics under different class ratios and highlight how widely adopted metrics can conceal performance in the minority class. RESULTS: Our literature study confirms that: (1) even when dealing with highly imbalanced datasets, the community tends to use metrics that are dominated by the majority class; and (2) it is still uncommon to include calibration studies for chest X-ray classifiers, despite their importance in the context of healthcare. Moreover, our systematic experiments confirm that current evaluation practices may not reflect model performance in real clinical scenarios and suggest complementary metrics to better reflect the performance of the system in such scenarios. CONCLUSION: Our analysis underscores the need for enhanced evaluation practices, particularly in the context of class-imbalanced chest X-ray classifiers. We recommend the inclusion of complementary metrics such as the area under the precision-recall curve (AUC-PR), adjusted AUC-PR, and balanced Brier score to offer a more accurate depiction of system performance in real clinical scenarios, considering metrics that reflect both discrimination and calibration performance.
CLINICAL RELEVANCE STATEMENT: This study underscores the critical need for refined evaluation metrics in medical imaging classifiers, emphasizing that prevalent metrics may mask poor performance in minority classes, potentially impacting clinical diagnoses and healthcare outcomes. KEY POINTS: Common scientific practices in papers dealing with X-ray computer-assisted diagnosis (CAD) systems may be misleading. We highlight limitations in reporting of evaluation metrics for X-ray CAD systems in highly imbalanced scenarios. We propose adopting alternative metrics based on experimental evaluation on large-scale datasets.
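The masking effect described in this abstract is easy to reproduce. The following minimal sketch (ours, not the paper's; prevalence and counts are illustrative) shows how plain accuracy can look excellent while minority-class recall is zero:

```python
# Minimal sketch (not from the paper) of how accuracy can hide minority-class
# performance: at 5% prevalence, a degenerate "always healthy" classifier
# still reaches 95% accuracy while missing every diseased case.
y_true = [1] * 5 + [0] * 95   # 5 diseased, 95 healthy
y_pred = [0] * 100            # model that always predicts "healthy"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall_minority = true_positives / sum(y_true)  # sensitivity on diseased class

print(accuracy)         # 0.95
print(recall_minority)  # 0.0
```

This is why the authors recommend prevalence-sensitive metrics such as AUC-PR alongside calibration measures.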

2.
Eur Radiol ; 34(3): 2024-2035, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37650967

ABSTRACT

OBJECTIVES: Evaluate the performance of a deep learning (DL)-based model for multiple sclerosis (MS) lesion segmentation and compare it to other DL and non-DL algorithms. METHODS: This ambispective, multicenter study assessed the performance of a DL-based model for MS lesion segmentation and compared it to alternative DL- and non-DL-based methods. Models were tested on internal (n = 20) and external (n = 18) datasets from Latin America, and on an external dataset from Europe (n = 49). We also examined robustness by rescanning six patients (n = 6) from our MS clinical cohort. Moreover, we studied inter-human annotator agreement and discussed our findings in light of these results. Performance and robustness were assessed using the intraclass correlation coefficient (ICC), Dice coefficient (DC), and coefficient of variation (CV). RESULTS: Inter-human ICC ranged from 0.89 to 0.95, while spatial agreement among annotators showed a median DC of 0.63. Using expert manual segmentations as ground truth, our DL model achieved a median DC of 0.73 on the internal, 0.66 on the external, and 0.70 on the challenge datasets. The performance of our DL model exceeded that of the alternative algorithms on all datasets. In the robustness experiment, our DL model also achieved a higher DC (ranging from 0.82 to 0.90) and lower CV (ranging from 0.7 to 7.9%) compared to the alternative methods. CONCLUSION: Our DL-based model outperformed alternative methods for brain MS lesion segmentation. The model also generalized well to unseen data, with robust performance and low processing times on both real-world and challenge-based data. CLINICAL RELEVANCE STATEMENT: Our DL-based model demonstrated superior performance in accurately segmenting brain MS lesions compared to alternative methods, indicating its potential for clinical application with improved accuracy, robustness, and efficiency.
KEY POINTS: • Automated lesion load quantification in MS patients is valuable; however, more accurate methods are still necessary. • A novel deep learning model outperformed alternative MS lesion segmentation methods on multisite datasets. • Deep learning models are particularly suitable for MS lesion segmentation in clinical scenarios.
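The Dice coefficient (DC) reported throughout this study measures spatial overlap between two binary masks. As a hedged illustration (the function and toy masks are ours, not the paper's), DC can be computed as:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0  # define 1.0 for two empty masks

predicted    = [1, 1, 1, 0, 0, 0]   # toy flattened segmentation
ground_truth = [0, 1, 1, 1, 0, 0]
print(dice_coefficient(predicted, ground_truth))  # 2*2/(3+3) ≈ 0.667
```

A DC of 0.63 among human annotators, as reported above, therefore sets a natural ceiling for what automated methods can be expected to achieve.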


Subjects
Magnetic Resonance Imaging, Multiple Sclerosis, Humans, Magnetic Resonance Imaging/methods, Multiple Sclerosis/diagnostic imaging, Multiple Sclerosis/pathology, Neural Networks, Computer, Algorithms, Brain/diagnostic imaging, Brain/pathology
3.
Epilepsia ; 64(8): 2056-2069, 2023 08.
Article in English | MEDLINE | ID: mdl-37243362

ABSTRACT

OBJECTIVE: Managing the progress of drug-resistant epilepsy patients implanted with the Responsive Neurostimulation (RNS) System requires the manual evaluation of hundreds of hours of intracranial recordings. The generation of these large amounts of data and the scarcity of experts' time for evaluation necessitate the development of automatic tools to detect intracranial electroencephalographic (iEEG) seizure patterns (iESPs) with expert-level accuracy. We developed an intelligent system for identifying the presence and onset time of iESPs in iEEG recordings from the RNS device. METHODS: An iEEG dataset from 24 patients (36 293 recordings) recorded by the RNS System was used for training and evaluating a neural network model (iESPnet). The model was trained to identify the probability of seizure onset at each sample point of the iEEG. The reliability of the net was assessed and compared to baseline methods, including detections made by the device. iESPnet performance was measured using balanced accuracy and the F1 score for iESP detection. The prediction time was assessed via both the error and the mean absolute error. The model was evaluated following a hold-one-out strategy, and then validated in a separate cohort of 26 patients from a different medical center. RESULTS: iESPnet detected the presence of an iESP with a mean accuracy value of 90% and an onset time prediction error of approximately 3.4 s. There was no relationship between electrode location and prediction outcome. Model outputs were well calibrated and unbiased by the RNS detections. Validation on a separate cohort further supported iESPnet applicability in real clinical scenarios. Importantly, RNS device detections were found to be less accurate and delayed in nonresponders; therefore, tools to improve the accuracy of seizure detection are critical for increasing therapeutic efficacy. 
SIGNIFICANCE: iESPnet is a reliable and accurate tool with the potential to alleviate the time-consuming manual inspection of iESPs and facilitate the evaluation of therapeutic response in RNS-implanted patients.
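iESPnet's detection performance is reported via balanced accuracy and the F1 score. As a self-contained sketch (our own formulas and hypothetical counts, not the study's data), both can be derived from confusion-matrix counts:

```python
def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity and specificity; unlike plain accuracy,
    it is not dominated by the majority class."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for the positive (seizure) class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts for an imbalanced set of recordings
print(round(balanced_accuracy(tp=80, fp=30, tn=870, fn=20), 3))  # 0.883
print(round(f1_score(tp=80, fp=30, fn=20), 3))                   # 0.762
```

Both metrics stay informative when seizure-containing recordings are a small fraction of the total, which matches the imbalanced setting described above.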


Subjects
Drug Resistant Epilepsy, Seizures, Humans, Reproducibility of Results, Seizures/diagnosis, Seizures/therapy, Drug Resistant Epilepsy/diagnosis, Drug Resistant Epilepsy/therapy, Electrocorticography
4.
Proc Natl Acad Sci U S A ; 117(23): 12592-12594, 2020 06 09.
Article in English | MEDLINE | ID: mdl-32457147

ABSTRACT

Artificial intelligence (AI) systems for computer-aided diagnosis and image-based screening are being adopted worldwide by medical institutions. In such a context, generating fair and unbiased classifiers becomes of paramount importance. The research community of medical image computing is making great efforts in developing more accurate algorithms to assist medical doctors in the difficult task of disease diagnosis. However, little attention is paid to the way databases are collected and how this may influence the performance of AI systems. Our study sheds light on the importance of gender balance in medical imaging datasets used to train AI systems for computer-assisted diagnosis. We provide empirical evidence supported by a large-scale study, based on three deep neural network architectures and two well-known publicly available X-ray image datasets used to diagnose various thoracic diseases under different gender imbalance conditions. We found a consistent decrease in performance for underrepresented genders when a minimum balance is not fulfilled. This raises the alarm for national agencies in charge of regulating and approving computer-assisted diagnosis systems, which should include explicit gender balance and diversity recommendations. We also establish an open problem for the academic medical image computing community which needs to be addressed by novel algorithms endowed with robustness to gender imbalance.


Subjects
Datasets as Topic/standards, Deep Learning/standards, Radiographic Image Interpretation, Computer-Assisted/standards, Radiography, Thoracic/standards, Bias, Female, Humans, Male, Reference Standards, Sex Factors
5.
Neuroimage ; 169: 431-442, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29278772

ABSTRACT

Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to their capability of revealing previously unknown patterns related to brain development and disease. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operate in the graph spectral domain. We apply the proposed model to two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods.
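The spectral graph convolutions mentioned above reduce, in their simplest first-order form, to normalized neighborhood aggregation. The sketch below (ours, not the paper's code; the adjacency and features are illustrative toys) shows that core propagation step:

```python
# Hedged sketch (not the paper's implementation) of the normalized
# neighborhood aggregation X' = D^(-1/2) (A + I) D^(-1/2) X that
# first-order spectral GCN layers build on.
def gcn_propagate(adj, feats):
    n = len(adj)
    # add self-loops: A + I
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    # symmetrically normalized feature aggregation
    return [[sum(a_hat[i][j] * feats[j][k] / (deg[i] * deg[j]) ** 0.5
                 for j in range(n))
             for k in range(len(feats[0]))]
            for i in range(n)]

adj = [[0, 1], [1, 0]]    # two connected nodes
feats = [[1.0], [3.0]]    # one scalar feature per node
print(gcn_propagate(adj, feats))  # each node averages itself with its neighbour
```

In a siamese setting, two graphs are passed through the same stack of such layers and their embeddings are compared with a learned similarity metric.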


Subjects
Autism Spectrum Disorder/physiopathology, Connectome/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Models, Theoretical, Nerve Net/physiology, Neural Networks, Computer, Autism Spectrum Disorder/diagnostic imaging, Databases, Factual, Datasets as Topic, Humans, Nerve Net/diagnostic imaging, Nerve Net/physiopathology
6.
Sci Data ; 11(1): 511, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760409

ABSTRACT

The development of successful artificial intelligence models for chest X-ray analysis relies on large, diverse datasets with high-quality annotations. While several databases of chest X-ray images have been released, most include disease diagnosis labels but lack detailed pixel-level anatomical segmentation labels. To address this gap, we introduce an extensive chest X-ray multi-center segmentation dataset with uniform and fine-grained anatomical annotations for images coming from five well-known publicly available databases: ChestX-ray8, CheXpert, MIMIC-CXR-JPG, Padchest, and VinDr-CXR, resulting in 657,566 segmentation masks. Our methodology utilizes the HybridGNet model to ensure consistent and high-quality segmentations across all datasets. Rigorous validation, including expert physician evaluation and automatic quality control, was conducted to validate the resulting masks. Additionally, we provide individualized quality indices per mask and an overall quality estimation per dataset. This dataset serves as a valuable resource for the broader scientific community, streamlining the development and assessment of innovative methodologies in chest X-ray analysis.


Subjects
Radiography, Thoracic, Humans, Databases, Factual, Artificial Intelligence, Lung/diagnostic imaging
7.
Nat Mach Intell ; 6(3): 291-306, 2024.
Article in English | MEDLINE | ID: mdl-38523678

ABSTRACT

Recent genome-wide association studies have successfully identified associations between genetic variants and simple cardiac morphological parameters derived from cardiac magnetic resonance images. However, the emergence of large databases, including genetic data linked to cardiac magnetic resonance imaging, facilitates the investigation of more nuanced patterns of cardiac shape variability than those studied so far. Here we propose a framework for gene discovery coined unsupervised phenotype ensembles. The unsupervised phenotype ensemble builds a redundant yet highly expressive representation by pooling a set of phenotypes learnt in an unsupervised manner, using deep learning models trained with different hyperparameters. These phenotypes are then analysed via genome-wide association studies, retaining only highly confident and stable associations across the ensemble. We applied our approach to the UK Biobank database to extract geometric features of the left ventricle from image-derived three-dimensional meshes. We demonstrate that our approach greatly improves the discoverability of genes that influence left ventricle shape, identifying 49 loci with study-wide significance and 25 with suggestive significance. We argue that our approach would enable more extensive discovery of gene associations with image-derived phenotypes for other organs or image modalities.

8.
IEEE Trans Med Imaging ; 42(2): 546-556, 2023 02.
Article in English | MEDLINE | ID: mdl-36423313

ABSTRACT

Anatomical segmentation is a fundamental task in medical image computing, generally tackled with fully convolutional neural networks which produce dense segmentation masks. These models are often trained with loss functions such as cross-entropy or Dice, which assume pixels to be independent of each other, thus ignoring topological errors and anatomical inconsistencies. We address this limitation by moving from pixel-level to graph representations, which naturally incorporate anatomical constraints by construction. To this end, we introduce HybridGNet, an encoder-decoder neural architecture that leverages standard convolutions for image feature encoding and graph convolutional neural networks (GCNNs) to decode plausible representations of anatomical structures. We also propose a novel image-to-graph skip connection layer which allows localized features to flow from standard convolutional blocks to GCNN blocks, and show that it improves segmentation accuracy. The proposed architecture is extensively evaluated in a variety of domain shift and image occlusion scenarios, and audited considering different types of demographic domain shift. Our comprehensive experimental setup compares HybridGNet with other landmark- and pixel-based models for anatomical segmentation in chest x-ray images, and shows that it produces anatomically plausible results in challenging scenarios where other models tend to fail.


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, X-Rays, Image Processing, Computer-Assisted/methods, Radiography, Thorax/diagnostic imaging
9.
Article in English | MEDLINE | ID: mdl-37505997

ABSTRACT

Learning-based image reconstruction models, such as those based on the U-Net, require a large set of labeled images if good generalization is to be guaranteed. In some imaging domains, however, labeled data with pixel- or voxel-level label accuracy are scarce due to the cost of acquiring them. This problem is exacerbated further in domains like medical imaging, where there is no single ground truth label, resulting in large amounts of repeat variability in the labels. Therefore, training reconstruction networks to generalize better by learning from both labeled and unlabeled examples (called semi-supervised learning) is a problem of practical and theoretical interest. However, traditional semi-supervised learning methods for image reconstruction often necessitate handcrafting a differentiable regularizer specific to some given imaging problem, which can be extremely time-consuming. In this work, we propose "supervision by denoising" (SUD), a framework to supervise reconstruction models using their own denoised output as labels. SUD unifies stochastic averaging and spatial denoising techniques under a spatio-temporal denoising framework and alternates denoising and model weight update steps in an optimization framework for semi-supervision. As example applications, we apply SUD to two problems from biomedical imaging, anatomical brain reconstruction (3D) and cortical parcellation (2D), to demonstrate a significant improvement in reconstruction over supervised-only and ensembling baselines. Our code is available at https://github.com/seannz/sud.

10.
Netw Neurosci ; 6(1): 196-212, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36605888

ABSTRACT

Theories for autism spectrum disorder (ASD) have been formulated at different levels, ranging from physiological observations to perceptual and behavioral descriptions. Understanding the physiological underpinnings of perceptual traits in ASD remains a significant challenge in the field. Here we show how a recurrent neural circuit model that was optimized to perform sampling-based inference and displays characteristic features of cortical dynamics can help bridge this gap. The model was able to establish a mechanistic link between two descriptive levels for ASD: a physiological level, in terms of inhibitory dysfunction, neural variability, and oscillations, and a perceptual level, in terms of hypopriors in Bayesian computations. We took two parallel paths, inducing hypopriors in the probabilistic model and an inhibitory dysfunction in the network model, which led to consistent results in terms of the represented posteriors, providing support for the view that both descriptions might constitute two sides of the same coin.

11.
Gigascience ; 10(12)2021 12 20.
Article in English | MEDLINE | ID: mdl-34927190

ABSTRACT

Machine learning systems influence our daily lives in many different ways. Hence, it is crucial to ensure that the decisions and recommendations made by these systems are fair, equitable, and free of unintended biases. Over the past few years, the field of fairness in machine learning has grown rapidly, investigating how, when, and why these models capture, and even potentiate, biases that are deeply rooted not only in the training data but also in our society. In this Commentary, we discuss challenges and opportunities for rigorous posterior analyses of publicly available data to build fair and equitable machine learning systems, focusing on the importance of training data, model construction, and diversity in the team of developers. The thoughts presented here have grown out of the work we did, which resulted in our winning the annual Research Parasite Award that GigaScience sponsors.


Subjects
Parasites, Animals, Machine Learning
12.
Gigascience ; 10(7)2021 07 20.
Article in English | MEDLINE | ID: mdl-34282452

ABSTRACT

BACKGROUND: Deep learning methods have outperformed previous techniques in most computer vision tasks, including image-based plant phenotyping. However, massive data collection of root traits and the development of associated artificial intelligence approaches have been hampered by the inaccessibility of the rhizosphere. Here we present ChronoRoot, a system that combines 3D-printed open-hardware with deep segmentation networks for high temporal resolution phenotyping of plant roots in agarized medium. RESULTS: We developed a novel deep learning-based root extraction method that leverages the latest advances in convolutional neural networks for image segmentation and incorporates temporal consistency into the root system architecture reconstruction process. Automatic extraction of phenotypic parameters from sequences of images allowed a comprehensive characterization of the root system growth dynamics. Furthermore, novel time-associated parameters emerged from the analysis of spectral features derived from temporal signals. CONCLUSIONS: Our work shows that the combination of machine intelligence methods and a 3D-printed device expands the possibilities of root high-throughput phenotyping for genetics and natural variation studies, as well as the screening of clock-related mutants, revealing novel root traits.


Subjects
Artificial Intelligence, Neural Networks, Computer, Phenotype, Plant Roots, Plants
13.
IEEE J Biomed Health Inform ; 25(9): 3541-3553, 2021 09.
Article in English | MEDLINE | ID: mdl-33684050

ABSTRACT

Automatic quantification of the left ventricle (LV) from cardiac magnetic resonance (CMR) images plays an important role in making the diagnosis procedure efficient and reliable, and in alleviating the laborious reading work for physicians. Considerable efforts have been devoted to LV quantification using different strategies that include segmentation-based (SG) methods and the recent direct regression (DR) methods. Although both SG and DR methods have obtained great success for the task, a systematic platform to benchmark them remains absent because of differences in label information during model learning. In this paper, we conducted an unbiased evaluation and comparison of cardiac LV quantification methods that were submitted to the Left Ventricle Quantification (LVQuan) challenge, which was held in conjunction with the Statistical Atlases and Computational Modeling of the Heart (STACOM) workshop at MICCAI 2018. The challenge was targeted at the quantification of 1) areas of the LV cavity and myocardium, 2) dimensions of the LV cavity, 3) regional wall thicknesses (RWT), and 4) the cardiac phase, from mid-ventricle short-axis CMR images. First, we constructed a public quantification dataset, Cardiac-DIG, with ground truth labels for both the myocardium mask and these quantification targets across the entire cardiac cycle. Then, the key techniques employed by each submission were described. Next, quantitative validation of these submissions was conducted with the constructed dataset. The evaluation results revealed that both SG and DR methods can offer good LV quantification performance, even though DR methods do not require densely labeled masks for supervision. Among the 12 submissions, the DR method LDAMT offered the best performance, with a mean estimation error of 301 mm² for the two areas, 2.15 mm for the cavity dimensions, 2.03 mm for RWTs, and a 9.5% error rate for the cardiac phase classification. Three of the SG methods also delivered comparable performances.
Finally, we discussed the advantages and disadvantages of SG and DR methods, as well as the unsolved problems in automatic cardiac quantification for clinical practice applications.


Subjects
Heart Ventricles, Magnetic Resonance Imaging, Cine, Heart, Heart Ventricles/diagnostic imaging, Humans, Magnetic Resonance Imaging
14.
IEEE Trans Med Imaging ; 40(9): 2329-2342, 2021 09.
Article in English | MEDLINE | ID: mdl-33939608

ABSTRACT

The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results are presented, compared, and discussed with regard to the challenge aim: the search for low-cost, fast, and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The code can be found at https://github.com/Jianningli/tmi.


Subjects
Prostheses and Implants, Skull, Skull/diagnostic imaging, Skull/surgery
15.
Neural Netw ; 124: 269-279, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32035306

ABSTRACT

Deformable image registration is a fundamental problem in the field of medical image analysis. In recent years, we have witnessed the advent of deep learning-based image registration methods which achieve state-of-the-art performance and drastically reduce the required computational time. However, little work has been done regarding how we can encourage our models to produce not only accurate, but also anatomically plausible results, which is still an open question in the field. In this work, we argue that incorporating anatomical priors in the form of global constraints into the learning process of these models will further improve their performance and boost the realism of the warped images after registration. We learn global non-linear representations of image anatomy using segmentation masks, and employ them to constrain the registration process. The proposed AC-RegNet architecture is evaluated in the context of chest X-ray image registration using three different datasets, where the high anatomical variability makes the task extremely challenging. Our experiments show that the proposed anatomically constrained registration model produces more realistic and accurate results than state-of-the-art methods, demonstrating the potential of this approach.


Subjects
Deep Learning, Radiographic Image Enhancement/methods, Humans, Radiographic Image Enhancement/standards
16.
IEEE Trans Med Imaging ; 39(12): 3813-3820, 2020 12.
Article in English | MEDLINE | ID: mdl-32746125

ABSTRACT

We introduce Post-DAE, a post-processing method based on denoising autoencoders (DAE) to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods (e.g. based on convolutional neural networks or random forest classifiers) incorporate additional post-processing steps to ensure that the resulting masks fulfill expected connectivity constraints. These methods operate under the hypothesis that contiguous pixels with similar appearance should belong to the same class. Even if valid in general, this assumption does not consider more complex priors like topological restrictions or convexity, which cannot be easily incorporated into these methods. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. First, we learn a compact and non-linear embedding that represents the space of anatomically plausible segmentations. Then, given a segmentation mask obtained with an arbitrary method, we reconstruct its anatomically plausible version by projecting it onto the learnt manifold. The proposed method is trained using unpaired segmentation masks, which makes it independent of intensity information and image modality. We performed experiments in binary and multi-label segmentation of chest X-ray and cardiac magnetic resonance images. We show how erroneous and noisy segmentation masks can be improved using Post-DAE. With almost no additional computation cost, our method brings erroneous segmentations back to a feasible space.


Subjects
Algorithms, Neural Networks, Computer, Brain/diagnostic imaging, Image Processing, Computer-Assisted, Magnetic Resonance Imaging
17.
Lancet Digit Health ; 2(6): e314-e322, 2020 06.
Article in English | MEDLINE | ID: mdl-33328125

ABSTRACT

BACKGROUND: CT is the most common imaging modality in traumatic brain injury (TBI). However, its conventional use requires expert clinical interpretation and does not provide detailed quantitative outputs, which may have prognostic importance. We aimed to use deep learning to reliably and efficiently quantify and detect different lesion types. METHODS: Patients were recruited between Dec 9, 2014, and Dec 17, 2017, in 60 centres across Europe. We trained and validated an initial convolutional neural network (CNN) on expert manual segmentations (dataset 1). This CNN was used to automatically segment a new dataset of scans, which we then corrected manually (dataset 2). From this dataset, we used a subset of scans to train a final CNN for multiclass, voxel-wise segmentation of lesion types. The performance of this CNN was evaluated on a test subset. Performance was measured for lesion volume quantification, lesion progression, lesion detection, and lesion volume classification. For lesion detection, external validation was done on an independent set of 500 patients from India. FINDINGS: 98 scans from one centre were included in dataset 1. Dataset 2 comprised 839 scans from 38 centres: 184 scans were used in the training subset and 655 in the test subset. Compared with the manual reference, CNN-derived lesion volumes showed a mean difference of 0·86 mL (95% CI -5·23 to 6·94) for intraparenchymal haemorrhage, 1·83 mL (-12·01 to 15·66) for extra-axial haemorrhage, 2·09 mL (-9·38 to 13·56) for perilesional oedema, and 0·07 mL (-1·00 to 1·13) for intraventricular haemorrhage. INTERPRETATION: We show the ability of a CNN to separately segment, quantify, and detect multiclass haemorrhagic lesions and perilesional oedema. These volumetric lesion estimates allow clinically relevant quantification of lesion burden and progression, with potential applications for personalised treatment strategies and clinical research in TBI.
FUNDING: European Union 7th Framework Programme, Hannelore Kohl Stiftung, OneMind, NeuroTrauma Sciences, Integra Neurosciences, European Research Council Horizon 2020.


Subjects
Brain Injuries, Traumatic/diagnostic imaging, Deep Learning, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Adolescent, Adult, Aged, Aged, 80 and over, Brain/diagnostic imaging, Child, Europe, Female, Humans, Male, Middle Aged, Reproducibility of Results, Semantics, Young Adult
18.
IEEE J Biomed Health Inform ; 23(4): 1374-1384, 2019 07.
Article in English | MEDLINE | ID: mdl-30207969

ABSTRACT

Deformable registration has been one of the pillars of biomedical image computing. Conventional approaches refer to the definition of a similarity criterion that, once endowed with a deformation model and a smoothness constraint, determines the optimal transformation to align two given images. The definition of this metric function is among the most critical aspects of the registration process. We argue that incorporating semantic information (in the form of anatomical segmentation maps) into the registration process will further improve the accuracy of the results. In this paper, we propose a novel weakly supervised approach to learn domain-specific aggregations of conventional metrics using anatomical segmentations. This combination is learned using latent structured support vector machines. The learned matching criterion is integrated within a metric-free optimization framework based on graphical models, resulting in a multi-metric algorithm endowed with a spatially varying similarity metric function conditioned on the anatomical structures. We provide extensive evaluation on three different datasets of CT and MRI images, showing that learned multi-metric registration outperforms single-metric approaches based on conventional similarity measures.


Subjects
Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Abdomen/diagnostic imaging; Algorithms; Brain/diagnostic imaging; Humans; Magnetic Resonance Imaging; Support Vector Machine; Tomography, X-Ray Computed
19.
Sci Rep; 9(1): 12450, 2019 Aug 28.
Article in English | MEDLINE | ID: mdl-31462651

ABSTRACT

Myocardial tracking and strain estimation can non-invasively assess cardiac function using subject-specific MRI. Because the left ventricle is not uniform in shape and function from base to apex, the development of 3D MRI has provided opportunities for simultaneous 3D tracking and 3D strain estimation. We have extended the Local Weighted Mean (LWM) transformation function to 3D and incorporated it into a Hierarchical Template Matching model to solve the 3D myocardial tracking and strain estimation problem. The LWM does not need to solve a large system of equations, provides smooth displacements of myocardial points, and adapts to local geometric differences between images. As a result, 3D myocardial tracking can be performed with a median error of 1.49 mm and without large error outliers. The maximum tracking error is reduced by up to 24% compared to benchmark methods. Moreover, the estimated strain can provide insight for improving 3D imaging protocols, and the computer code of LWM could also be useful to researchers in geospatial and manufacturing image analysis.
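The defining property of an LWM-style transformation, a smooth displacement field obtained by blending local transforms with distance-based weights, can be sketched as below. This is a simplified assumption-laden sketch, not the paper's method: the true LWM fits a local polynomial around each control point, whereas here each control point carries a precomputed displacement, and the compactly supported quadratic kernel is hypothetical.

```python
import numpy as np

def lwm_displacement(point, centers, local_disps, radius):
    # Blend the displacements attached to nearby control points
    # with a compactly supported weight kernel, yielding a smooth
    # field without solving a global system of equations.
    d = np.linalg.norm(centers - point, axis=1)
    w = np.clip(1.0 - d / radius, 0.0, None) ** 2  # hypothetical kernel
    if w.sum() == 0.0:
        # No control point within `radius`: no displacement.
        return np.zeros_like(point, dtype=float)
    w /= w.sum()
    return (w[:, None] * local_disps).sum(axis=0)
```

Because each weight vanishes beyond `radius`, moving a single control point only changes the field locally, which is what lets the transform adapt to local geometric differences between images.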


Subjects
Algorithms; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Models, Cardiovascular; Myocardium; Humans
20.
Med Image Anal; 48: 117-130, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29890408

ABSTRACT

Graphs are widely used as a natural framework that captures interactions between individual elements represented as nodes. In medical applications specifically, nodes can represent individuals within a potentially large population (patients or healthy controls), each accompanied by a set of features, while the graph edges incorporate associations between subjects in an intuitive manner. This representation allows the wealth of imaging and non-imaging information, as well as individual subject features, to be incorporated simultaneously in disease classification tasks. Previous graph-based approaches for supervised or unsupervised learning in the context of disease prediction either focus solely on pairwise similarities between subjects, disregarding individual characteristics and features, or rely on subject-specific imaging feature vectors and fail to model interactions between them. In this paper, we present a thorough evaluation of a generic framework that leverages both imaging and non-imaging information and can be used for brain analysis in large populations. This framework exploits Graph Convolutional Networks (GCNs) and represents populations as a sparse graph whose nodes are associated with imaging-based feature vectors, while phenotypic information is integrated as edge weights. The extensive evaluation explores the effect of each individual component of this framework on disease prediction performance and further compares it to different baselines. The framework's performance is tested on two large datasets with diverse underlying data, ABIDE and ADNI, for the prediction of Autism Spectrum Disorder and conversion to Alzheimer's disease, respectively. Our analysis shows that our novel framework can improve over state-of-the-art results on both databases, with 70.4% classification accuracy for ABIDE and 80.0% for ADNI.
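The population-graph construction described in this abstract, imaging features on the nodes and phenotypic agreement on the edges, can be sketched minimally as below. The concrete choices here (binary phenotype match, Gaussian feature similarity, a single mean-aggregating propagation layer with ReLU) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def population_adjacency(features, phenotypes, threshold=0.0):
    # Edge weight = phenotypic agreement (e.g. same acquisition
    # site or sex -> 1, else 0) scaled by imaging-feature similarity.
    n = len(features)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pheno = float(phenotypes[i] == phenotypes[j])
            sim = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2)
            w = pheno * sim
            if w > threshold:  # sparsify: drop weak edges
                A[i, j] = A[j, i] = w
    return A

def gcn_layer(A, X, W):
    # One propagation step: add self-loops, average each node's
    # neighborhood, apply a linear map and a ReLU nonlinearity.
    A_hat = A + np.eye(len(A))
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)
```

Stacking such layers lets each subject's representation mix with those of phenotypically similar subjects before a final classification layer, which is how non-imaging information shapes the prediction.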


Subjects
Alzheimer Disease/diagnostic imaging; Autism Spectrum Disorder/diagnostic imaging; Databases, Factual; Neural Networks, Computer; Neuroimaging/methods; Algorithms; Humans; Predictive Value of Tests