Results 1 - 20 of 137
1.
Med Image Anal ; 95: 103207, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38776843

ABSTRACT

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, since manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models aimed at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used off the shelf, as plug-and-play solutions for any given dataset. Significantly reduced annotation times using the interactive model were observed on two public datasets.
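The MONAI Label API itself is not reproduced here; as a rough illustration of the kind of active learning strategy the framework offers, the hypothetical Python sketch below ranks unlabeled volumes by predictive entropy and proposes the most uncertain one for annotation.

```python
import numpy as np

def predictive_entropy(prob_map: np.ndarray) -> float:
    """Mean voxel-wise entropy of a softmax probability map of shape (C, D, H, W)."""
    eps = 1e-8
    ent = -np.sum(prob_map * np.log(prob_map + eps), axis=0)
    return float(ent.mean())

def next_volume_to_annotate(prob_maps: dict) -> str:
    """Pick the unlabeled volume the current model is least certain about.

    prob_maps maps volume IDs (hypothetical names) to softmax outputs of the
    current segmentation model; the most uncertain volume is annotated next.
    """
    return max(prob_maps, key=lambda vid: predictive_entropy(prob_maps[vid]))
```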

2.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
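To make one of the catalogued pitfalls concrete, the following self-contained Python example (not taken from the paper) shows how the same one-pixel boundary error costs a small structure far more Dice than a large one, which is why structure size must inform metric choice.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def disk(radius: int, size: int = 64) -> np.ndarray:
    """Binary disk of a given radius centred in a size x size grid."""
    yy, xx = np.mgrid[:size, :size]
    return (yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2

for r in (3, 20):                      # a small and a large structure
    gt, pred = disk(r), disk(r - 1)    # identical one-pixel under-segmentation
    print(f"radius {r:2d}: Dice = {dice(gt, pred):.3f}")
# The small structure loses far more Dice than the large one for the same error.
```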


Asunto(s)
Inteligencia Artificial
3.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint, a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
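The decision logic of Metrics Reloaded is far richer than can be shown here; the following deliberately simplified, hypothetical Python sketch only illustrates the idea of mapping a problem fingerprint to candidate metrics.

```python
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    task: str                 # e.g. "semantic_segmentation", "image_classification"
    small_structures: bool    # are target structures tiny relative to the image?
    class_imbalance: bool

def candidate_metrics(fp: ProblemFingerprint) -> list:
    """Toy fingerprint-to-metric lookup; the real framework's decision trees are far richer."""
    if fp.task == "semantic_segmentation":
        metrics = ["Dice similarity coefficient", "normalized surface distance"]
        if fp.small_structures:
            metrics.append("give boundary-based metrics more weight than overlap-based ones")
        return metrics
    if fp.task == "image_classification":
        return ["balanced accuracy", "AUROC"] if fp.class_imbalance else ["accuracy", "AUROC"]
    return ["consult the full framework"]

print(candidate_metrics(ProblemFingerprint("semantic_segmentation", True, True)))
```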


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
4.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential for transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

5.
Med Image Anal ; 92: 103058, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38104403

ABSTRACT

Combining multi-site data can strengthen and uncover trends, but the task is marred by site-specific covariates that can bias the data and, therefore, any downstream analyses. Post-hoc multi-site correction methods exist but rely on strong assumptions that often do not hold in real-world scenarios. Algorithms should be designed to account for site-specific effects, such as those arising from sequence parameter choices, and, where generalisation fails, should be able to identify such failures by means of explicit uncertainty modelling. This work showcases such an algorithm, which becomes robust to the physics of acquisition in the context of segmentation tasks while simultaneously modelling uncertainty. We demonstrate that our method not only generalises to completely held-out datasets, preserving segmentation quality, but also accounts for site-specific sequence choices, which allows it to serve as a harmonisation tool.
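The paper's own uncertainty model and physics conditioning are not reproduced here; as a generic illustration of attaching voxel-wise uncertainty to a segmentation network, the sketch below uses Monte Carlo dropout (an assumption, not necessarily the authors' formulation).

```python
import torch

@torch.no_grad()
def mc_dropout_segmentation(model: torch.nn.Module, volume: torch.Tensor,
                            n_samples: int = 20):
    """Return the mean softmax prediction and a voxel-wise std uncertainty map."""
    model.train()  # keep dropout layers active at test time
    probs = torch.stack([torch.softmax(model(volume), dim=1)
                         for _ in range(n_samples)])
    model.eval()
    return probs.mean(dim=0), probs.std(dim=0)
```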


Subject(s)
Magnetic Resonance Imaging; Neuroimaging; Humans; Uncertainty; Magnetic Resonance Imaging/methods; Algorithms; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
6.
Br J Radiol ; 96(1150): 20220890, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38011227

ABSTRACT

Federated learning (FL) is gaining wide acceptance across medical AI domains. FL promises to provide acceptable clinical-grade accuracy, privacy, and generalisability of machine learning models across multiple institutions. However, research on FL for medical imaging AI is still in its early stages. This paper presents a review of recent research to outline the difference between the state of the art (SOTA; published literature) and the state of the practice (SOTP; applied research in realistic clinical environments). Furthermore, the review outlines future research directions, considering factors such as data, learning models, system design, governance, and human-in-the-loop involvement, needed to translate the SOTA into the SOTP and to enable effective collaboration across multiple institutions.
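As a minimal sketch of the aggregation step at the heart of most FL systems (federated averaging, shown generically rather than as any specific medical-imaging deployment):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg: combine per-site model parameters, weighted by local dataset size.

    site_weights: one list of parameter arrays per participating site
    site_sizes:   number of local training samples at each site
    """
    total = sum(site_sizes)
    return [sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
            for i in range(len(site_weights[0]))]

# Each round, sites train locally on private data and send only parameters;
# the server aggregates them, so raw images never leave the institution.
```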


Subject(s)
Diagnostic Imaging; Radiology; Humans; Radiography; Machine Learning
7.
Med Image Anal ; 90: 102967, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778102

ABSTRACT

Any clinically-deployed image-processing pipeline must be robust to the full range of inputs it may be presented with. One popular approach to this challenge is to develop predictive models that can provide a measure of their uncertainty. Another approach is to use generative modelling to quantify the likelihood of inputs. Inputs with a low enough likelihood are deemed to be out-of-distribution and are not presented to the downstream predictive model. In this work, we evaluate several approaches to segmentation with uncertainty for the task of segmenting bleeds in 3D CT of the head. We show that these models can fail catastrophically when operating in the far out-of-distribution domain, often providing predictions that are both highly confident and wrong. We propose to instead perform out-of-distribution detection using the Latent Transformer Model: a VQ-GAN is used to provide a highly compressed latent representation of the input volume, and a transformer is then used to estimate the likelihood of this compressed representation of the input. We demonstrate that this approach can identify images that are both far- and near-out-of-distribution, as well as provide spatial maps that highlight the regions considered to be out-of-distribution. Furthermore, we find a strong relationship between an image's likelihood and the quality of a model's segmentation on it, demonstrating that this approach is viable for filtering out unsuitable images.
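A minimal sketch of the likelihood-scoring idea, assuming a VQ-GAN that yields a flat token sequence and an autoregressive transformer that returns next-token logits (the interfaces are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def sequence_nll(transformer: torch.nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """Per-image negative log-likelihood of VQ codes under an autoregressive prior.

    tokens: (B, L) integer codes from the VQ-GAN encoder, flattened in raster order.
    The transformer is assumed to return next-token logits of shape (B, L-1, K).
    """
    logits = transformer(tokens[:, :-1])            # predict token t from tokens < t
    nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                          tokens[:, 1:].reshape(-1), reduction="none")
    return nll.view(tokens.size(0), -1).mean(dim=1) # high value suggests out-of-distribution
```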


Subject(s)
Image Processing, Computer-Assisted; Humans; Probability; Uncertainty
8.
Alzheimers Dement (Amst) ; 15(2): e12434, 2023.
Article in English | MEDLINE | ID: mdl-37201176

ABSTRACT

INTRODUCTION: The Centiloid scale aims to harmonize amyloid beta (Aβ) positron emission tomography (PET) measures across different analysis methods. As Centiloids were created using PET/computerized tomography (CT) data and are influenced by scanner differences, we investigated the Centiloid transformation with data from Insight 46 acquired with PET/magnetic resonance imaging (MRI). METHODS: We transformed standardized uptake value ratios (SUVRs) from 432 florbetapir PET/MRI scans processed using whole cerebellum (WC) and white matter (WM) references, with and without partial volume correction. Gaussian-mixture-modelling-derived cutpoints for Aβ PET positivity were converted. RESULTS: The Centiloid cutpoint was 14.2 for WC SUVRs. The relationship between WM and WC uptake differed between the calibration and testing datasets, producing implausibly low WM-based Centiloids. Linear adjustment produced a WM-based cutpoint of 18.1. DISCUSSION: Transformation of PET/MRI florbetapir data to Centiloids is valid. However, further understanding of the effects of acquisition or biological factors on the transformation using a WM reference is needed. HIGHLIGHTS: Centiloid conversion of amyloid beta positron emission tomography (PET) data aims to standardize results. Centiloid values can be influenced by differences in acquisition. We converted florbetapir PET/magnetic resonance imaging data from a large birth cohort. Whole cerebellum referenced values could be reliably transformed to Centiloids. White matter referenced values may be less generalizable between datasets.
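For reference, the standard Centiloid conversion is a linear rescaling of SUVR between two anchor groups; the sketch below shows that schematic form only, with placeholder anchor values rather than the Insight 46 calibration.

```python
def suvr_to_centiloid(suvr: float, suvr_yc: float, suvr_ad100: float) -> float:
    """Standard linear Centiloid transformation (schematic form).

    suvr_yc:    mean SUVR of the young-control anchor group (defines 0 CL)
    suvr_ad100: mean SUVR of the typical-AD anchor group (defines 100 CL)
    Anchor values are tracer-, pipeline- and reference-region-specific and must
    come from the relevant calibration; the numbers below are placeholders only.
    """
    return 100.0 * (suvr - suvr_yc) / (suvr_ad100 - suvr_yc)

# Example with made-up anchors (NOT the Insight 46 calibration):
print(suvr_to_centiloid(1.10, suvr_yc=1.00, suvr_ad100=1.60))  # ~16.7 CL
```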

9.
Neuroinformatics ; 21(2): 457-468, 2023 04.
Article in English | MEDLINE | ID: mdl-36622500

ABSTRACT

Current PET datasets are becoming larger, thereby increasing the demand for fast and reproducible processing pipelines. This paper presents a freely available, open source, Python-based software package called NiftyPAD, for versatile analyses of static, full or dual-time window dynamic brain PET data. The key novelties of NiftyPAD are the analyses of dual-time window scans with reference input processing, pharmacokinetic modelling with shortened PET acquisitions through the incorporation of arterial spin labelling (ASL)-derived relative perfusion measures, as well as optional PET data-based motion correction. Results obtained with NiftyPAD were compared with the well-established software packages PPET and QModeling for a range of kinetic models. Clinical data from eight subjects scanned with four different amyloid tracers were used to validate the computational performance. NiftyPAD achieved [Formula: see text] correlation with PPET, with absolute difference [Formula: see text] for linearised Logan and MRTM2 methods, and [Formula: see text] correlation with QModeling, with absolute difference [Formula: see text] for basis function based SRTM and SRTM2 models. For the recently published SRTM ASL method, which is unavailable in existing software packages, high correlations with negligible bias were observed with the full scan SRTM in terms of non-displaceable binding potential ([Formula: see text]), indicating reliable model implementation in NiftyPAD. Together, these findings illustrate that NiftyPAD is versatile, flexible, and produces comparable results with established software packages for quantification of dynamic PET data. It is freely available ( https://github.com/AMYPAD/NiftyPAD ), and allows for multi-platform usage. The modular setup makes adding new functionalities easy, and the package is lightweight with minimal dependencies, making it easy to use and integrate into existing processing pipelines.
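As a schematic illustration of one of the linearised models mentioned (a reference-region Logan analysis), and not NiftyPAD's actual implementation, the following Python sketch estimates BP_ND from time-activity curves:

```python
import numpy as np

def logan_ref_bpnd(ct, cref, t, k2ref, t_star_idx):
    """Linearised reference-Logan estimate of BP_ND (schematic, not NiftyPAD code).

    ct, cref   : target- and reference-region time-activity curves
    t          : frame mid-times (same time units as 1/k2ref)
    k2ref      : assumed reference-region efflux constant k2'
    t_star_idx : first frame index from which the plot is treated as linear
    """
    cum_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    cum_cr = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cref[1:] + cref[:-1]) / 2)))
    y = cum_ct[t_star_idx:] / ct[t_star_idx:]
    x = (cum_cr[t_star_idx:] + cref[t_star_idx:] / k2ref) / ct[t_star_idx:]
    dvr, _ = np.polyfit(x, y, 1)   # slope of the Logan plot is the DVR
    return dvr - 1.0               # BP_ND = DVR - 1
```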


Subject(s)
Brain; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging
10.
Med Image Anal ; 84: 102723, 2023 02.
Article in English | MEDLINE | ID: mdl-36542907

ABSTRACT

We describe CounterSynth, a conditional generative model of diffeomorphic deformations that induce label-driven, biologically plausible changes in volumetric brain images. The model is intended to synthesise counterfactual training-data augmentations for downstream discriminative modelling tasks where fidelity is limited by data imbalance, distributional instability, confounding, or underspecification, and where performance is inequitable across distinct subpopulations. Focusing on demographic attributes, we evaluate the quality of synthesised counterfactuals with voxel-based morphometry, classification and regression of the conditioning attributes, and the Fréchet inception distance. Examining downstream discriminative performance in the context of engineered demographic imbalance and confounding, we use UK Biobank and OASIS magnetic resonance imaging data to benchmark CounterSynth augmentation against current solutions to these problems. We achieve state-of-the-art improvements, both in overall fidelity and equity. The source code for CounterSynth is available at https://github.com/guilherme-pombo/CounterSynth.
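One of the evaluation measures named above, the Fréchet inception distance, reduces to a closed-form distance between Gaussian fits of real and synthetic feature distributions; the sketch below shows that computation only (the choice of feature extractor is left open and is not the paper's pipeline).

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Frechet (inception) distance between two sets of feature vectors of shape (N, D)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):        # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))
```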


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Brain/diagnostic imaging; Brain/anatomy & histology; Magnetic Resonance Imaging/methods; Neuroimaging
11.
IEEE Access ; 11: 34595-34602, 2023.
Article in English | MEDLINE | ID: mdl-38292346

ABSTRACT

Sleep is essential for physical and mental health. Polysomnography (PSG) procedures are labour-intensive and time-consuming, making diagnosing sleep disorders difficult. Automatic sleep staging using machine learning (ML)-based methods has been studied extensively, but frequently produces noisy predictions that are incompatible with typical manually annotated hypnograms. We propose an energy optimisation method to improve the quality of hypnograms generated by automatic sleep staging procedures. The method evaluates the system's total energy based on conditional probabilities for each epoch's stage and employs an energy minimisation procedure. It can be used as a meta-optimisation layer over the sleep stage sequences generated by any classifier that outputs prediction probabilities. The method improved the accuracy of state-of-the-art deep learning models by 4.0% on the Sleep EDFx dataset and by 2.8% on the DRM-SUB dataset.
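One way such an energy minimisation over per-epoch stage probabilities can be realised is a Viterbi-style dynamic programme that penalises stage switches; the sketch below is illustrative, and its energy function is an assumption rather than the paper's exact formulation.

```python
import numpy as np

def smooth_hypnogram(log_probs: np.ndarray, switch_penalty: float = 2.0) -> np.ndarray:
    """Minimise -log p(stage) per epoch plus a fixed penalty for each stage change.

    log_probs: (T, S) log-probabilities from any sleep-staging classifier.
    Returns the minimum-energy stage sequence via dynamic programming.
    """
    T, S = log_probs.shape
    cost = -log_probs[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = cost[:, None] + switch_penalty * (1 - np.eye(S))  # (prev, next)
        back[t] = trans.argmin(axis=0)
        cost = trans.min(axis=0) - log_probs[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(cost.argmin())
    for t in range(T - 1, 0, -1):     # backtrack the optimal sequence
        path[t - 1] = back[t, path[t]]
    return path
```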

12.
Front Cardiovasc Med ; 9: 939680, 2022.
Article in English | MEDLINE | ID: mdl-35966566

ABSTRACT

Background and aims: Risk of stroke and dementia is markedly higher in people of South Asian and African Caribbean descent than in white Europeans in the UK. This is unexplained by cardiovascular risk factors (CVRF). We hypothesized this might indicate accelerated early vascular aging (EVA) and that EVA might account for stronger associations between cerebral large artery characteristics and markers of cerebral small vessel disease (CSVD). Methods: 360 participants in a tri-ethnic population-based study (120 per ethnic group) underwent cerebral and vertebral MRI. Length and median diameter of the basilar artery (BA) were derived from Time of Flight images, while white matter hyperintensity (WMH) volumes were obtained from T1 and FLAIR images. Associations between BA characteristics and CVRF were assessed using multivariable linear regression. Partial correlation coefficients between WMH load and BA characteristics were calculated after adjustment for CVRF and other potential confounders. Results: BA diameter was strongly associated with age in South Asians (+11.3 µm/year, 95% CI = [3.05; 19.62]; p = 0.008), with unconvincing relationships in African Caribbeans (3.4 µm/year [-5.26, 12.12]; p = 0.436) or Europeans (2.6 µm/year [-5.75, 10.87]; p = 0.543). BA length was associated with age in South Asians (+0.34 mm/year [0.02; 0.65]; p = 0.037) and African Caribbeans (+0.39 mm/year [0.12; 0.65]; p = 0.005) but not Europeans (+0.08 mm/year [-0.26; 0.41]; p = 0.653). BA diameter (rho = 0.210; p = 0.022) and length (rho = 0.261; p = 0.004) were associated with frontal WMH load in South Asians (persisting after multivariable adjustment for CVRF). Conclusions: Compared with Europeans, the basilar artery undergoes more accelerated EVA in South Asians and, to a lesser extent, in African Caribbeans. Such EVA may contribute to the higher burden of CSVD and the excess risk of stroke, vascular cognitive impairment and dementia observed in these ethnic groups.

13.
Sci Rep ; 12(1): 11196, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778615

ABSTRACT

Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were: right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC were 1.000 vs 1.000. Retinal field AUROC were: macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC were 0.985 vs 0.918. DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.


Subject(s)
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Macula Lutea; Diabetic Retinopathy/diagnostic imaging; Humans; Mass Screening/methods; Retina/diagnostic imaging
14.
Patterns (N Y) ; 3(5): 100483, 2022 May 13.
Article in English | MEDLINE | ID: mdl-35607619

ABSTRACT

The value of biomedical research (a $1.7 trillion annual investment) is ultimately determined by its downstream, real-world impact, whose predictability from simple citation metrics remains unquantified. Here we sought to determine the comparative predictability of future real-world translation (as indexed by inclusion in patents, guidelines, or policy documents) from complex models of title/abstract-level content versus citations and metadata alone. We quantify predictive performance out of sample, ahead of time, across major domains, using the entire corpus of biomedical research captured by Microsoft Academic Graph from 1990 to 2019, encompassing 43.3 million papers. We show that citations are only moderately predictive of translational impact. In contrast, high-dimensional models of titles, abstracts, and metadata exhibit high fidelity (area under the receiver operating characteristic curve [AUROC] > 0.9), generalize across time and domain, and transfer to recognizing papers of Nobel laureates. We argue that content-based impact models are superior to conventional, citation-based measures and sustain a stronger evidence-based claim to the objective measurement of translational potential.
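A content-based impact model of the kind described can be sketched, at a toy scale, as a text classifier over titles and abstracts; the snippet below is purely illustrative (hypothetical data file and columns, far simpler than the paper's models).

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("papers.csv")  # hypothetical columns: title_abstract, translated (0/1)
X_tr, X_te, y_tr, y_te = train_test_split(
    df["title_abstract"], df["translated"], test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print(f"toy content-based AUROC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```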

15.
Med Image Anal ; 79: 102475, 2022 07.
Article in English | MEDLINE | ID: mdl-35598520

ABSTRACT

Pathological brain appearances may be so heterogeneous as to be intelligible only as anomalies, defined by their deviation from normality rather than any specific set of pathological features. Amongst the hardest tasks in medical imaging, detecting such anomalies requires models of the normal brain that combine compactness with the expressivity of the complex, long-range interactions that characterise its structural organisation. These are requirements transformers have arguably greater potential to satisfy than other current candidate architectures, but their application has been inhibited by their demands on data and computational resources. Here we combine the latent representation of vector quantised variational autoencoders with an ensemble of autoregressive transformers to enable unsupervised anomaly detection and segmentation defined by deviation from healthy brain imaging data, achievable at low computational cost, within relatively modest data regimes. We compare our method to current state-of-the-art approaches across a series of experiments with 2D and 3D data involving synthetic and real pathological lesions. On real lesions, we train our models on 15,000 radiologically normal participants from UK Biobank and evaluate performance on four different brain MR datasets with small vessel disease, demyelinating lesions, and tumours. We demonstrate superior anomaly detection performance both image-wise and pixel/voxel-wise, achievable without post-processing. These results draw attention to the potential of transformers in this most challenging of imaging tasks.
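A schematic of the token-likelihood idea behind such anomaly maps is sketched below; the encoder/decoder method names and the threshold are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def anomaly_map(vqvae, transformer, image: torch.Tensor, thresh: float = 5.0):
    """Schematic 'healing'-style anomaly map (interfaces assumed, not the paper's code).

    Tokens whose negative log-likelihood under the autoregressive prior exceeds
    `thresh` are replaced by the prior's most likely code; the difference between
    the original image and the decoded 'healed' image highlights anomalous regions.
    """
    tokens = vqvae.encode_to_codes(image)                    # (B, L), assumed API
    logits = transformer(tokens[:, :-1])                     # (B, L-1, K), assumed API
    nll = F.cross_entropy(logits.transpose(1, 2), tokens[:, 1:], reduction="none")
    healed = tokens.clone()
    replace = nll > thresh
    healed[:, 1:][replace] = logits.argmax(-1)[replace]
    recon = vqvae.decode_from_codes(healed)                  # assumed API
    return (image - recon).abs()
```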


Subject(s)
Brain Diseases; Brain; Brain/diagnostic imaging; Humans; Neuroimaging
16.
PLOS Glob Public Health ; 2(1): e0000028, 2022.
Article in English | MEDLINE | ID: mdl-36962066

ABSTRACT

Symptomatic testing programmes are crucial to the COVID-19 pandemic response. We sought to examine United Kingdom (UK) testing rates amongst individuals with test-qualifying symptoms, and factors associated with not testing. We analysed a cohort of untested symptomatic app users (N = 1,237), nested in the Zoe COVID Symptom Study (Zoe, N = 4,394,948), and symptomatic respondents who wanted, but did not have, a test (N = 1,956), drawn from a University of Maryland survey administered to Facebook users (the Global COVID-19 Trends and Impact Survey [UMD-CTIS], N = 775,746). The proportion tested among individuals with incident test-qualifying symptoms rose from ~20% to ~75% from April to December 2020 in Zoe. Testing was lower with one vs more symptoms (72.9% vs 84.6%, p < 0.001), and with short vs long symptom duration (69.9% vs 85.4%, p < 0.001). 40.4% of survey respondents did not identify all three test-qualifying symptoms. Symptom identification decreased for every decade older (OR = 0.908 [95% CI 0.883-0.933]). Amongst symptomatic UMD-CTIS respondents who wanted but did not have a test, not knowing where to go was the most cited factor (32.4%); this increased for each decade older (OR = 1.207 [1.129-1.292]) and for every 4 years fewer in education (OR = 0.685 [0.599-0.783]). Despite current UK messaging on COVID-19 testing, there is a knowledge gap about when and where to test, and this may be contributing to the ~25% testing gap. Risk factors, including older age and less education, highlight potential opportunities to tailor public health messages. The testing gap may be even larger in countries that, unlike the UK, do not have extensive, free testing.

17.
Brain Commun ; 3(4): fcab226, 2021.
Article in English | MEDLINE | ID: mdl-34661106

ABSTRACT

MRI-derived features of presumed cerebral small vessel disease are frequently found in Alzheimer's disease. Influences of such markers on disease-progression measures are poorly understood. We measured markers of presumed small vessel disease (white matter hyperintensity volumes; cerebral microbleeds) on baseline images of newly enrolled individuals in the Alzheimer's Disease Neuroimaging Initiative cohort (GO and 2) and used linear mixed models to relate these to subsequent atrophy and neuropsychological score change. We also assessed heterogeneity in white matter hyperintensity positioning within biomarker abnormality sequences, driven by the data, using the Subtype and Stage Inference algorithm. This study recruited both sexes and included: controls [n = 159, mean(SD) age = 74(6) years]; early and late mild cognitive impairment [ns = 265 and 139, respectively, mean(SD) ages = 71(7) and 72(8) years, respectively]; Alzheimer's disease [n = 103, mean(SD) age = 75(8) years]; and significant memory concern [n = 72, mean(SD) age = 72(6) years]. Baseline demographic and vascular risk-factor data, and longitudinal cognitive scores (Mini-Mental State Examination; logical memory; and Trails A and B) were collected. Whole-brain and hippocampal volume change metrics were calculated. White matter hyperintensity volumes were associated with greater whole-brain and hippocampal volume changes independently of cerebral microbleeds (a doubling of baseline white matter hyperintensity was associated with an increase in atrophy rate of 0.3 ml/year for brain and 0.013 ml/year for hippocampus). Cerebral microbleeds were found in 15% of individuals, and the presence of a microbleed, as opposed to none, was associated with increases in atrophy rate of 1.4 ml/year for whole brain and 0.021 ml/year for hippocampus. White matter hyperintensities were predictive of greater decline in all neuropsychological scores, while cerebral microbleeds were predictive of decline in logical memory (immediate recall) and Mini-Mental State Examination scores. We identified distinct groups with specific sequences of biomarker abnormality using continuous baseline measures and brain volume change. Four clusters were found: Group 1 showed early Alzheimer's pathology; Group 2 showed early neurodegeneration; Group 3 had early mixed Alzheimer's and cerebrovascular pathology; Group 4 had early neuropsychological score abnormalities. White matter hyperintensity volumes becoming abnormal was a late event for Groups 1 and 4 and an early event for Groups 2 and 3. In summary, white matter hyperintensities and microbleeds were independently associated with progressive neurodegeneration (brain atrophy rates) and cognitive decline (change in neuropsychological scores). The mechanisms linking white matter hyperintensities to progression and those linking microbleeds to progression may be partially separate. Distinct sequences of biomarker progression were found. White matter hyperintensity development was an early event in two sequences.
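A linear mixed model in the spirit of the analysis described can be set up in a few lines; the sketch below uses statsmodels with hypothetical column names and is not the study's actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adni_long.csv")   # hypothetical long-format file: one row per visit
# log2(WMH) so the coefficient reads as "change in outcome per doubling of WMH volume".
model = smf.mixedlm(
    "brain_volume ~ years_from_baseline * log2_wmh + cmb_present + age + sex",
    data=df,
    groups=df["subject_id"],
    re_formula="~years_from_baseline",   # random intercept and slope per participant
)
result = model.fit()
print(result.summary())
```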

18.
Neurology ; 97(21): 989-999, 2021 11 23.
Article in English | MEDLINE | ID: mdl-34607924

ABSTRACT

Patients with multiple sclerosis (MS) have heterogeneous clinical presentations, symptoms, and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using MRI. First, development of validated MS-specific image analysis methods can be boosted by verified reference, test, and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding disease processes could be greatly advanced through shared data of large MS cohorts with clinical, demographic, and treatment information. Relevant patterns in such data that may be imperceptible to a human observer could be detected through artificial intelligence techniques. This applies from image analysis (lesions, atrophy, or functional network changes) to large multidomain datasets (imaging, cognition, clinical disability, genetics). After reviewing data sharing and artificial intelligence, we highlight 3 areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties as well as specific recommendations to overcome them are discussed, in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging, and the understanding of MS.


Subject(s)
Artificial Intelligence; Multiple Sclerosis; Algorithms; Humans; Information Dissemination; Magnetic Resonance Imaging; Multiple Sclerosis/diagnostic imaging
20.
Neuroradiology ; 63(12): 2047-2056, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34047805

ABSTRACT

PURPOSE: Surveillance of patients with high-grade glioma (HGG) and identification of disease progression remain a major challenge in neuro-oncology. This study aimed to develop a support vector machine (SVM) classifier, employing combined longitudinal structural and perfusion MRI studies, to classify between stable disease, pseudoprogression and progressive disease (a 3-class problem). METHODS: Study participants were separated into two groups: group I (total cohort: 64 patients) with a single dynamic susceptibility contrast (DSC) perfusion time point and group II (19 patients) with longitudinal DSC time points (2-3). We retrospectively analysed 269 structural MRI and 92 DSC perfusion MRI scans. The SVM classifier was trained using all available MRI studies for each group. Classification accuracy was assessed for different feature dataset and time point combinations and compared to radiologists' classifications. RESULTS: SVM classification based on combined perfusion and structural features outperformed radiologists' classification across all groups. For the identification of progressive disease, use of combined features and longitudinal DSC time points improved classification performance (lowest error rate 1.6%). Optimal performance was observed in group II (multiple time points), with SVM sensitivity/specificity/accuracy of 100/91.67/94.7% (first time point analysis) and 85.71/100/94.7% (longitudinal analysis), compared to 60/78/68% and 70/90/84.2% for the respective radiologist classifications. In group I (single time point), the SVM classifier also outperformed radiologists' classifications, with sensitivity/specificity/accuracy of 86.49/75.00/81.53% (SVM) compared to 75.7/68.9/73.84% (radiologists). CONCLUSION: Our results indicate that utilisation of a machine learning (SVM) classifier based on analysis of longitudinal perfusion time points and combined structural and perfusion features significantly enhances classification outcome (p = 0.0001).
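An illustrative sketch of the kind of multi-class SVM pipeline described (feature names and data layout are hypothetical, not the study's):

```python
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("hgg_followup_features.csv")   # hypothetical feature table
X = df[["rcbv_mean", "rcbv_max", "enhancing_volume", "flair_volume_change"]]
y = df["label"]  # 0 = stable disease, 1 = pseudoprogression, 2 = progressive disease

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```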


Subject(s)
Brain Neoplasms; Glioma; Brain Neoplasms/diagnostic imaging; Glioma/diagnostic imaging; Humans; Machine Learning; Magnetic Resonance Imaging; Perfusion; Retrospective Studies