Results 1 - 20 of 143
1.
Res Pract Thromb Haemost ; 8(5): 102468, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39139554

ABSTRACT

Background: Optimal secondary prevention antithrombotic therapy for patients with antiphospholipid syndrome (APS)-associated ischemic stroke, transient ischemic attack, or other ischemic brain injury is undefined. The standard of care, warfarin or other vitamin K antagonists at standard or high intensity (international normalized ratio [INR] target range 2.0-3.0 and 3.0-4.0, respectively), has well-recognized limitations. Direct oral anticoagulants have several advantages over warfarin, and the potential role of high-dose direct oral anticoagulants vs high-intensity warfarin in this setting merits investigation. Objectives: The Rivaroxaban for Stroke patients with APS trial (RISAPS) seeks to determine whether high-dose rivaroxaban could represent a safe and effective alternative to high-intensity warfarin in adult patients with APS and previous ischemic stroke, transient ischemic attack, or other ischemic brain manifestations. Methods: This phase IIb prospective, randomized, controlled, noninferiority, open-label, proof-of-principle trial compares rivaroxaban 15 mg twice daily vs warfarin, target INR range 3.0-4.0. The sample size target is 40 participants. Triple antiphospholipid antibody-positive patients are excluded. The primary efficacy outcome is the rate of change in brain white matter hyperintensity volume on magnetic resonance imaging, a surrogate marker of presumed ischemic damage, between baseline and 24 months of follow-up. Secondary outcomes include additional neuroradiological and clinical measures of efficacy and safety. Exploratory outcomes include high-dose rivaroxaban pharmacokinetic modeling. Conclusion: Should RISAPS demonstrate noninferior efficacy and safety of high-dose rivaroxaban in this APS subgroup, it could justify larger prospective randomized controlled trials.

2.
Commun Med (Lond) ; 4(1): 167, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39169209

ABSTRACT

BACKGROUND: Predicting diabetic retinopathy (DR) progression could enable individualised screening with prompt referral of high-risk individuals for sight-saving treatment, whilst reducing screening burden for low-risk individuals. We developed and validated deep learning systems (DLS) that predict 1, 2 and 3 year emergent referable DR and maculopathy using risk factor characteristics (tabular DLS), colour fundal photographs (image DLS) or both (multimodal DLS). METHODS: Of 162,339 development-set eyes from the south-east London (UK) diabetic eye screening programme (DESP), 110,837 had eligible longitudinal data, with the remaining 51,502 used for pretraining. Internal and external (Birmingham DESP, UK) test datasets included 27,996 and 6,928 eyes, respectively. RESULTS: Internal multimodal DLS area under the receiver operating characteristic curve (AUROC) values for emergent referable DR, maculopathy or either were 0.95 (95% CI: 0.92-0.98), 0.84 (0.82-0.86) and 0.85 (0.83-0.87) for 1 year; 0.92 (0.87-0.96), 0.84 (0.82-0.87) and 0.85 (0.82-0.87) for 2 years; and 0.85 (0.80-0.90), 0.79 (0.76-0.82) and 0.79 (0.76-0.82) for 3 years. External multimodal DLS AUROC values were 0.93 (0.88-0.97), 0.85 (0.80-0.89) and 0.85 (0.76-0.85) for 1 year; 0.93 (0.89-0.97), 0.79 (0.74-0.84) and 0.80 (0.76-0.85) for 2 years; and 0.91 (0.84-0.98), 0.79 (0.74-0.83) and 0.79 (0.74-0.84) for 3 years. CONCLUSIONS: Multimodal and image DLS performance was significantly better than that of tabular DLS at all intervals. DLS accurately predict 1, 2 and 3 year emergent referable DR and referable maculopathy using colour fundal photographs, with additional risk factor characteristics conferring improvements in prognostic performance. The proposed DLS are a step towards individualised risk-based screening, whereby AI assistance allows high-risk individuals to be closely monitored while reducing screening burden for low-risk individuals.


Diabetic retinopathy (DR) is a disease in which the light-sensing layer at the back of the eye (the retina) becomes damaged by raised blood sugar levels. It affects around one in three of the 463 million people with diabetes worldwide and is a leading cause of acquired vision loss in working-age adults. In this study, we developed computer-based models to predict when DR would reach a stage where vision could be threatened, up to 3 years in the future. Our study shows that this system can accurately predict sight-threatening DR in patients with diabetes. This could mean fewer unnecessary visits for individuals at low risk of DR progression, but closer monitoring and potentially earlier treatment for individuals at high risk, which could reduce the risk of vision loss.

3.
Med Image Anal ; 97: 103278, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39059240

ABSTRACT

The last few years have seen a boom in using generative models to augment real datasets, as synthetic data can effectively model real data distributions and provide privacy-preserving, shareable datasets that can be used to train deep learning models. However, most of these methods are 2D and provide synthetic datasets that come, at most, with categorical annotations. The generation of paired images and segmentation samples that can be used in downstream, supervised segmentation tasks remains fairly uncharted territory. This work proposes a two-stage generative model capable of producing 2D and 3D semantic label maps and corresponding multi-modal images. We use a latent diffusion model for label synthesis and a VAE-GAN for semantic image synthesis. Synthetic datasets provided by this model are shown to work in a wide variety of segmentation tasks, supporting small, real datasets or fully replacing them while maintaining good performance. We also demonstrate its ability to improve downstream performance on out-of-distribution data.


Subject(s)
Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Deep Learning; Multimodal Imaging/methods; Algorithms; Imaging, Three-Dimensional/methods; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
4.
Nat Mach Intell ; 6(7): 811-819, 2024.
Article in English | MEDLINE | ID: mdl-39055051

ABSTRACT

Medical imaging research is often limited by data scarcity and availability. Governance, privacy concerns and the cost of acquisition all restrict access to medical imaging data, which, compounded by the data-hungry nature of deep learning algorithms, limits progress in the field of healthcare AI. Generative models have recently been used to synthesize photorealistic natural images, presenting a potential solution to the data scarcity problem. But are current generative models synthesizing morphologically correct samples? In this work we present a three-dimensional generative model of the human brain that is trained at the necessary scale to generate diverse, realistic-looking, high-resolution and morphologically preserving samples, conditioned on patient characteristics (for example, age and pathology). We show that the synthetic samples generated by the model preserve biological and disease phenotypes and are realistic enough to permit use downstream in well-established image analysis tools. While the proposed model has broad future applicability, such as anomaly detection and learning under limited data, its generative capabilities can be used to directly mitigate data scarcity and limited data availability, and to improve algorithmic fairness.

5.
Med Image Anal ; 95: 103207, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776843

ABSTRACT

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, considering that manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models that aim at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used directly off the shelf, plug-and-play, on any given dataset. Significantly reduced annotation times using the interactive model can be observed on two public datasets.


Subject(s)
Artificial Intelligence; Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Algorithms; Software
6.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint-a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
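A minimal sketch of the kind of pitfall the problem fingerprint is designed to catch: for a small target structure, pixel accuracy rewards a model that predicts nothing, while an overlap metric such as the Dice similarity coefficient exposes the failure. The arrays and values below are illustrative only, not drawn from the paper.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels classified correctly."""
    return (pred == gt).mean()

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# A 100x100 image whose target structure covers only 1% of pixels.
gt = np.zeros((100, 100), dtype=bool)
gt[:10, :10] = True

# A degenerate model that predicts "background" everywhere.
pred = np.zeros_like(gt)

print(pixel_accuracy(pred, gt))  # 0.99 -- looks excellent
print(dice(pred, gt))            # 0.0  -- reveals total failure
```

The class-imbalance property of the problem, not the raw score, determines which metric is informative — exactly the kind of decision the fingerprint-guided selection process encodes.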


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Machine Learning; Semantics
7.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
8.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

9.
Med Image Anal ; 92: 103058, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38104403

ABSTRACT

Combining multi-site data can strengthen and uncover trends, but is a task that is marred by the influence of site-specific covariates that can bias the data and, therefore, any downstream analyses. Post-hoc multi-site correction methods exist but rest on strong assumptions that often do not hold in real-world scenarios. Algorithms should be designed in a way that can account for site-specific effects, such as those that arise from sequence parameter choices, and, in instances where generalisation fails, should be able to identify such a failure by means of explicit uncertainty modelling. This body of work showcases such an algorithm, which can become robust to the physics of acquisition in the context of segmentation tasks while simultaneously modelling uncertainty. We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but also accounts for site-specific sequence choices, which allows it to perform as a harmonisation tool.


Subject(s)
Magnetic Resonance Imaging; Neuroimaging; Humans; Uncertainty; Magnetic Resonance Imaging/methods; Algorithms; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
10.
Br J Radiol ; 96(1150): 20220890, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38011227

ABSTRACT

Federated learning (FL) is gaining wide acceptance across the medical AI domains. FL promises acceptable clinical-grade accuracy, privacy, and generalisability of machine learning models across multiple institutions. However, research on FL for medical imaging AI is still in its early stages. This paper presents a review of recent research to outline the difference between the state-of-the-art [SOTA] (published literature) and the state-of-the-practice [SOTP] (applied research in realistic clinical environments). Furthermore, the review outlines future research directions considering various factors such as data, learning models, system design, governance, and human-in-the-loop involvement, to translate the SOTA into SOTP and to collaborate effectively across multiple institutions.
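As background for the setups this review surveys, a toy sketch of federated averaging (FedAvg), the canonical FL aggregation scheme: each institution trains on its own data, and only model weights are shared and averaged, weighted by local sample counts. The linear model, data, and site sizes here are invented for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """A few steps of least-squares gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(weights, site_data):
    """Weighted average of site updates, proportional to local sample count."""
    updates, sizes = [], []
    for X, y in site_data:
        updates.append(local_update(weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
sites = []
for n in (50, 200):  # two institutions with different data volumes
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fedavg(w, sites)
print(w)  # converges toward [1.0, -2.0] without pooling raw data
```

Raw patient data never leaves a site; only the weight vectors cross institutional boundaries, which is the privacy argument the review examines.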


Subject(s)
Diagnostic Imaging; Radiology; Humans; Radiography; Machine Learning
11.
Med Image Anal ; 90: 102967, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778102

ABSTRACT

Any clinically deployed image-processing pipeline must be robust to the full range of inputs it may be presented with. One popular approach to this challenge is to develop predictive models that can provide a measure of their uncertainty. Another approach is to use generative modelling to quantify the likelihood of inputs. Inputs with a low enough likelihood are deemed to be out-of-distribution and are not presented to the downstream predictive model. In this work, we evaluate several approaches to segmentation with uncertainty for the task of segmenting bleeds in 3D CT of the head. We show that these models can fail catastrophically when operating in the far out-of-distribution domain, often providing predictions that are both highly confident and wrong. We propose to instead perform out-of-distribution detection using the Latent Transformer Model: a VQ-GAN is used to provide a highly compressed latent representation of the input volume, and a transformer is then used to estimate the likelihood of this compressed representation of the input. We demonstrate this approach can identify images that are both far- and near-out-of-distribution, as well as provide spatial maps that highlight the regions considered to be out-of-distribution. Furthermore, we find a strong relationship between an image's likelihood and the quality of a model's segmentation on it, demonstrating that this approach is viable for filtering out unsuitable images.
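The filtering decision described above can be sketched as follows, with a simple Gaussian density standing in for the paper's VQ-GAN + transformer likelihood estimator; the features, threshold percentile, and data are illustrative assumptions only.

```python
import numpy as np

class GaussianLikelihood:
    """Diagonal Gaussian fitted to in-distribution features (a stand-in
    for a learned likelihood model over compressed latents)."""

    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6
        return self

    def log_likelihood(self, X):
        return -0.5 * np.sum(
            np.log(2 * np.pi * self.var) + (X - self.mu) ** 2 / self.var, axis=1
        )

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(1000, 4))  # in-distribution features
ood = rng.normal(8.0, 1.0, size=(5, 4))       # far out-of-distribution inputs

model = GaussianLikelihood().fit(train)
# Threshold at, e.g., the 1st percentile of training log-likelihoods.
threshold = np.percentile(model.log_likelihood(train), 1)

for x in np.vstack([train[:3], ood]):
    ll = model.log_likelihood(x[None])[0]
    verdict = "pass to segmenter" if ll >= threshold else "flag as OOD"
    print(f"log-likelihood {ll:9.1f} -> {verdict}")
```

Inputs scoring below the threshold are withheld from the downstream predictive model rather than risking a confident-but-wrong segmentation.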


Subject(s)
Image Processing, Computer-Assisted; Humans; Probability; Uncertainty
12.
Alzheimers Dement (Amst) ; 15(2): e12434, 2023.
Article in English | MEDLINE | ID: mdl-37201176

ABSTRACT

INTRODUCTION: The Centiloid scale aims to harmonize amyloid beta (Aβ) positron emission tomography (PET) measures across different analysis methods. As Centiloids were created using PET/computerized tomography (CT) data and are influenced by scanner differences, we investigated the Centiloid transformation with data from Insight 46 acquired with PET/magnetic resonance imaging (MRI). METHODS: We transformed standardized uptake value ratios (SUVRs) from 432 florbetapir PET/MRI scans processed using whole cerebellum (WC) and white matter (WM) references, with and without partial volume correction. Gaussian-mixture-modelling-derived cutpoints for Aβ PET positivity were converted. RESULTS: The Centiloid cutpoint was 14.2 for WC SUVRs. The relationship between WM and WC uptake differed between the calibration and testing datasets, producing implausibly low WM-based Centiloids. Linear adjustment produced a WM-based cutpoint of 18.1. DISCUSSION: Transformation of PET/MRI florbetapir data to Centiloids is valid. However, further understanding of the effects of acquisition or biological factors on the transformation using a WM reference is needed. HIGHLIGHTS: Centiloid conversion of amyloid beta positron emission tomography (PET) data aims to standardize results. Centiloid values can be influenced by differences in acquisition. We converted florbetapir PET/magnetic resonance imaging data from a large birth cohort. Whole cerebellum referenced values could be reliably transformed to Centiloids. White matter referenced values may be less generalizable between datasets.
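For context, the Centiloid scale is by construction a linear rescaling of SUVR anchored so that young amyloid-negative controls average 0 and typical Alzheimer's disease patients average 100. A minimal sketch follows; the anchor SUVR values are hypothetical placeholders, since real anchors must be calibrated per tracer and pipeline through the standard Centiloid calibration process.

```python
def suvr_to_centiloid(suvr, anchor_yc, anchor_ad):
    """Map an SUVR to Centiloids given the two calibration anchors:
    the mean SUVR of young controls (-> 0 CL) and of typical AD (-> 100 CL)."""
    return 100.0 * (suvr - anchor_yc) / (anchor_ad - anchor_yc)

ANCHOR_YC = 1.00   # hypothetical mean SUVR of young amyloid-negative controls
ANCHOR_AD = 2.00   # hypothetical mean SUVR of typical AD patients

print(suvr_to_centiloid(1.00, ANCHOR_YC, ANCHOR_AD))  # 0.0
print(suvr_to_centiloid(2.00, ANCHOR_YC, ANCHOR_AD))  # 100.0
print(suvr_to_centiloid(1.14, ANCHOR_YC, ANCHOR_AD))  # ~14, of the same order as the WC cutpoint above
```

Because the mapping is linear, cutpoints derived in SUVR units (as in this study) convert directly once the anchors are known, which is why anchor mismatches between calibration and testing datasets distort the resulting Centiloids.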

13.
Neuroinformatics ; 21(2): 457-468, 2023 04.
Article in English | MEDLINE | ID: mdl-36622500

ABSTRACT

Current PET datasets are becoming larger, thereby increasing the demand for fast and reproducible processing pipelines. This paper presents a freely available, open source, Python-based software package called NiftyPAD, for versatile analyses of static, full or dual-time window dynamic brain PET data. The key novelties of NiftyPAD are the analyses of dual-time window scans with reference input processing, pharmacokinetic modelling with shortened PET acquisitions through the incorporation of arterial spin labelling (ASL)-derived relative perfusion measures, as well as optional PET data-based motion correction. Results obtained with NiftyPAD were compared with the well-established software packages PPET and QModeling for a range of kinetic models. Clinical data from eight subjects scanned with four different amyloid tracers were used to validate the computational performance. NiftyPAD achieved [Formula: see text] correlation with PPET, with absolute difference [Formula: see text] for linearised Logan and MRTM2 methods, and [Formula: see text] correlation with QModeling, with absolute difference [Formula: see text] for basis function based SRTM and SRTM2 models. For the recently published SRTM ASL method, which is unavailable in existing software packages, high correlations with negligible bias were observed with the full scan SRTM in terms of non-displaceable binding potential ([Formula: see text]), indicating reliable model implementation in NiftyPAD. Together, these findings illustrate that NiftyPAD is versatile, flexible, and produces comparable results with established software packages for quantification of dynamic PET data. It is freely available ( https://github.com/AMYPAD/NiftyPAD ), and allows for multi-platform usage. The modular setup makes adding new functionalities easy, and the package is lightweight with minimal dependencies, making it easy to use and integrate into existing processing pipelines.
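As an illustration of one of the linearised models mentioned above, a generic reference-input Logan graphical analysis can be sketched in a few lines of NumPy. This is not NiftyPAD's API; the synthetic time-activity curves are constructed so that the Logan relation holds exactly, with a known distribution volume ratio (DVR).

```python
import numpy as np

def logan_ref(t, ct, cref, t_star):
    """Reference-input Logan plot; returns the estimated DVR.

    After time t_star, plotting int(C_t)/C_t against int(C_ref)/C_t is
    approximately linear for a reversible tracer, with slope = DVR
    (and non-displaceable binding potential BP_ND = DVR - 1)."""
    # Cumulative trapezoidal integrals of both curves.
    int_ct = np.concatenate([[0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)])
    int_cref = np.concatenate([[0], np.cumsum(np.diff(t) * (cref[1:] + cref[:-1]) / 2)])
    mask = t >= t_star
    x = int_cref[mask] / ct[mask]
    y = int_ct[mask] / ct[mask]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic, noiseless curves with a known DVR (idealised so the
# linear relation is exact, purely for demonstration).
t = np.linspace(0.01, 90, 200)   # minutes
cref = np.exp(-t / 40) * t       # toy reference-region TAC
dvr_true = 1.5
ct = dvr_true * cref             # idealised target-region TAC
print(logan_ref(t, ct, cref, t_star=30))  # ~1.5
```

Real pipelines such as NiftyPAD add frame-duration weighting, t* selection, and noise handling on top of this core linearisation.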


Subject(s)
Brain; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging
14.
Med Image Anal ; 84: 102723, 2023 02.
Article in English | MEDLINE | ID: mdl-36542907

ABSTRACT

We describe CounterSynth, a conditional generative model of diffeomorphic deformations that induce label-driven, biologically plausible changes in volumetric brain images. The model is intended to synthesise counterfactual training data augmentations for downstream discriminative modelling tasks where fidelity is limited by data imbalance, distributional instability, confounding, or underspecification, and exhibits inequitable performance across distinct subpopulations. Focusing on demographic attributes, we evaluate the quality of synthesised counterfactuals with voxel-based morphometry, classification and regression of the conditioning attributes, and the Fréchet inception distance. Examining downstream discriminative performance in the context of engineered demographic imbalance and confounding, we use UK Biobank and OASIS magnetic resonance imaging data to benchmark CounterSynth augmentation against current solutions to these problems. We achieve state-of-the-art improvements, both in overall fidelity and equity. The source code for CounterSynth is available at https://github.com/guilherme-pombo/CounterSynth.


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Brain/diagnostic imaging; Brain/anatomy & histology; Magnetic Resonance Imaging/methods; Neuroimaging
15.
Med Image Comput Comput Assist Interv ; 2023: 300-309, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-39206415

ABSTRACT

Cancer is a highly heterogeneous condition best visualised in positron emission tomography. Due to this heterogeneity, a general-purpose cancer detection model can be built using unsupervised anomaly detection models. While prior work in this field has showcased the efficacy of abnormality detection methods (e.g. Transformer-based), these have shown significant vulnerabilities to differences in data geometry. Changes in image resolution or observed field of view can result in inaccurate predictions, even with significant data pre-processing and augmentation. We propose a new spatial conditioning mechanism that enables models to adapt to and learn from varying data geometries, and apply it to a state-of-the-art Vector-Quantized Variational Autoencoder + Transformer abnormality detection model. We show that this spatial conditioning mechanism statistically significantly improves model performance on whole-body data compared with the same model without conditioning, while allowing the model to perform inference at varying data geometries.

16.
IEEE Int Conf Comput Vis Workshops ; 2023: 2394-2402, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-39205863

ABSTRACT

Anomaly detection and segmentation pose an important task across sectors ranging from medical imaging analysis to industry quality control. However, current unsupervised approaches require training data to not contain any anomalies, a requirement that can be especially challenging in many medical imaging scenarios. In this paper, we propose Iterative Latent Token Masking, a self-supervised framework derived from a robust statistics point of view, translating an iterative model fitting with M-estimators to the task of anomaly detection. In doing so, this allows the training of unsupervised methods on datasets heavily contaminated with anomalous images. Our method stems from prior work on using Transformers, combined with a Vector Quantized-Variational Autoencoder, for anomaly detection, a method with state-of-the-art performance when trained on normal (non-anomalous) data. More importantly, we utilise the token masking capabilities of Transformers to filter out suspected anomalous tokens from each sample's sequence in the training set in an iterative self-supervised process, thus overcoming the difficulties of highly anomalous training data. Our work also highlights shortfalls in current state-of-the-art self-supervised, self-trained and unsupervised models when faced with small proportions of anomalous training data. We evaluate our method on whole-body PET data in addition to showing its wider application in more common computer vision tasks such as the industrial MVTec Dataset. Using varying levels of anomalous training data, our method showcases a superior performance over several state-of-the-art models, drawing attention to the potential of this approach.
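The robust-statistics intuition behind iterative masking can be sketched with a much simpler estimator: fit, drop the samples the model handles worst, refit. Here an iteratively trimmed mean stands in for the paper's VQ-VAE + Transformer with token-level masking; the 1-D data and contamination level are invented for illustration.

```python
import numpy as np

def iterative_trimmed_mean(x, keep_frac=0.8, iters=5):
    """Iteratively refit a mean while masking the highest-residual samples
    (the analogue of masking suspected anomalous tokens each round)."""
    kept = x.copy()
    for _ in range(iters):
        mu = kept.mean()
        resid = np.abs(kept - mu)                # "reconstruction error"
        cutoff = np.quantile(resid, keep_frac)
        kept = kept[resid <= cutoff]             # mask suspected anomalies
    return kept.mean()

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 900)       # nominal training data
anomalies = rng.normal(10.0, 1.0, 100)   # 10% contamination
x = np.concatenate([normal, anomalies])

print(x.mean())                    # naive estimate, pulled toward the anomalies
print(iterative_trimmed_mean(x))   # far closer to the clean mean of 0
```

The same pattern scales up: the anomalous samples the current model fits worst are excluded from the next round of self-supervised training, so heavy contamination no longer poisons the learned notion of "normal".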

17.
IEEE Access ; 11: 34595-34602, 2023.
Article in English | MEDLINE | ID: mdl-38292346

ABSTRACT

Sleep is essential for physical and mental health. Polysomnography (PSG) procedures are labour-intensive and time-consuming, making diagnosing sleep disorders difficult. Automatic sleep staging using Machine Learning (ML)-based methods has been studied extensively, but frequently provides noisier predictions incompatible with typical manually annotated hypnograms. We propose an energy optimisation method to improve the quality of hypnograms generated by automatic sleep staging procedures. The method evaluates the system's total energy based on conditional probabilities for each epoch's stage and employs an energy minimisation procedure. It can be used as a meta-optimisation layer over the sleep stage sequences generated by any classifier that produces prediction probabilities. The method improved the accuracy of state-of-the-art Deep Learning models on the Sleep EDFx dataset by 4.0% and on the DRM-SUB dataset by 2.8%.
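One concrete way to realise such a meta-optimisation layer, sketched under the assumption that the energy combines per-epoch negative log probabilities with a constant penalty per stage transition (the paper's exact energy terms may differ), is dynamic programming over stage sequences:

```python
import numpy as np

def smooth_hypnogram(probs, switch_cost=1.0):
    """Minimum-energy stage sequence via Viterbi-style dynamic programming.

    probs: (n_epochs, n_stages) prediction probabilities from any classifier.
    Energy = sum of -log p(stage) per epoch + switch_cost per transition."""
    unary = -np.log(probs + 1e-12)            # low energy = high probability
    n, k = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # trans[prev, cur]: cost so far + penalty if the stage changes.
        trans = cost[:, None] + switch_cost * (1 - np.eye(k))
        back[i] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + unary[i]
    path = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):             # backtrack the optimal sequence
        path.append(int(back[i][path[-1]]))
    return path[::-1]

# A noisy sequence: mostly stage 0 with one spurious flicker to stage 1.
probs = np.array([[0.9, 0.1]] * 3 + [[0.45, 0.55]] + [[0.9, 0.1]] * 3)
print(smooth_hypnogram(probs, switch_cost=0.0))  # [0, 0, 0, 1, 0, 0, 0]
print(smooth_hypnogram(probs, switch_cost=1.0))  # [0, 0, 0, 0, 0, 0, 0]
```

With no transition penalty the per-epoch argmax is returned, flicker included; a modest penalty makes the two extra transitions more expensive than keeping the weakly preferred stage, yielding the smoother hypnogram a human scorer would produce.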

18.
Front Cardiovasc Med ; 9: 939680, 2022.
Article in English | MEDLINE | ID: mdl-35966566

ABSTRACT

Background and aims: Risk of stroke and dementia is markedly higher in people of South Asian and African Caribbean descent than white Europeans in the UK. This is unexplained by cardiovascular risk factors (CVRF). We hypothesized this might indicate accelerated early vascular aging (EVA) and that EVA might account for stronger associations between cerebral large artery characteristics and markers of small vessel disease. Methods: 360 participants in a tri-ethnic population-based study (120 per ethnic group) underwent cerebral and vertebral MRI. Length and median diameter of the basilar artery (BA) were derived from Time of Flight images, while white matter hyperintensities (WMH) volumes were obtained from T1 and FLAIR images. Associations between BA characteristics and CVRF were assessed using multivariable linear regression. Partial correlation coefficients between WMH load and BA characteristics were calculated after adjustment for CVRF and other potential confounders. Results: BA diameter was strongly associated with age in South Asians (+11.3 µm/year 95% CI = [3.05; 19.62]; p = 0.008), with unconvincing relationships in African Caribbeans (3.4 µm/year [-5.26, 12.12]; p = 0.436) or Europeans (2.6 µm/year [-5.75, 10.87]; p = 0.543). BA length was associated with age in South Asians (+0.34 mm/year [0.02; 0.65]; p = 0.037) and African Caribbeans (+0.39 mm/year [0.12; 0.65]; p = 0.005) but not Europeans (+0.08 mm/year [-0.26; 0.41]; p = 0.653). BA diameter (rho = 0.210; p = 0.022) and length (rho = 0.261; p = 0.004) were associated with frontal WMH load in South Asians (persisting after multivariable adjustment for CVRF). Conclusions: Compared with Europeans, the basilar artery undergoes more accelerated EVA in South Asians and in African Caribbeans, albeit to a lesser extent. Such EVA may contribute to the higher burden of CSVD observed in South Asians and excess risk of stroke, vascular cognitive impairment and dementia observed in these ethnic groups.

19.
Sci Rep ; 12(1): 11196, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35778615

ABSTRACT

Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK) with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC were (1.000 vs 1.000). Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC were (0.985 vs 0.918). DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.


Subject(s)
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Macula Lutea; Diabetic Retinopathy/diagnostic imaging; Humans; Mass Screening/methods; Retina/diagnostic imaging
20.
Patterns (N Y) ; 3(5): 100483, 2022 May 13.
Article in English | MEDLINE | ID: mdl-35607619

ABSTRACT

The value of biomedical research-a $1.7 trillion annual investment-is ultimately determined by its downstream, real-world impact, whose predictability from simple citation metrics remains unquantified. Here we sought to determine the comparative predictability of future real-world translation-as indexed by inclusion in patents, guidelines, or policy documents-from complex models of title/abstract-level content versus citations and metadata alone. We quantify predictive performance out of sample, ahead of time, across major domains, using the entire corpus of biomedical research captured by Microsoft Academic Graph from 1990-2019, encompassing 43.3 million papers. We show that citations are only moderately predictive of translational impact. In contrast, high-dimensional models of titles, abstracts, and metadata exhibit high fidelity (area under the receiver operating characteristic curve [AUROC] > 0.9), generalize across time and domain, and transfer to recognizing papers of Nobel laureates. We argue that content-based impact models are superior to conventional, citation-based measures and sustain a stronger evidence-based claim to the objective measurement of translational potential.
