Results 1 - 20 of 44
1.
Med Image Anal ; 97: 103301, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39146701

ABSTRACT

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance with better sample efficiency. This work introduces a novel approach to creating three-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our method uses a two-stage pretraining strategy based on vision transformers. The first stage encodes anatomical structures in generally healthy brains from a large-scale unlabeled dataset of multimodal brain magnetic resonance imaging (MRI) scans from 41,400 participants. This pretraining stage focuses on identifying key features such as the shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as the geometric shapes of tumors and lesions and their spatial placement within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for AI model training in neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both the model complexity and the volume of unlabeled training data derived from generally healthy brains; both factors enhance the accuracy and predictive capabilities of the model in neuroimage segmentation tasks. Our pretrained models and code are available at https://github.com/lab-smile/BrainSegFounder.
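The abstract reports segmentation gains on BraTS and ATLAS without spelling out the evaluation metric; such challenges are conventionally scored with the Dice overlap coefficient. A minimal sketch (function name ours, not from the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect match scores 1.0; disjoint masks score approximately 0.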


Subject(s)
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neuroimaging; Humans; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional/methods; Neuroimaging/methods; Brain Neoplasms/diagnostic imaging; Artificial Intelligence; Brain/diagnostic imaging; Algorithms
2.
ArXiv ; 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39184544

ABSTRACT

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to interpret and analyze neurological data. This study introduces a novel approach to creating medical foundation models, built on a large-scale multi-modal magnetic resonance imaging (MRI) dataset derived from 41,400 participants. Our method involves a novel two-stage pretraining approach using vision transformers. The first stage is dedicated to encoding anatomical structures in generally healthy brains, identifying key features such as the shapes and sizes of different brain regions. The second stage concentrates on spatial information, encompassing aspects such as location and the relative positioning of brain structures. We rigorously evaluate our model, BrainFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both the complexity of the model and the volume of unlabeled training data derived from generally healthy brains, which enhances the accuracy and predictive capabilities of the model in complex neuroimaging tasks with MRI. This research provides transformative insights and practical applications in healthcare and takes substantial steps towards the creation of foundation models for medical AI. Our pretrained models and training code can be found at https://github.com/lab-smile/GatorBrain.

3.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38886164

ABSTRACT

Morphological profiling is a valuable tool in phenotypic drug discovery. The advent of high-throughput automated imaging has enabled the capture of a wide range of morphological features of cells or organisms in response to perturbations at single-cell resolution. Concurrently, significant advances in machine learning and deep learning, especially in computer vision, have led to substantial improvements in analyzing large-scale high-content images at high throughput. These efforts have facilitated understanding of compound mechanisms of action, drug repurposing, and characterization of cell morphodynamics under perturbation, and have ultimately contributed to the development of novel therapeutics. In this review, we provide a comprehensive overview of recent advances in the field of morphological profiling. We summarize the image profiling analysis workflow, survey a broad spectrum of analysis strategies encompassing feature engineering- and deep learning-based approaches, and introduce publicly available benchmark datasets. We place a particular emphasis on the application of deep learning in this pipeline, covering cell segmentation, image representation learning, and multimodal learning. Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.
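A core step of the image profiling workflow summarized above is aggregating single-cell measurements into a per-sample profile. A minimal sketch of one common choice, the per-well median (function name ours; the review surveys many alternatives):

```python
import numpy as np

def aggregate_profiles(cell_features, well_ids):
    """Collapse single-cell morphological features (cells x features)
    into per-well profiles by taking the median over each well's cells,
    a common aggregation step in image-based profiling."""
    cell_features = np.asarray(cell_features, dtype=float)
    profiles = {}
    for well in sorted(set(well_ids)):
        mask = np.array([w == well for w in well_ids])
        profiles[well] = np.median(cell_features[mask], axis=0)
    return profiles
```

The median is robust to segmentation outliers; mean or covariance-based aggregation are also used in practice.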


Subject(s)
Deep Learning; Drug Discovery; Drug Discovery/methods; Humans; Image Processing, Computer-Assisted/methods; Machine Learning
4.
Sci Rep ; 14(1): 7710, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565579

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disorder and the leading cause of dementia. Early diagnosis is critical for patients to benefit from potential intervention and treatment. The retina has emerged as a plausible diagnostic site for AD detection owing to its anatomical connection with the brain. However, existing AI models for this purpose have yet to provide a rational explanation behind their decisions and have not been able to infer the stage of the disease's progression. To this end, we propose a novel model-agnostic explainable-AI framework, called Granular Neuron-level Explainer (LAVA), an interpretation prototype that probes into intermediate layers of convolutional neural network (CNN) models to directly assess the continuum of AD from retinal imaging without the need for longitudinal or clinical evaluations. This approach aims to validate retinal vasculature as a biomarker and diagnostic modality for evaluating Alzheimer's disease. Leveraging UK Biobank cognitive tests and vascular morphological features, we demonstrate the significant promise and effectiveness of LAVA in identifying AD stages across the progression continuum.
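Probing intermediate layers, as the explainer above does for CNNs, amounts to capturing each layer's activations during a forward pass rather than only the final output. A toy illustration under that assumption (names and architecture ours, far simpler than LAVA):

```python
import numpy as np

def forward_with_activations(x, weights):
    """Toy feed-forward network that records every intermediate layer's
    activations, mimicking how a neuron-level explainer probes the
    internal layers of a model rather than only its final output."""
    activations = []
    h = np.asarray(x, dtype=float)
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU layer
        activations.append(h.copy())
    return h, activations
```

The recorded per-layer activations are what a granular explainer would then analyze neuron by neuron.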


Subject(s)
Alzheimer Disease; Neurodegenerative Diseases; Humans; Alzheimer Disease/diagnostic imaging; Fundus Oculi; Retina/diagnostic imaging; Neurons; Magnetic Resonance Imaging
5.
Methods Mol Biol ; 2757: 383-445, 2024.
Article in English | MEDLINE | ID: mdl-38668977

ABSTRACT

The emergence and development of single-cell RNA sequencing (scRNA-seq) techniques enable researchers to perform large-scale analysis of transcriptomic profiles at single-cell resolution. Unsupervised clustering of scRNA-seq data is central to most studies, as it is essential for identifying novel cell types and their gene expression programs. Although an increasing number of algorithms and tools are available for scRNA-seq analysis, a practical guide for users to navigate the landscape remains scarce. This chapter presents an overview of the scRNA-seq data analysis pipeline, covering quality control, batch effect correction, data standardization, cell clustering and visualization, cluster correlation analysis, and marker gene identification. Taking two broadly used analysis packages, Scanpy and MetaCell, as examples, we provide a hands-on guideline and comparison of best practices for the essential analysis steps and data visualization. Additionally, we compare both packages and their algorithms using a scRNA-seq dataset from the ctenophore Mnemiopsis leidyi, a representative of one of the earliest-branching animal lineages that is critical to understanding the origin and evolution of animal novelties. This pipeline can also be helpful for analyses of other taxa, especially prebilaterian animals such as placozoans and Porifera, where these tools are still under development.
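The standardization step of the pipeline above usually begins with library-size normalization and a log transform (in Scanpy, `sc.pp.normalize_total` followed by `sc.pp.log1p`). A dependency-free sketch of that step, assuming a cells-by-genes count matrix:

```python
import numpy as np

def normalize_counts(counts, target_sum=1e4):
    """Library-size normalization followed by log1p: the standard first
    steps of a Scanpy-style pipeline on a cells x genes count matrix."""
    counts = np.asarray(counts, dtype=float)
    lib_size = counts.sum(axis=1, keepdims=True)  # total counts per cell
    return np.log1p(counts / lib_size * target_sum)
```

After this transform, cells with different sequencing depths become directly comparable before clustering.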


Subject(s)
Algorithms; Gene Expression Profiling; Single-Cell Analysis; Software; Single-Cell Analysis/methods; Animals; Gene Expression Profiling/methods; Sequence Analysis, RNA/methods; Computational Biology/methods; Cluster Analysis; Transcriptome/genetics
6.
PLoS Comput Biol ; 20(4): e1011351, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38598563

ABSTRACT

In the midst of an outbreak or sustained epidemic, reliable prediction of transmission risks and patterns of spread is critical to inform public health programs. Projections of transmission growth or decline among specific risk groups can aid in optimizing interventions, particularly when resources are limited. Phylogenetic trees have been widely used in the detection of transmission chains and high-risk populations. Moreover, tree topology and the incorporation of population parameters (phylodynamics) can be useful in reconstructing the evolutionary dynamics of an epidemic across space and time among individuals. We now demonstrate the utility of phylodynamic trees for transmission modeling and forecasting, developing a phylogeny-based deep learning system, referred to as DeepDynaForecast. Our approach leverages a primal-dual graph learning structure with shortcut multi-layer aggregation, which is suited for the early identification and prediction of transmission dynamics in emerging high-risk groups. We demonstrate the accuracy of DeepDynaForecast using simulated outbreak data and the utility of the learned model using empirical, large-scale data from the human immunodeficiency virus epidemic in Florida between 2012 and 2020. Our framework is available as open-source software (MIT license) at github.com/lab-smile/DeepDynaForcast.
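The phylogeny-based learning described above operates on tree-structured data. As a drastically simplified, illustrative stand-in for that idea (not the paper's primal-dual architecture; all names ours), one round of mean-neighbor message passing over a tree can be written as:

```python
import numpy as np

def tree_message_pass(features, edges, steps=1):
    """Rounds of mean-over-neighborhood message passing on a tree:
    each node's feature vector is replaced by the mean of itself and
    its tree neighbors, propagating signal along the phylogeny."""
    feats = np.asarray(features, dtype=float)
    n = feats.shape[0]
    adj = np.eye(n)  # self-loop so a node keeps part of its own signal
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    adj /= adj.sum(axis=1, keepdims=True)  # row-normalize to a mean
    for _ in range(steps):
        feats = adj @ feats
    return feats
```

Repeated rounds spread information across the tree, which is the intuition behind using phylodynamic structure for transmission forecasting.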


Subject(s)
Computational Biology; Deep Learning; Epidemics; Phylogeny; Humans; Epidemics/statistics & numerical data; Computational Biology/methods; HIV Infections/transmission; HIV Infections/epidemiology; Software; Florida/epidemiology; Algorithms; Computer Simulation; Disease Outbreaks/statistics & numerical data
7.
Article in English | MEDLINE | ID: mdl-38465203

ABSTRACT

Whole-head segmentation from magnetic resonance images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This foundation paves the path for computer-aided solutions in many fields, particularly non-invasive brain stimulation. Most current automatic head segmentation tools were developed using data from healthy young adults. Thus, they may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that have undergone meticulous manual correction and review. Each T1-weighted MRI volume is segmented into 11 tissue types: white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of the number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff Distance of 0.21 versus 0.36 for the runner-up. GRACE can segment a whole-head MRI in about 3 seconds, whereas the fastest competing software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults' T1-weighted MRI scans with favorable accuracy and speed. The trained GRACE model is optimized on older adults' heads to enable high-precision modeling of age-related brain disorders. To support open science, the GRACE code and trained weights are available to the research community at https://github.com/lab-smile/GRACE.
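The Hausdorff Distance used to compare GRACE with the runner-up measures the worst-case boundary disagreement between two segmentations. A minimal sketch on raw point coordinates (in practice it is computed on segmentation surfaces, often in a robust percentile variant):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Lower values mean closer boundaries, which is why GRACE's 0.21 beats the runner-up's 0.36.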

8.
PLoS Comput Biol ; 20(3): e1011943, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38547053

ABSTRACT

Recent neuroimaging studies have shown that the visual cortex plays an important role in representing the affective significance of visual input. The origin of these affect-specific visual representations is debated: are they intrinsic to the visual system, or do they arise through reentry from frontal emotion-processing structures such as the amygdala? We examined this problem by combining convolutional neural network (CNN) models of the human ventral visual cortex pre-trained on ImageNet with two datasets of affective images. Our results show that in all layers of the CNN models there were artificial neurons that responded consistently and selectively to neutral, pleasant, or unpleasant images, and that lesioning these neurons by setting their output to zero, or enhancing them by increasing their gain, led to decreased or increased emotion recognition performance, respectively. These results support the idea that the visual system may have an intrinsic ability to represent the affective significance of visual input and suggest that CNNs offer a fruitful platform for testing neuroscientific theories.
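The lesioning and gain manipulations described above can be illustrated on a toy two-layer network (architecture and names ours, standing in for the study's CNNs): a hidden unit is either silenced or amplified and the effect on the output is observed.

```python
import numpy as np

def run_net(x, W1, W2, lesion=None, gain=None):
    """Toy two-layer network in which a hidden unit can be 'lesioned'
    (output forced to zero) or 'enhanced' (output multiplied by a gain),
    echoing the manipulations applied to artificial neurons in the study."""
    h = np.maximum(np.asarray(x, dtype=float) @ W1, 0.0)
    if lesion is not None:
        h[..., lesion] = 0.0          # silence the chosen unit
    if gain is not None:
        unit, factor = gain
        h[..., unit] *= factor        # boost the chosen unit
    return h @ W2
```

Comparing the lesioned or enhanced output against the intact one is the logic of the decreased/increased recognition-performance result.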


Subject(s)
Neural Networks, Computer; Visual Cortex; Humans; Visual Cortex/physiology; Neuroimaging; Neurons/physiology; Recognition, Psychology
9.
Sci Rep ; 14(1): 3637, 2024 02 13.
Article in English | MEDLINE | ID: mdl-38351326

ABSTRACT

Parkinson's disease is the world's fastest-growing neurological disorder. Research to elucidate the mechanisms of Parkinson's disease and to automate diagnostics would greatly improve the treatment of patients. Current diagnostic methods are expensive and have limited availability. Considering the insidious, preclinical onset and progression of the disease, a desirable screening method should be diagnostically accurate even before symptom onset to allow early medical intervention. We highlight retinal fundus imaging, often termed a window to the brain, as a diagnostic screening modality for Parkinson's disease. We conducted a systematic evaluation of conventional machine learning and deep learning techniques for classifying Parkinson's disease from UK Biobank fundus imaging. Our results suggest that individuals with Parkinson's disease can be differentiated from age- and gender-matched healthy subjects with 68% accuracy. This accuracy is maintained when predicting either prevalent or incident Parkinson's disease. Explainability and trustworthiness are enhanced by visual attribution maps of localized biomarkers and by quantified metrics of model robustness to data perturbations.


Subject(s)
Deep Learning; Parkinson Disease; Humans; Parkinson Disease/diagnostic imaging; Parkinson Disease/epidemiology; UK Biobank; Biological Specimen Banks; Fundus Oculi
10.
ArXiv ; 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38168460

ABSTRACT

Morphological profiling is a valuable tool in phenotypic drug discovery. The advent of high-throughput automated imaging has enabled the capture of a wide range of morphological features of cells or organisms in response to perturbations at single-cell resolution. Concurrently, significant advances in machine learning and deep learning, especially in computer vision, have led to substantial improvements in analyzing large-scale high-content images at high throughput. These efforts have facilitated understanding of compound mechanism-of-action (MOA), drug repurposing, and characterization of cell morphodynamics under perturbation, and have ultimately contributed to the development of novel therapeutics. In this review, we provide a comprehensive overview of recent advances in the field of morphological profiling. We summarize the image profiling analysis workflow, survey a broad spectrum of analysis strategies encompassing feature engineering- and deep learning-based approaches, and introduce publicly available benchmark datasets. We place a particular emphasis on the application of deep learning in this pipeline, covering cell segmentation, image representation learning, and multimodal learning. Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.

11.
bioRxiv ; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-37163104

ABSTRACT

Recent neuroimaging studies have shown that the visual cortex plays an important role in representing the affective significance of visual input. The origin of these affect-specific visual representations is debated: are they intrinsic to the visual system, or do they arise through reentry from frontal emotion-processing structures such as the amygdala? We examined this problem by combining convolutional neural network (CNN) models of the human ventral visual cortex pre-trained on ImageNet with two datasets of affective images. Our results show that (1) in all layers of the CNN models, there were artificial neurons that responded consistently and selectively to neutral, pleasant, or unpleasant images and (2) lesioning these neurons by setting their output to 0 or enhancing them by increasing their gain led to decreased or increased emotion recognition performance, respectively. These results support the idea that the visual system may have an intrinsic ability to represent the affective significance of visual input and suggest that CNNs offer a fruitful platform for testing neuroscientific theories.

12.
NPJ Digit Med ; 6(1): 211, 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37978250

ABSTRACT

While machine learning (ML) has shown great promise in medical diagnostics, a major challenge is that ML models do not always perform equally well across ethnic groups. This is alarming for women's health, as health disparities that vary by ethnicity already exist. Bacterial vaginosis (BV) is a common vaginal syndrome among women of reproductive age with clear diagnostic differences among ethnic groups. Here, we investigate the ability of four ML algorithms to diagnose BV. We determine the fairness of predictions of asymptomatic BV using 16S rRNA sequencing data from Asian, Black, Hispanic, and white women. The performance of general-purpose ML models varies by ethnicity. When evaluating false positive and false negative rates, we find that the models perform least effectively for Hispanic and Asian women. Models generally perform best for white women and worst for Asian women. These findings demonstrate a need for improved methodologies to increase model fairness for predicting BV.
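The per-group false positive and false negative rates compared in this study can be computed directly from labels, predictions, and group membership. A minimal sketch (function name ours):

```python
def group_error_rates(y_true, y_pred, groups):
    """False-positive and false-negative rate per demographic group,
    the kind of fairness metric compared across ethnicities here.
    Returns {group: (FPR, FNR)}."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        neg = sum(1 for i in idx if y_true[i] == 0)
        pos = sum(1 for i in idx if y_true[i] == 1)
        rates[g] = (fp / neg if neg else 0.0, fn / pos if pos else 0.0)
    return rates
```

Large gaps between groups' rates signal the kind of unfairness the study reports.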

13.
Brain Stimul ; 16(3): 969-974, 2023.
Article in English | MEDLINE | ID: mdl-37279860

ABSTRACT

BACKGROUND: Transcranial direct current stimulation (tDCS) paired with cognitive training (CT) is widely investigated as a therapeutic tool to enhance cognitive function in older adults with and without neurodegenerative disease. Prior research demonstrates that the level of benefit from tDCS paired with CT varies from person to person, likely due to individual differences in neuroanatomical structure. OBJECTIVE: The current study aims to develop a method to objectively optimize and personalize current dosage to maximize the functional gains of non-invasive brain stimulation. METHODS: A support vector machine (SVM) model was trained to predict treatment response based on computational models of current density in a sample dataset (n = 14). Feature weights of the deployed SVM were used in a weighted Gaussian Mixture Model (GMM) to maximize the likelihood of converting tDCS non-responders to responders by finding the most optimum electrode montage and applied current intensity (optimized models). RESULTS: Current distributions optimized by the proposed SVM-GMM model demonstrated 93% voxel-wise coherence within target brain regions between the originally non-responders and responders. The optimized current distribution in original non-responders was 3.38 standard deviations closer to the current dose of responders compared to the pre-optimized models. Optimized models also achieved an average treatment response likelihood and normalized mutual information of 99.993% and 91.21%, respectively. Following tDCS dose optimization, the SVM model successfully predicted all tDCS non-responders with optimized doses as responders. CONCLUSIONS: The results of this study serve as a foundation for a custom dose optimization strategy towards precision medicine in tDCS to improve outcomes in cognitive decline remediation for older adults.
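The optimization idea above, scoring candidate electrode montages by how closely their current distribution resembles that of responders, can be caricatured with a feature-weighted Gaussian log-likelihood. This is a simplified stand-in for the paper's SVM-weighted GMM (all names and the diagonal-Gaussian assumption are ours):

```python
import numpy as np

def weighted_gaussian_score(dose_features, mean, var, weights):
    """Feature-weighted log-likelihood of a candidate tDCS dose under a
    diagonal Gaussian fitted to responders' current-density features;
    a montage search would keep the highest-scoring candidate."""
    x = np.asarray(dose_features, dtype=float)
    mean = np.asarray(mean, dtype=float)
    var = np.asarray(var, dtype=float)
    per_dim = -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))
    return float(np.dot(np.asarray(weights, dtype=float), per_dim))
```

Doses closer to the responder distribution score higher, mirroring the paper's goal of moving non-responders toward responder-like current distributions.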


Subject(s)
Neurodegenerative Diseases; Transcranial Direct Current Stimulation; Humans; Aged; Transcranial Direct Current Stimulation/methods; Cognition; Brain/physiology; Electrodes
14.
15.
Softw Impacts ; 152023 Mar.
Article in English | MEDLINE | ID: mdl-37091721

ABSTRACT

Deep learning has achieved state-of-the-art performance across medical imaging tasks; however, model calibration is often not considered. Uncalibrated models are potentially dangerous in high-risk applications because the user does not know when they will fail. This paper therefore proposes a novel domain-aware loss function to calibrate deep learning models. The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain. The approach thus improves calibration while also ensuring that the model makes less risky errors even when it is incorrect. The code for this software is available at https://github.com/lab-smile/DOMINO.
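A class-wise penalty of the kind described can be sketched as cross-entropy plus a term that charges extra for probability mass placed on classes dissimilar to the true one. This is a sketch of the idea, not DOMINO's exact published formulation:

```python
import numpy as np

def domain_aware_loss(probs, label, penalty):
    """Cross-entropy plus a class-wise penalty term: probability mass
    placed on classes that are dissimilar to the true class (large
    penalty[label][c]) costs extra, steering errors toward similar,
    less risky classes."""
    probs = np.asarray(probs, dtype=float)
    ce = -np.log(probs[label] + 1e-12)
    pen = float(np.dot(np.asarray(penalty, dtype=float)[label], probs))
    return float(ce + pen)
```

With a zero penalty matrix this reduces to plain cross-entropy; a domain-derived similarity matrix recovers the "less risky errors" behavior.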

16.
Neurobiol Aging ; 121: 166-178, 2023 01.
Article in English | MEDLINE | ID: mdl-36455492

ABSTRACT

Extracellular amyloid plaques in gray matter are the earliest pathological marker for Alzheimer's disease (AD), followed by abnormal tau protein accumulation. The link between diffusion changes in gray matter, amyloid and tau pathology, and cognitive decline is not well understood. We first performed cross-sectional analyses on T1-weighted imaging, diffusion MRI, and amyloid and tau PET from the ADNI 2/3 database. We evaluated cortical volume, free-water, fractional anisotropy (FA), and amyloid and tau SUVRs in 171 cognitively normal, 103 MCI, and 44 AD individuals. When the three groups were combined, increasing amyloid burden was associated with reduced extracellular free-water in the entorhinal cortex and hippocampus in those with amyloid-negative status, whereas increasing tau burden was associated with increased extracellular free-water regardless of amyloid status. Next, we found that for the MCI subjects, diffusion measures (free-water, FA) alone predicted the MMSE score two years later with a high r-squared value (87%), compared with tau SUVRs (27%), T1 volume (36%), and amyloid SUVRs (75%). Diffusion measures thus represent a potent non-invasive marker for predicting cognitive decline.


Subject(s)
Alzheimer Disease; Amyloidosis; Cognitive Dysfunction; Humans; tau Proteins/metabolism; Amyloid beta-Peptides/metabolism; Gray Matter/pathology; Cross-Sectional Studies; Cognitive Dysfunction/diagnostic imaging; Alzheimer Disease/pathology; Amyloid/metabolism; Amyloidogenic Proteins/metabolism; Diffusion Magnetic Resonance Imaging; Biomarkers; Water
17.
BMC Med Inform Decis Mak ; 22(Suppl 3): 255, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36167551

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a leading cause of blindness in American adults. If detected, DR can be treated to prevent further damage and blindness. There is increasing interest in developing artificial intelligence (AI) technologies to help detect DR using electronic health records. The lesion-related information documented in fundus image reports is a valuable resource that could help diagnose DR in clinical decision support systems. However, most studies of AI-based DR diagnosis are based mainly on medical images; few studies have explored the lesion-related information captured in free-text image reports. METHODS: In this study, we examined two state-of-the-art transformer-based natural language processing (NLP) models, BERT and RoBERTa, and compared them with a recurrent neural network implemented using long short-term memory (LSTM) for extracting DR-related concepts from clinical narratives. We identified four categories of DR-related clinical concepts (lesions, eye parts, laterality, and severity), developed annotation guidelines, annotated a DR corpus of 536 image reports, and developed transformer-based NLP models for clinical concept extraction and relation extraction. We also examined relation extraction under two settings: a 'gold-standard' setting, in which gold-standard concepts were used, and an end-to-end setting. RESULTS: For concept extraction, the BERT model pretrained with the MIMIC III dataset achieved the best performance (0.9503 and 0.9645 for strict and lenient evaluation, respectively). For relation extraction, the BERT model pretrained on general English text achieved the best strict/lenient F1-score of 0.9316. The end-to-end system, BERT_general_e2e, achieved the best strict/lenient F1-scores of 0.8578 and 0.8881, respectively. Another end-to-end system based on the RoBERTa architecture, RoBERTa_general_e2e, matched BERT_general_e2e on strict scores.
CONCLUSIONS: This study demonstrated the effectiveness of transformer-based NLP models for clinical concept extraction and relation extraction. Our results show that it is necessary to pretrain transformer models on clinical text to optimize performance for clinical concept extraction, whereas for relation extraction, transformers pretrained on general English text perform better.
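The strict and lenient F1-scores reported above differ only in how a predicted span is matched to a gold span: strict requires identical boundaries, lenient accepts any overlap. A minimal sketch of that scoring (function name ours):

```python
def span_f1(gold, pred, lenient=False):
    """Micro F1 over extracted (start, end) spans: 'strict' requires an
    exact boundary match, 'lenient' accepts any character overlap."""
    def match(g, p):
        return (g[0] < p[1] and p[0] < g[1]) if lenient else g == p
    tp_p = sum(1 for p in pred if any(match(g, p) for g in gold))
    tp_g = sum(1 for g in gold if any(match(g, p) for p in pred))
    precision = tp_p / len(pred) if pred else 0.0
    recall = tp_g / len(gold) if gold else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

A prediction off by one character fails strict matching but still counts under the lenient score, which is why lenient values run higher.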


Subject(s)
Diabetes Mellitus; Diabetic Retinopathy; Artificial Intelligence; Blindness; Diabetic Retinopathy/diagnosis; Electronic Health Records; Humans; Natural Language Processing
18.
Front Radiol ; 2: 904601, 2022.
Article in English | MEDLINE | ID: mdl-37492656

ABSTRACT

A body of studies has proposed obtaining high-quality images from low-dose, noisy computed tomography (CT) scans to reduce radiation exposure. However, these studies are designed for population-level data without considering variation across CT devices and individuals, which limits their performance, especially for ultra-low-dose CT imaging. Here, we propose PIMA-CT, a physical anthropomorphic phantom model integrated with an unsupervised learning framework, using a novel deep learning technique called Cyclic Simulation and Denoising (CSD), to address these limitations. We first acquired paired low-dose and standard-dose CT scans of the phantom and then developed two generative neural networks: a noise simulator and a denoiser. The simulator extracts real low-dose noise and tissue features from two separate image spaces (e.g., low-dose phantom scans and standard-dose patient scans) into a unified feature space. Meanwhile, the denoiser provides feedback to the simulator on the quality of the generated noise. In this way, the simulator and denoiser interact cyclically to optimize network learning, enabling the denoiser to simultaneously remove noise and restore tissue features. We thoroughly evaluate our method on both real low-dose noise and Gaussian-simulated low-dose noise. The results show that CSD outperforms a state-of-the-art denoising algorithm without using any labeled data (actual patients' low-dose CT scans) or simulated low-dose CT scans. This study may shed light on incorporating physical models in medical imaging, especially for the restoration of ultra-low-dose CT scans.

19.
Front Aging Neurosci ; 13: 758298, 2021.
Article in English | MEDLINE | ID: mdl-34950021

ABSTRACT

Background and Objectives: Prediction of decline to dementia using objective biomarkers in high-risk patients with amnestic mild cognitive impairment (aMCI) has immense utility. Our objective was to use multimodal MRI to (1) determine whether accurate and precise prediction of dementia conversion could be achieved using baseline data alone, and (2) generate a map of the brain regions implicated in longitudinal decline to dementia. Methods: Participants meeting criteria for aMCI at baseline (N = 55) were classified at follow-up as remaining stable/improved in their diagnosis (N = 41) or declining to dementia (N = 14). Baseline T1 structural MRI and resting-state fMRI (rsfMRI) data were combined to train a semi-supervised support vector machine (SVM) that separated stable participants from those who declined at follow-up with maximal margin. Cross-validated model performance metrics and MRI feature weights were calculated, including the strength of each brain voxel in distinguishing the two groups. Results: Total model accuracy for predicting diagnostic change at follow-up was 92.7% using baseline T1 imaging alone, 83.5% using rsfMRI alone, and 94.5% when combining the T1 and rsfMRI modalities. Feature weights that survived the p < 0.01 threshold for separation of the two groups revealed the strongest margin in the combined structural and functional regions underlying the medial temporal lobes in the limbic system. Discussion: An MRI-driven SVM model demonstrates accurate and precise prediction of later dementia conversion in aMCI patients. The multimodal regions driving this prediction were strongest in the medial temporal regions of the limbic system, consistent with the literature on the progression of Alzheimer's disease.

20.
Neuroimage ; 245: 118710, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34780917

ABSTRACT

In addition to the well-established somatotopy in the pre- and post-central gyrus, there is now strong evidence that somatotopic organization is evident across other regions in the sensorimotor network. This raises several experimental questions: To what extent is activity in the sensorimotor network effector-dependent and effector-independent? How important is the sensorimotor cortex when predicting the motor effector? Is there redundancy in the distributed somatotopically organized network such that removing one region has little impact on classification accuracy? To answer these questions, we developed a novel experimental approach. fMRI data were collected while human subjects performed a precisely controlled force generation task separately with their hand, foot, and mouth. We used a simple linear iterative clustering (SLIC) algorithm to segment whole-brain beta coefficient maps to build an adaptive brain parcellation and then classified effectors using extreme gradient boosting (XGBoost) based on parcellations at various spatial resolutions. This allowed us to understand how data-driven adaptive brain parcellation granularity altered classification accuracy. Results revealed effector-dependent activity in regions of the post-central gyrus, precentral gyrus, and paracentral lobule. SMA, regions of the inferior and superior parietal lobule, and the cerebellum each contained effector-dependent and effector-independent representations. Machine learning analyses showed that increasing the spatial resolution of the data-driven model increased classification accuracy, which reached 94% with 1755 supervoxels. Our SLIC-based supervoxel parcellation outperformed classification analyses using established brain templates and random simulations. Occlusion experiments further demonstrated redundancy across the sensorimotor network when classifying effectors. Our observations extend our understanding of effector-dependent and effector-independent organization within the human brain and provide new insight into the functional neuroanatomy required to predict the motor effector used in a motor control task.
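The supervoxel pipeline above turns a voxel-wise beta-coefficient map into per-parcel features before classification. A minimal sketch of that aggregation step, assuming SLIC labels are already available (the study then feeds such features to XGBoost; plain averaging is our simplification):

```python
import numpy as np

def supervoxel_features(beta_map, labels):
    """Average a voxel-wise beta-coefficient map within each supervoxel
    label, yielding one feature per parcel for a downstream classifier."""
    beta = np.asarray(beta_map, dtype=float).ravel()
    labs = np.asarray(labels).ravel()
    return {int(l): float(beta[labs == l].mean()) for l in np.unique(labs)}
```

Finer parcellations produce more features per map, which is the granularity dimension the study varies.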


Subject(s)
Brain Mapping/methods; Machine Learning; Magnetic Resonance Imaging; Movement/physiology; Psychomotor Performance/physiology; Sensorimotor Cortex/diagnostic imaging; Algorithms; Female; Humans; Image Processing, Computer-Assisted; Male; Young Adult