1.
Front Neurosci ; 17: 1150109, 2023.
Article in English | MEDLINE | ID: mdl-37554294

ABSTRACT

Psychotropic drugs and transcranial magnetic stimulation (TMS) are effective for treating certain psychiatric conditions. Drugs and TMS have also been used as tools to explore the relationship between brain function and behavior in humans. Combining centrally acting drugs and TMS has proven useful for characterizing the neural basis of movement. This combined intervention approach also holds promise for improving our understanding of the mechanisms underlying disordered behavior associated with psychiatric conditions, including addiction, though challenges exist. For example, altered neocortical function has been implicated in substance use disorder, but the relationship between acute neuromodulation of neocortex with TMS and direct effects on addiction-related behaviors is not well established. We propose that the combination of human behavioral pharmacology methods with TMS can be leveraged to help establish these links. This perspective article describes an ongoing study that combines the administration of delta-9-tetrahydrocannabinol (THC), the main psychoactive compound in cannabis, with neuroimaging-guided TMS in individuals with problematic cannabis use. The study examines the impact of left dorsolateral prefrontal cortex (DLPFC) stimulation on cognitive outcomes affected by THC intoxication, including the subjective response to THC and the impairing effects of THC on behavioral performance. A framework for integrating TMS with human behavioral pharmacology methods is presented, along with key details of the study design. We also discuss challenges, alternatives, and future directions.

2.
Front Neurosci ; 17: 1302132, 2023.
Article in English | MEDLINE | ID: mdl-38130696

ABSTRACT

Introduction: Post-stroke dysphagia is common and associated with significant morbidity and mortality, making bedside screening clinically important. Using voice as a biomarker coupled with deep learning has the potential to improve patient access to screening and mitigate the subjectivity associated with detecting voice change, a component of several validated screening protocols. Methods: In this single-center study, we developed a proof-of-concept model for automated dysphagia screening and evaluated its performance on training and testing cohorts. Patients admitted to a comprehensive stroke center who were primary English speakers and could follow commands without significant aphasia were recruited on a rolling basis. The primary outcome was classification as a pass or fail equivalent, using a dysphagia screening test as the label. Voice data were recorded from patients who spoke a standardized set of vowels, words, and sentences from the National Institutes of Health Stroke Scale. Seventy patients were recruited and 68 were included in the analysis, 40 in the training cohort and 28 in the testing cohort. Speech from patients was segmented into 1,579 audio clips, from which 6,655 Mel-spectrogram images were computed and used as inputs for deep-learning models (DenseNet and ConvNeXt, separately and together). Clip-level and participant-level swallowing status predictions were obtained through a voting method. Results: The models demonstrated clip-level dysphagia screening sensitivity of 71% and specificity of 77% (F1 = 0.73, AUC = 0.80 [95% CI: 0.78-0.82]). At the participant level, sensitivity and specificity were 89% and 79%, respectively (F1 = 0.81, AUC = 0.91 [95% CI: 0.77-1.05]). Discussion: This study is the first to demonstrate the feasibility of applying deep learning to classify vocalizations for detecting post-stroke dysphagia. Our findings suggest potential for enhancing dysphagia screening in clinical settings. https://github.com/UofTNeurology/masa-open-source.
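Editor's note: the clip-to-participant pipeline described in this abstract can be illustrated compactly. The sketch below is not the authors' released code (see the linked masa-open-source repository for that); it is a minimal PyTorch/torchaudio example of the same idea, in which the sample rate, Mel bin count, normalization, and majority-vote threshold are assumed values chosen for illustration.

```python
# Illustrative sketch only, not the code at github.com/UofTNeurology/masa-open-source:
# clip-level classification of Mel-spectrogram "images" with a DenseNet backbone,
# followed by participant-level majority voting over clip predictions.
import torch
import torchaudio
import torchvision

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def clip_to_image(wav_path: str) -> torch.Tensor:
    """Convert one audio clip into a 3-channel Mel-spectrogram tensor."""
    waveform, sr = torchaudio.load(wav_path)
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)
    spec = to_db(mel(waveform.mean(dim=0, keepdim=True)))  # mono -> (1, n_mels, frames)
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)      # simple per-clip normalization
    return spec.repeat(3, 1, 1)                            # DenseNet expects 3 channels

# Binary classifier: pass vs. fail equivalent on the dysphagia screen.
model = torchvision.models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)
model.eval()

@torch.no_grad()
def participant_prediction(clip_paths: list[str]) -> int:
    """Majority vote over per-clip predictions for one participant."""
    votes = []
    for path in clip_paths:
        logits = model(clip_to_image(path).unsqueeze(0))   # shape (1, 2)
        votes.append(int(logits.argmax(dim=1)))
    return int(sum(votes) > len(votes) / 2)                # 1 = screen-fail equivalent
```

In practice the backbone would be trained on the labeled clips (the study also reports a ConvNeXt variant and an ensemble); the voting step is what lifts clip-level predictions to a participant-level screening decision.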

3.
Front Neurosci ; 14: 290, 2020.
Article in English | MEDLINE | ID: mdl-32317917

ABSTRACT

Speech production is a hierarchical mechanism involving the synchronization of the brain and the oral articulators, in which the intention of linguistic concepts is transformed into meaningful sounds. Individuals with locked-in syndrome (fully paralyzed but aware) lose motor ability completely, including articulation and even eye movement. The neural pathway may be the only option for these patients to resume some level of communication. Current brain-computer interfaces (BCIs) use patients' visual and attentional correlates to build communication, resulting in a slow communication rate (a few words per minute). Direct decoding of imagined speech from neural signals (and then driving a speech synthesizer) has the potential for a higher communication rate. In this study, we investigated the decoding of five imagined and spoken phrases from single-trial, non-invasive magnetoencephalography (MEG) signals collected from eight adult subjects. Two machine learning algorithms were used: an artificial neural network (ANN) with statistical features as the baseline approach, and convolutional neural networks (CNNs) applied to the spatial, spectral, and temporal features extracted from the MEG signals. Experimental results indicated that imagined and spoken phrases can be decoded directly from neuromagnetic signals. CNNs were found to be highly effective, with average decoding accuracies of up to 93% for imagined and 96% for spoken phrases.
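Editor's note: as a rough illustration of the CNN decoding idea in this abstract (not the authors' architecture), the sketch below maps a single-trial sensor-by-time MEG array to one of five phrase classes. The sensor count (204), trial length (1,000 samples), and layer sizes are assumptions, and the spectral feature extraction used in the study is omitted for brevity.

```python
# Minimal sketch, assuming raw sensor-by-time input rather than the study's
# spatial/spectral/temporal feature maps: a small CNN phrase decoder.
import torch
import torch.nn as nn

class PhraseCNN(nn.Module):
    def __init__(self, n_sensors: int = 204, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_sensors, 1)),           # spatial filter across sensors
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(1, 25), stride=(1, 5)),  # temporal filter
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)),                           # pool to a fixed-size map
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_sensors, n_timepoints)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

# Example: classify one simulated single-trial recording (204 sensors x 1,000 samples).
trial = torch.randn(1, 1, 204, 1000)
logits = PhraseCNN()(trial)                 # shape (1, 5)
predicted_phrase = int(logits.argmax(dim=1))
```

A trained model of this kind would output one of the five candidate phrases per trial, which could then drive a speech synthesizer as the abstract suggests.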
