Results 1 - 20 of 4,908
1.
Sci Rep ; 14(1): 11054, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744976

ABSTRACT

Brain machine interfaces (BMIs) can substantially improve the quality of life of elderly or disabled people. However, performing complex action sequences with a BMI system is onerous because it requires issuing commands sequentially. Departing fundamentally from this, we have designed a BMI system that reads out mental planning activity and issues commands in a proactive manner. To demonstrate this, we recorded brain activity from freely moving monkeys performing an instructed task and decoded it with an energy-efficient, small, and mobile field-programmable gate array hardware decoder triggering real-time action execution on smart devices. At its core is an adaptive decoding algorithm that can compensate for day-to-day neuronal signal fluctuations with minimal re-calibration effort. We show that open-loop planning-ahead control is possible using signals from primary and pre-motor areas, leading to a significant time gain in the execution of action sequences. This novel approach thus provides a stepping stone towards improved and more humane control of different smart environments with mobile brain machine interfaces.


Subjects
Algorithms; Brain-Computer Interfaces; Animals; Brain/physiology; Macaca mulatta
2.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593

ABSTRACT

Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of the convolutional kernel, a CNN extracts features only from local regions, without considering long-term dependencies, for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms state-of-the-art methods and achieves a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results demonstrate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.


Subjects
Brain-Computer Interfaces; Electroencephalography; Neural Networks, Computer; Humans; Electroencephalography/methods; Signal Processing, Computer-Assisted; Imagination/physiology; Deep Learning
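The signal segmentation-and-recombination augmentation described in this abstract can be sketched as follows. This is a minimal NumPy sketch under assumptions (equal-length segments, recombination only among trials of the same class); the function name and defaults are illustrative, not taken from the paper's code.

```python
import numpy as np

def segment_recombine(trials, n_segments=4, n_new=10, rng=None):
    """Create surrogate trials by cutting same-class trials into time
    segments and recombining segments drawn from random source trials.

    trials: array of shape (n_trials, n_channels, n_samples), all one class.
    """
    rng = np.random.default_rng(rng)
    n_trials, n_channels, n_samples = trials.shape
    # split the time axis into equal-length segments
    bounds = np.linspace(0, n_samples, n_segments + 1, dtype=int)
    new_trials = np.empty((n_new, n_channels, n_samples))
    for i in range(n_new):
        for s in range(n_segments):
            donor = rng.integers(n_trials)  # random source trial per segment
            new_trials[i, :, bounds[s]:bounds[s + 1]] = \
                trials[donor, :, bounds[s]:bounds[s + 1]]
    return new_trials
```

Each surrogate trial keeps plausible within-segment temporal structure while varying across segments, which is what makes the augmentation label-preserving for same-class recombination.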
3.
Sci Rep ; 14(1): 11491, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38769115

ABSTRACT

Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. In particular, a Transformer neural network model was applied to decode text sentences from covert speech, trained using ECoGs obtained during overt speech. We first examined the proposed Transformer model using the same task for training and testing, and then evaluated the model's performance when trained on the overt task for decoding covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3% (p > 0.05; d = 0.07). Therefore, the challenge of collecting training data for covert speech can be addressed using overt speech. Decoding performance for covert speech may improve further as more overt-speech training data are employed.


Subjects
Brain-Computer Interfaces; Electrocorticography; Speech; Humans; Female; Male; Adult; Speech/physiology; Speech Perception/physiology; Young Adult; Feasibility Studies; Epilepsy/physiopathology; Neural Networks, Computer; Middle Aged; Adolescent
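The token error rate (TER) reported above is commonly computed as the Levenshtein edit distance between the reference and decoded token sequences, normalized by the reference length. A minimal pure-Python sketch (the paper's exact normalization is assumed, not confirmed by the abstract):

```python
def token_error_rate(ref, hyp):
    """(substitutions + insertions + deletions) / len(ref), via the
    classic dynamic-programming Levenshtein distance over tokens."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, decoding two of three tokens correctly with one substitution gives a TER of 1/3.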
4.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as the features of the gesture. The SOM-Hebb classifier is utilized to classify the feature vectors. We utilized the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments and ranged from 76.90% to 97.57%, depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate compared to other EEG-based recognition systems. The online recognition system was implemented in MATLAB and took 5.7 s to complete the recognition flow.


Subjects
Brain-Computer Interfaces; Electroencephalography; Gestures; Humans; Electroencephalography/methods; Face/physiology; Algorithms; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Brain/physiology; Male
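Extracting α, β, and θ band powers of the kind used as features above can be sketched with a simple periodogram. This is a generic sketch, not the paper's implementation; the band edges below are conventional assumptions.

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Mean periodogram power of a single EEG channel in each band (Hz)."""
    if bands is None:
        # conventional band edges; the paper's exact cutoffs are not given
        bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

A per-channel dictionary like this, concatenated across channels, yields the feature vector fed to a classifier such as SOM-Hebb.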
5.
Commun Biol ; 7(1): 595, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762683

ABSTRACT

Dynamic mode (DM) decomposition decomposes spatiotemporal signals into basic oscillatory components (DMs). DMs can improve the accuracy of neural decoding when used with the nonlinear Grassmann kernel, compared to conventional power features. However, such kernel-based machine learning algorithms have three limitations: large computational time preventing real-time application, incompatibility with non-kernel algorithms, and low interpretability. Here, we propose a mapping function corresponding to the Grassmann kernel that explicitly transforms DMs into spatial DM (sDM) features, which can be used in any machine learning algorithm. Using electrocorticographic signals recorded during various movement and visual perception tasks, the sDM features were shown to improve decoding accuracy and reduce computation time compared to conventional methods. Furthermore, the components of the sDM features informative for decoding showed characteristics similar to the high-γ power of the signals, but with higher trial-to-trial reproducibility. The proposed sDM features enable fast, accurate, and interpretable neural decoding.


Subjects
Electrocorticography; Electrocorticography/methods; Humans; Algorithms; Signal Processing, Computer-Assisted; Male; Machine Learning; Visual Perception/physiology; Female; Reproducibility of Results; Adult; Brain-Computer Interfaces
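The DM decomposition underlying this work can be illustrated with the standard exact-DMD algorithm (SVD of time-shifted snapshot matrices). This is a textbook sketch, not the paper's sDM mapping itself:

```python
import numpy as np

def dmd_modes(X, r=None):
    """Exact dynamic mode decomposition of a spatiotemporal signal.
    X: (n_channels, n_samples). Returns DMD eigenvalues and modes."""
    X1, X2 = X[:, :-1], X[:, 1:]            # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                        # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # low-rank approximation of the linear propagator A with X2 ~ A X1
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)      # DM frequencies/growth rates
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

For data generated by a linear system the recovered eigenvalues match the true propagator's, which is why DMs capture the dominant oscillatory components of a signal.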
6.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38757187

ABSTRACT

Objective. For brain-computer interface (BCI) research, it is crucial to design an MI-EEG recognition model that has high classification accuracy and strong generalization ability and does not rely on a large number of labeled training samples. Approach. In this paper, we propose an MI-EEG recognition method based on self-supervised learning with one-dimensional multi-task convolutional neural networks and long short-term memory (1-D MTCNN-LSTM). The model is divided into two stages: a signal-transform identification stage and a pattern recognition stage. In the signal-transform identification phase, the signal-transform dataset is recognized by the upstream 1-D MTCNN-LSTM network model. Subsequently, the backbone network from this phase is transferred to the pattern recognition phase, where it is fine-tuned using a small amount of labeled data to obtain the final motion recognition model. Main results. The upstream stage of this study achieves more than 95% recognition accuracy for EEG signal transforms, up to 100%. For MI-EEG pattern recognition, the model obtained recognition accuracies of 82.04% and 87.14%, with F1 scores of 0.7856 and 0.839, on the BCIC-IV-2b and BCIC-IV-2a datasets. Significance. The improved accuracy demonstrates the superiority of the proposed method, which is promising for accurate classification of MI-EEG in BCI systems.


Subjects
Brain-Computer Interfaces; Electroencephalography; Imagination; Neural Networks, Computer; Electroencephalography/methods; Humans; Imagination/physiology; Supervised Machine Learning; Pattern Recognition, Automated/methods
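Building the pretext-task "signal transform" dataset amounts to applying a fixed set of transforms to each trial and labeling each copy with the transform's index. The abstract does not list the transforms used, so the four below (identity, additive noise, time reversal, amplitude scaling) are illustrative assumptions:

```python
import numpy as np

def make_transform_dataset(trials, rng=None):
    """Pair each trial with every transform, labeled by transform index.
    trials: (n_trials, n_channels, n_samples)."""
    rng = np.random.default_rng(rng)
    transforms = [
        lambda x: x,                                        # 0: identity
        lambda x: x + rng.normal(scale=0.1, size=x.shape),  # 1: additive noise
        lambda x: x[..., ::-1],                             # 2: time reversal
        lambda x: 1.5 * x,                                  # 3: amplitude scaling
    ]
    X, y = [], []
    for trial in trials:
        for label, t in enumerate(transforms):
            X.append(t(trial))
            y.append(label)
    return np.stack(X), np.array(y)
```

The upstream network is trained to predict `y` from `X`, so no human labels are needed; the fine-tuning stage then reuses the learned backbone.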
7.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722315

ABSTRACT

Objective. Electroencephalography (EEG) has been widely used in motor imagery (MI) research by virtue of its high temporal resolution and low cost, but its low spatial resolution is still a major criticism. The EEG source localization (ESL) algorithm effectively improves the spatial resolution of the signal by inverting the scalp EEG to extrapolate the cortical source signal, thus enhancing classification accuracy. Approach. To address the poor spatial resolution of EEG signals, this paper proposes a sub-band source chaotic entropy feature extraction method based on sub-band ESL. First, the preprocessed EEG signals are filtered into eight sub-bands. Each sub-band signal is source-localized separately to reveal the activation patterns of specific frequency bands and the activities of specific brain regions in the MI task. Then, approximate entropy, fuzzy entropy, and permutation entropy are extracted from the source signal as features to quantify its complexity and randomness. Finally, different MI tasks are classified using a support vector machine. Main results. The proposed method was validated on two public MI datasets (BCI Competition III dataset IVa and BCI Competition IV dataset 2a), and its classification accuracies were higher than those of existing methods. Significance. Sub-band source localization improves the spatial resolution of the signal and provides a new direction for EEG MI research.


Subjects
Brain-Computer Interfaces; Electroencephalography; Entropy; Imagination; Electroencephalography/methods; Humans; Imagination/physiology; Nonlinear Dynamics; Algorithms; Support Vector Machine; Movement/physiology; Reproducibility of Results
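Of the three entropy features named above, permutation entropy is the simplest to sketch: it is the Shannon entropy of the distribution of ordinal patterns in the signal. A minimal sketch with the usual normalization to [0, 1] (the paper's parameter choices are not given in the abstract):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, scaled to [0, 1]."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return float(-(p * np.log(p)).sum() / log(factorial(order)))
```

A monotonic signal has a single ordinal pattern and hence entropy 0, while white noise approaches 1; MI-related regularity shifts fall in between.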
8.
J Neural Eng ; 21(3)2024 May 20.
Article in English | MEDLINE | ID: mdl-38718788

ABSTRACT

Objective. The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interfaces (BCIs) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. Approach. We carefully construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This approach contrasts with previous works, which investigate only one attention mechanism and usually build a very complex, sometimes nested architecture. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms. Results. Our experiments demonstrate the strength and generalizability of our architecture framework, as well as how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture. Significance. Our architecture emphasizes simplicity, offering easy integration of channel attention mechanisms, while maintaining a high degree of generalizability across datasets, making it a versatile and efficient solution for electroencephalogram motor imagery decoding within BCIs.


Subjects
Attention; Brain-Computer Interfaces; Electroencephalography; Imagination; Electroencephalography/methods; Humans; Imagination/physiology; Attention/physiology; Movement/physiology
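A representative channel attention mechanism of the kind compared in such studies is squeeze-and-excitation: pool each channel to a scalar, pass the vector through a small bottleneck MLP, and gate the channels with the resulting sigmoid weights. The abstract does not list the exact mechanisms evaluated, so this forward-pass sketch with random weights and an assumed 22-channel montage is purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_channel_attention(x, W1, W2):
    """Squeeze-and-excitation forward pass for EEG-style input.
    x: (n_channels, n_samples); W1: (C//r, C); W2: (C, C//r)."""
    squeeze = x.mean(axis=1)                              # global average pool
    excite = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))  # bottleneck MLP + gate
    return x * excite[:, None]                            # rescale each channel

rng = np.random.default_rng(0)
C, r = 22, 2                          # e.g. 22 EEG channels, reduction ratio 2
x = rng.normal(size=(C, 250))
W1 = rng.normal(scale=0.1, size=(C // r, C))
W2 = rng.normal(scale=0.1, size=(C, C // r))
y = se_channel_attention(x, W1, W2)
```

Because the gate lies in (0, 1), the module can only down-weight uninformative channels, which is the sense in which it generalizes a learned spatial filter.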
9.
J Neural Eng ; 21(2)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38565100

ABSTRACT

Objective. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and capability to offer high-resolution data. The acquisition of EEG signals is a straightforward process, but the datasets associated with these signals frequently exhibit data scarcity and require substantial resources for proper labeling. Furthermore, there is a significant limitation in the generalization performance of EEG models due to the substantial inter-individual variability observed in EEG signals. Approach. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder combining a convolutional neural network and an attention mechanism. In the contrastive learning training stage, the network undergoes training with the pretext task of data augmentation to minimize the distance between pairs of homologous transformations while simultaneously maximizing the distance between pairs of heterologous transformations. This enhances the amount of data utilized for training and improves the network's ability to extract deep features from original signals without relying on the true labels of the data. Main results. To evaluate our framework's efficacy, we conduct extensive experiments on three public MI datasets: the BCI IV IIa, BCI IV IIb, and HGD datasets. The proposed method achieves cross-subject classification accuracies of 67.32%, 82.34%, and 81.13% on the three datasets, demonstrating superior performance compared to existing methods. Significance. Therefore, this method has great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.


Subjects
Brain-Computer Interfaces; Learning; Electroencephalography; Imagery, Psychotherapy; Neural Networks, Computer; Algorithms
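The "minimize distance between homologous pairs, maximize between heterologous pairs" objective described above is typically implemented as the NT-Xent (InfoNCE) contrastive loss. The abstract does not name the exact loss, so this NumPy forward-only sketch is a standard stand-in, not the paper's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of embedding pairs.
    z1, z2: (batch, dim) embeddings of two augmented views of the same trials."""
    z = np.concatenate([z1, z2])                  # (2N, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                   # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views embed identically the positives sit at the similarity maximum and the loss is small; pushing positives apart (e.g. `z2 = -z1`) drives it up, which is exactly the behavior the pretext task exploits.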
10.
J Neural Eng ; 21(2)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38592090

ABSTRACT

Objective. The extended infomax algorithm for independent component analysis (ICA) can separate sub- and super-Gaussian signals but converges slowly, as it uses stochastic gradient optimization. In this paper, an improved extended infomax algorithm is presented that converges much faster. Approach. Accelerated convergence is achieved by replacing the natural-gradient learning rule of extended infomax with a fully multiplicative orthogonal-group-based update scheme for the ICA unmixing matrix, leading to an orthogonal extended infomax algorithm (OgExtInf). The computational performance of OgExtInf was compared with that of the original extended infomax and of two fast ICA algorithms: the popular FastICA and Picard, a preconditioned limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm belonging to the family of quasi-Newton methods. Main results. OgExtInf converges much faster than the original extended infomax. For small electroencephalogram (EEG) data segments, as used for example in online EEG processing, OgExtInf is also faster than FastICA and Picard. Significance. OgExtInf may be useful for fast and reliable ICA, e.g. in online systems for epileptic spike and seizure detection or brain-computer interfaces.


Subjects
Algorithms; Brain-Computer Interfaces; Electroencephalography; Learning; Normal Distribution
11.
Article in English | MEDLINE | ID: mdl-38598402

ABSTRACT

Canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and their extended methods have been widely used for target recognition in brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), and covariance calculation is an important step in these algorithms. Some studies have shown that embedding time-local information into the covariance can improve the recognition performance of these algorithms. However, the improvement can only be observed in the recognition results; the principle by which time-local information helps cannot be explained. Therefore, we propose a time-local weighted transformation (TT) recognition framework that directly embeds time-local information into the electroencephalography signal through a weighted transformation. The influence of time-local information on the SSVEP signal can then be observed in the frequency domain: low-frequency noise is suppressed at the cost of sacrificing part of the SSVEP fundamental-frequency energy, and the harmonic energy of the SSVEP is enhanced at the cost of introducing a small amount of high-frequency noise. The experimental results show that the TT recognition framework significantly improves the recognition ability of the algorithms and the separability of the extracted features. Its enhancement effect is significantly better than that of the traditional time-local covariance extraction method, giving it substantial application potential.


Subjects
Brain-Computer Interfaces; Humans; Evoked Potentials, Visual; Pattern Recognition, Automated/methods; Recognition, Psychology; Electroencephalography/methods; Algorithms; Photic Stimulation
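The CCA-based SSVEP recognition that the TT framework builds on compares multichannel EEG against sin/cos reference signals at each candidate stimulus frequency and picks the frequency with the largest canonical correlation. A minimal sketch via orthonormal bases and principal angles (reference construction with harmonics is the standard recipe; the TT weighting itself is not reproduced here):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between two multichannel signals,
    X and Y of shape (channels, samples), via QR whitening + SVD."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(X.T)   # orthonormal basis of each signal subspace
    Qy, _ = np.linalg.qr(Y.T)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]                 # cosine of the smallest principal angle

def ssvep_references(freq, fs, n_samples, n_harmonics=2):
    """Sin/cos reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / fs
    refs = [f(2 * np.pi * h * freq * t)
            for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)]
    return np.stack(refs)
```

Recognition then reduces to `argmax` of `cca_max_corr(eeg, ssvep_references(f, ...))` over the candidate frequencies.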
12.
Article in English | MEDLINE | ID: mdl-38598403

ABSTRACT

Steady-state visual evoked potential (SSVEP), one of the most popular electroencephalography (EEG)-based brain-computer interface (BCI) paradigms, can achieve high performance using calibration-based recognition algorithms. Because collecting calibration data for such algorithms is time-consuming, the least-squares transformation (LST) has been used to reduce the calibration effort for SSVEP-based BCIs. However, the transformation matrices constructed by current LST methods are not precise enough, resulting in large differences between the transformed data and the real data of the target subject. This ultimately means that the constructed spatial filters and reference templates are not effective enough. To address these issues, this paper proposes multi-stimulus LST with an online adaptation scheme (ms-LST-OA). METHODS: The proposed ms-LST-OA consists of two parts. First, to improve the precision of the transformation matrices, we propose multi-stimulus LST (ms-LST) using a cross-stimulus learning scheme as the cross-subject data transformation method. ms-LST uses data from neighboring stimuli to construct a higher-precision transformation matrix for each stimulus, reducing the differences between transformed and real data. Second, to further optimize the constructed spatial filters and reference templates, we use an online adaptation scheme to learn more features of the target subject's EEG signals through a trial-by-trial iterative process. RESULTS: ms-LST-OA performance was measured on three datasets (the Benchmark, BETA, and UCSD datasets). Using few calibration data, ms-LST-OA achieved ITRs of 210.01±10.10 bits/min, 172.31±7.26 bits/min, and 139.04±14.90 bits/min on the three datasets, respectively. CONCLUSION: ms-LST-OA can reduce the calibration effort for SSVEP-based BCIs.


Subjects
Brain-Computer Interfaces; Evoked Potentials, Visual; Humans; Calibration; Photic Stimulation/methods; Electroencephalography/methods; Algorithms
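In its basic single-stimulus form, the LST this paper extends finds a channel-space matrix P minimizing ||Y − PX||_F, so that a source subject's trials X can be projected toward the target subject's data Y. A minimal sketch of that baseline (the multi-stimulus and online-adaptation parts of ms-LST-OA are not reproduced):

```python
import numpy as np

def lst_transform(X_source, Y_target):
    """Least-squares transformation matrix P minimizing ||Y - P X||_F.
    X_source, Y_target: (channels, samples) templates for the same stimulus.
    Solves X^T P^T = Y^T column-wise, i.e. P = Y X^T (X X^T)^{-1}."""
    P, *_ = np.linalg.lstsq(X_source.T, Y_target.T, rcond=None)
    return P.T   # (channels, channels)
```

Transformed source trials `P @ X` can then be pooled with the target subject's few calibration trials to build spatial filters and reference templates.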
13.
Article in English | MEDLINE | ID: mdl-38578854

ABSTRACT

Predicting the potential for recovery of motor function in stroke patients who undergo specific rehabilitation treatments is an important and major challenge. Recently, electroencephalography (EEG) has shown potential for determining the relationship between cortical neural activity and motor recovery. EEG recorded in different states can predict motor recovery more accurately than single-state recordings. Here, we design a multi-state (combining eyes-closed, EC, and eyes-open, EO) fusion neural network for predicting the motor recovery of stroke patients after EEG brain-computer interface (BCI) rehabilitation training, and use an explainable deep learning method to identify the EEG power spectral density and functional connectivity features that contribute most to the prediction. The prediction accuracy of the multi-state fusion network was 82%, a significant improvement over a single-state model. The network explanation identified the important regions and frequency bands: in both states, the power spectral density and functional connectivity related to motor recovery were located in frontal, central, and occipital regions. Regarding the frequency bands related to motor recovery, the power spectral density highlighted the delta and alpha bands, while the functional connectivity highlighted the delta, theta, and alpha bands in the EC state and the delta, theta, and mid-beta bands in the EO state. Multi-state fusion neural networks, which combine multiple states of EEG signals into a single network, can increase the accuracy of predicting motor recovery after BCI training and reveal the underlying mechanisms of motor recovery in brain activity.


Subjects
Brain-Computer Interfaces; Deep Learning; Stroke Rehabilitation; Stroke; Humans; Electroencephalography/methods; Stroke Rehabilitation/methods
14.
Sci Adv ; 10(15): eadm8246, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38608024

ABSTRACT

Temporally coordinated neural activity is central to nervous system function and purposeful behavior. Still, there is a paucity of evidence demonstrating how this coordinated activity within cortical and subcortical regions governs behavior. We investigated this coordination between the primary motor cortex (M1) and the contralateral cerebellar cortex as rats learned a neuroprosthetic/brain-machine interface (BMI) task. In a neuroprosthetic task, actuator movements are causally linked to M1 "direct" neurons that drive the decoder for successful task execution. However, it is unknown how task-related M1 activity interacts with the cerebellum. We observed a notable 3 to 6 hertz coherence that emerged between these regions' local field potentials (LFPs) with learning, and that also modulated task-related spiking. We identified robust task-related indirect modulation in the cerebellum, which developed a preferential relationship with M1 task-related activity. Inhibiting cerebellar cortical and deep nuclei activity through optogenetics led to performance impairments in M1-driven neuroprosthetic control. Together, these results demonstrate that cerebellar influence is necessary for M1-driven neuroprosthetic control.


Subjects
Brain-Computer Interfaces; Cerebellum; Animals; Rats; Cell Nucleus; Learning; Movement
15.
Sensors (Basel) ; 24(7)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38610540

ABSTRACT

In the field of neuroscience, brain-computer interfaces (BCIs) are used to connect the human brain with external devices, providing insights into the neural mechanisms underlying cognitive processes, including aesthetic perception. Non-invasive BCIs, such as EEG and fNIRS, are critical for studying central nervous system activity and understanding how individuals with cognitive deficits process and respond to aesthetic stimuli. This study assessed twenty participants who were divided into control and impaired aging (AI) groups based on MMSE scores. EEG and fNIRS were used to measure their neurophysiological responses to aesthetic stimuli that varied in pleasantness and dynamism. Significant differences were identified between the groups in P300 amplitude and late positive potential (LPP), with controls showing greater reactivity. AI subjects showed an increase in oxyhemoglobin in response to pleasurable stimuli, suggesting hemodynamic compensation. This study highlights the effectiveness of multimodal BCIs in identifying the neural basis of aesthetic appreciation and impaired aging. Despite its limitations, such as sample size and the subjective nature of aesthetic appreciation, this research lays the groundwork for cognitive rehabilitation tailored to aesthetic perception, improving the comprehension of cognitive disorders through integrated BCI methodologies.


Subjects
Brain-Computer Interfaces; Humans; Aging; Brain; Esthetics; Perception
16.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579696

ABSTRACT

Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov identifier: NCT00912041) performed a standard cursor control task in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. The system also supports real-time inference with complex latent-variable models such as Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.


Subjects
Brain-Computer Interfaces; Neurosciences; Humans; Neural Networks, Computer
17.
J Neural Eng ; 21(3)2024 May 09.
Article in English | MEDLINE | ID: mdl-38648783

ABSTRACT

Objective. Our goal is to decode the firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning during production, perception, and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all algorithms it was compared with, for all three aspects of speech (production, perception, and imagery), obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%), based on pooling neurons across all patients. The accuracy was logarithmic in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were comparable to each other. Significance. Our research renders single-neuron activity in the left Vim a promising source of inputs to BMIs for the restoration of speech faculties in locked-in patients or patients with anarthria or dysarthria, allowing them to communicate again. Our characterization of how many neurons are necessary to achieve a given decoding accuracy is of utmost importance for planning BMI implantation.


Subjects
Brain-Computer Interfaces; Machine Learning; Neurons; Speech; Thalamus; Humans; Neurons/physiology; Male; Female; Middle Aged; Speech/physiology; Adult; Thalamus/physiology; Deep Brain Stimulation/methods; Aged; Speech Perception/physiology
18.
Neural Netw ; 175: 106313, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640695

ABSTRACT

The cortically-coupled target recognition system based on rapid serial visual presentation (RSVP) has a wide range of applications in brain computer interface (BCI) fields such as medicine and the military. However, against complex natural-environment backgrounds, identifying the event-related potentials (ERPs) evoked by rapidly presented small and similar objects is a research challenge. Therefore, we designed corresponding experimental paradigms and proposed a multi-band task-related components matching (MTRCM) method to improve rapid cognitive decoding of both small and similar objects. We compared the area under the receiver operating characteristic curve (AUC) between MTRCM and nine other methods under different numbers of training samples, using RSVP-ERP data from 50 subjects. The results showed that MTRCM maintained an overall superiority and achieved the highest average AUC (0.6562 ± 0.0091). We also optimized the frequency band and time parameters of the method. Verification on public datasets further showed the necessity of the MTRCM design. The MTRCM method provides a new approach for neural decoding of small and similar RSVP objects, which should promote the further development of RSVP-BCI.


Subjects
Brain-Computer Interfaces , Cognition , Electroencephalography , Evoked Potentials , Humans , Electroencephalography/methods , Cognition/physiology , Male , Female , Adult , Young Adult , Evoked Potentials/physiology , Photic Stimulation/methods , Brain/physiology
19.
J Neural Eng ; 21(3)2024 May 20.
Article in English | MEDLINE | ID: mdl-38648781

ABSTRACT

Objective. Invasive brain-computer interfaces (BCIs) are promising communication devices for severely paralyzed patients. Recent advances in intracranial electroencephalography (iEEG) coupled with natural language processing have enhanced communication speed and accuracy. Notably, such speech BCIs use signals from the motor cortex. However, BCIs based on motor cortical activity may experience signal deterioration in users with motor cortical degenerative diseases such as amyotrophic lateral sclerosis. An alternative to motor-cortex iEEG is necessary to support patients with such conditions. Approach. In this study, a multimodal embedding of text and images was used to decode visual semantic information from iEEG signals of the visual cortex to generate text and images. We used contrastive language-image pretraining (CLIP) embedding to represent images presented to 17 patients implanted with electrodes in the occipital and temporal cortices. A CLIP image vector was inferred from the high-γ power of the iEEG signals recorded while the patients viewed the images. Main results. Text was generated by CLIPCAP from the inferred CLIP vector with better-than-chance accuracy. An image was then created from the generated text using Stable Diffusion with significant accuracy. Significance. The text and images generated from iEEG through the CLIP embedding vector can be used for improved communication.
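Inferring a CLIP image vector from high-γ power can be framed as a linear regression from a trials × channels power matrix to a trials × 512 embedding matrix. A minimal sketch under assumed details (ridge regression, synthetic data, 32 hypothetical recording sites; the abstract does not specify the authors' regression model):

```python
import numpy as np

# Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y.
# X: trials x channels high-γ power; Y: trials x 512 CLIP vectors.
def fit_ridge(X, Y, alpha=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))        # 200 trials x 32 assumed iEEG sites
W_true = rng.standard_normal((32, 512))   # hypothetical mapping to CLIP space
Y = X @ W_true                            # noiseless CLIP targets for the demo
W = fit_ridge(X, Y, alpha=1e-6)
pred = X @ W                              # inferred CLIP vectors per trial
print(np.allclose(pred, Y, atol=1e-3))    # True: mapping recovered on clean data
```

The inferred vector would then be passed downstream (to a captioning model and an image generator, in the study's pipeline) rather than interpreted directly.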


Subjects
Brain-Computer Interfaces , Electrocorticography , Humans , Male , Female , Electrocorticography/methods , Adult , Electroencephalography/methods , Middle Aged , Electrodes, Implanted , Young Adult , Photic Stimulation/methods
20.
Sci Rep ; 14(1): 9221, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649681

ABSTRACT

Technological advances in head-mounted displays (HMDs) facilitate the acquisition of physiological data of the user, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated whether multimodal physiological data can be used to decode error processing, which has to date been studied with brain signals only. We examined the feasibility of decoding errors solely from pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed whether hybrid approaches can improve existing EEG-based classification approaches, focusing on setups that offer increased usability for practical applications, such as the presented game-like virtual reality flight simulation. Our results indicate that classifiers trained with pupil size data can decode errors above chance. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
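One common way to build such a hybrid decoder is feature-level fusion: normalize each modality separately, then concatenate per trial before classification. A minimal sketch with assumed feature counts (the abstract does not describe the study's actual fusion scheme):

```python
import numpy as np

# Feature-level fusion sketch: z-score each modality per feature across
# trials, then concatenate trial-wise into one feature matrix.
def fuse_features(eeg, pupil):
    def z(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return np.hstack([z(eeg), z(pupil)])

# Hypothetical data: 2 trials, 2 EEG features, 1 pupil-size feature.
eeg = np.array([[1.0, 2.0], [3.0, 4.0]])
pupil = np.array([[0.1], [0.3]])
X = fuse_features(eeg, pupil)
print(X.shape)  # (2, 3): EEG and pupil features side by side per trial
```

Per-modality normalization matters here because EEG amplitudes and pupil diameters live on very different scales; without it, one modality would dominate any distance- or weight-based classifier trained on the fused matrix.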


Subjects
Brain-Computer Interfaces , Electroencephalography , Pupil , Virtual Reality , Humans , Electroencephalography/methods , Adult , Male , Pupil/physiology , Female , Young Adult , Computer Simulation , Brain/physiology , Heart Rate/physiology