Results 1 - 20 of 4,979
1.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as gesture features, and a SOM-Hebb classifier to classify the feature vectors. We used the proposed method to develop an online facial gesture recognition system, with gestures defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system, examined experimentally, ranged from 76.90% to 97.57% depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, which is still competitive with other EEG-based recognition systems. The online recognition system was implemented in MATLAB and took 5.7 s to complete the recognition flow.
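The band-power feature extraction step described above can be sketched with a plain FFT; the band edges and sampling rate below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def band_powers(epoch, fs, bands=(("theta", 4, 8), ("alpha", 8, 13), ("beta", 13, 30))):
    """Return mean spectral power per band for a 1-D EEG epoch."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, lo, hi in bands}

# Example: a 10 Hz sinusoid should concentrate power in the alpha band.
fs = 256
t = np.arange(fs) / fs
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
```

In a gesture-recognition pipeline, one such vector per channel would be concatenated and fed to the classifier.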


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Gestures , Humans , Electroencephalography/methods , Face/physiology , Algorithms , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Brain/physiology , Male
2.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593

ABSTRACT

Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of the convolutional kernel, a CNN extracts features only from local regions, without considering the long-term dependencies important for EEG decoding. Besides long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it offers a more comprehensive view of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the network. Experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that the proposed method outperforms state-of-the-art methods, achieving a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results demonstrate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
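The segmentation-and-recombination augmentation can be sketched as follows — an illustration of the general idea (cut same-class trials into equal time segments and shuffle segments across trials), not the released EEG-TransNet code:

```python
import numpy as np

def segment_recombine(trials, n_segments=4, rng=None):
    """Create one surrogate trial per input trial by swapping time segments
    across same-class trials. trials: (n_trials, n_channels, n_samples)."""
    rng = np.random.default_rng(rng)
    n_trials, n_ch, n_samp = trials.shape
    seg_len = n_samp // n_segments
    out = np.empty_like(trials)
    for i in range(n_trials):
        for s in range(n_segments):
            donor = rng.integers(n_trials)        # random same-class donor trial
            sl = slice(s * seg_len, (s + 1) * seg_len)
            out[i, :, sl] = trials[donor, :, sl]  # copy the donor's segment
    return out

aug = segment_recombine(np.random.randn(8, 3, 256), n_segments=4, rng=0)
```

The augmented trials are then appended to the training set with the class label of the trials they were drawn from.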


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Signal Processing, Computer-Assisted , Imagination/physiology , Deep Learning
3.
Sci Rep ; 14(1): 11491, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38769115

ABSTRACT

Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. A Transformer neural network model was applied to decode text sentences from covert speech, trained using ECoGs obtained during overt speech. We first examined the proposed Transformer model using the same task for training and testing, and then evaluated the model's performance when trained on the overt task for decoding covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3% (p > 0.05; d = 0.07). Therefore, the challenge of collecting training data for covert speech can be addressed using overt speech. Covert-speech decoding performance may be further improved by employing additional overt-speech data.
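Token error rate is conventionally computed as the Levenshtein edit distance between decoded and reference token sequences, normalized by the reference length. A minimal sketch (our illustration of the standard metric, not the study's code; a non-empty reference is assumed):

```python
def token_error_rate(ref, hyp):
    """Levenshtein distance between token lists, divided by len(ref)."""
    d = list(range(len(hyp) + 1))       # d[j] = distance(ref[:0], hyp[:j])
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i            # prev holds the diagonal entry
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (0 if match)
    return d[len(hyp)] / len(ref)
```

For example, one substituted token out of three yields a TER of 1/3.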


Subject(s)
Brain-Computer Interfaces , Electrocorticography , Speech , Humans , Female , Male , Adult , Speech/physiology , Speech Perception/physiology , Young Adult , Feasibility Studies , Epilepsy/physiopathology , Neural Networks, Computer , Middle Aged , Adolescent
4.
Commun Biol ; 7(1): 595, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762683

ABSTRACT

Dynamic mode (DM) decomposition decomposes spatiotemporal signals into basic oscillatory components (DMs). DMs can improve the accuracy of neural decoding when used with the nonlinear Grassmann kernel, compared to conventional power features. However, such kernel-based machine learning algorithms have three limitations: computation times too long for real-time application, incompatibility with non-kernel algorithms, and low interpretability. Here, we propose a mapping function corresponding to the Grassmann kernel that explicitly transforms DMs into spatial DM (sDM) features, which can be used in any machine learning algorithm. Using electrocorticographic signals recorded during various movement and visual perception tasks, the sDM features were shown to improve decoding accuracy and reduce computation time compared to conventional methods. Furthermore, the components of the sDM features that were informative for decoding showed characteristics similar to the high-γ power of the signals, but with higher trial-to-trial reproducibility. The proposed sDM features enable fast, accurate, and interpretable neural decoding.
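The DM decomposition itself can be sketched in a few lines. This is a generic exact-DMD illustration under our own assumptions (rank truncation, simulated rotational dynamics), not the paper's sDM mapping: the modes are eigenvectors of a low-rank linear operator fitted between time-shifted snapshot matrices.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD. X: (n_channels, n_samples). Returns (eigenvalues, modes)."""
    X1, X2 = X[:, :-1], X[:, 1:]                 # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]           # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ (W / s[:, None])  # exact DMD modes
    return eigvals, modes

# A pure rotation by 0.3 rad per step has eigenvalues exp(+/- 0.3i).
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
X = np.stack([np.linalg.matrix_power(A, k) @ np.array([1.0, 0.0])
              for k in range(50)], axis=1)
eigvals, modes = dmd(X, r=2)
```

The eigenvalue angles give each DM's oscillation frequency; the mode vectors carry the spatial pattern that the proposed sDM features make explicit.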


Subject(s)
Electrocorticography , Electrocorticography/methods , Humans , Algorithms , Signal Processing, Computer-Assisted , Male , Machine Learning , Visual Perception/physiology , Female , Reproducibility of Results , Adult , Brain-Computer Interfaces
5.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38757187

ABSTRACT

Objective. For brain-computer interface (BCI) research, it is crucial to design an MI-EEG recognition model that has high classification accuracy and strong generalization ability and does not rely on a large number of labeled training samples. Approach. In this paper, we propose an MI-EEG recognition method based on self-supervised learning with one-dimensional multi-task convolutional neural networks and long short-term memory (1-D MTCNN-LSTM). The model is divided into two stages: a signal transform identification stage and a pattern recognition stage. In the signal transform identification stage, the signal transform dataset is recognized by the upstream 1-D MTCNN-LSTM network model. The backbone network from this stage is then transferred to the pattern recognition stage, where it is fine-tuned using a small amount of labeled data to obtain the final motion recognition model. Main results. The upstream stage achieves more than 95% recognition accuracy for EEG signal transforms, up to 100%. For MI-EEG pattern recognition, the model obtained recognition accuracies of 82.04% and 87.14%, with F1 scores of 0.7856 and 0.839, on the BCIC-IV-2b and BCIC-IV-2a datasets. Significance. The improved accuracy demonstrates the superiority of the proposed method, which is a promising approach for accurate classification of MI-EEG in BCI systems.
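The upstream "signal transform identification" stage amounts to generating pseudo-labels by applying known transforms to unlabeled epochs and training the network to identify which transform was applied. A minimal sketch with an assumed set of four transforms (the paper's exact transform set is not specified here):

```python
import numpy as np

TRANSFORMS = [
    lambda x: x,             # 0: identity
    lambda x: -x,            # 1: sign flip
    lambda x: x[..., ::-1],  # 2: time reversal
    lambda x: x + 0.5 * x.std()
        * np.random.default_rng(0).standard_normal(x.shape),  # 3: additive noise
]

def make_pretext_dataset(epochs):
    """Apply every transform to every epoch; labels are transform indices."""
    X = np.concatenate([np.stack([f(e) for e in epochs]) for f in TRANSFORMS])
    y = np.repeat(np.arange(len(TRANSFORMS)), len(epochs))
    return X, y

X, y = make_pretext_dataset(np.random.randn(10, 4, 128))
```

The backbone trained on (X, y) is then fine-tuned on the small labeled MI set.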


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Imagination , Neural Networks, Computer , Electroencephalography/methods , Humans , Imagination/physiology , Supervised Machine Learning , Pattern Recognition, Automated/methods
6.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722315

ABSTRACT

Objective. Electroencephalography (EEG) has been widely used in motor imagery (MI) research by virtue of its high temporal resolution and low cost, but its low spatial resolution remains a major criticism. EEG source localization (ESL) algorithms effectively improve the spatial resolution of the signal by inverting the scalp EEG to estimate the cortical source signal, thus enhancing classification accuracy. Approach. To address the poor spatial resolution of EEG signals, this paper proposes a sub-band source chaotic entropy feature extraction method based on sub-band ESL. First, the preprocessed EEG signals are filtered into eight sub-bands. Each sub-band signal is source-localized separately to reveal the activation patterns of specific frequency bands and the activities of specific brain regions in the MI task. Then, approximate entropy, fuzzy entropy, and permutation entropy are extracted from the source signal as features to quantify its complexity and randomness. Finally, different MI tasks are classified using a support vector machine. Main results. The proposed method was validated on two public MI datasets (brain-computer interface (BCI) competition III IVa and BCI competition IV 2a), and the classification accuracies were higher than those of existing methods. Significance. Sub-band source localization improves the spatial resolution of the signal and provides a new direction for EEG MI research.
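Permutation entropy, one of the three complexity features named above, can be sketched directly from its definition as a histogram over ordinal patterns; the embedding order and delay below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = fully regular)."""
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1  # count each ordinal pattern
    probs = np.array([c for c in patterns.values() if c > 0]) / n
    return float(-(probs * np.log(probs)).sum() / log(factorial(order)))
```

A monotonic ramp produces a single ordinal pattern (entropy 0), while white noise approaches the maximum of 1.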


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Entropy , Imagination , Electroencephalography/methods , Humans , Imagination/physiology , Nonlinear Dynamics , Algorithms , Support Vector Machine , Movement/physiology , Reproducibility of Results
7.
J Neural Eng ; 21(3)2024 May 20.
Article in English | MEDLINE | ID: mdl-38718788

ABSTRACT

Objective. The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interfaces (BCIs) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. Approach. We construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This is in contrast to previous works, which investigate only one attention mechanism and usually build very complex, sometimes nested architectures. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms. Results. Our experiments demonstrate the strength and generalizability of our architecture framework, and show how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture. Significance. Our architecture emphasizes simplicity, offers easy integration of channel attention mechanisms, and maintains a high degree of generalizability across datasets, making it a versatile and efficient solution for electroencephalogram motor imagery decoding within BCIs.
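A representative channel attention mechanism of the kind compared in such studies is squeeze-and-excitation: pool each channel to a scalar, pass the result through a small bottleneck MLP, and gate the channels with a sigmoid. A forward-pass sketch in NumPy with random illustrative weights and reduction ratio (our assumptions, not the paper's architecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_channel_attention(x, W1, W2):
    """x: (channels, samples). Returns the reweighted x and the channel gates."""
    squeeze = x.mean(axis=1)                            # global average pool
    gate = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))  # bottleneck MLP + sigmoid
    return x * gate[:, None], gate

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels, reduction ratio
W1 = rng.standard_normal((C // r, C)) * 0.1  # in practice, learned weights
W2 = rng.standard_normal((C, C // r)) * 0.1
y, gate = se_channel_attention(rng.standard_normal((C, 250)), W1, W2)
```

The gate vector plays the role of a learned, input-dependent spatial filter over EEG channels.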


Subject(s)
Attention , Brain-Computer Interfaces , Electroencephalography , Imagination , Electroencephalography/methods , Humans , Imagination/physiology , Attention/physiology , Movement/physiology
8.
Sci Rep ; 14(1): 11054, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744976

ABSTRACT

Brain-machine interfaces (BMIs) can substantially improve the quality of life of elderly or disabled people. However, performing complex action sequences with a BMI system is onerous because it requires issuing commands sequentially. Fundamentally different from this, we have designed a BMI system that reads out mental planning activity and issues commands proactively. To demonstrate this, we recorded brain activity from freely moving monkeys performing an instructed task and decoded it with an energy-efficient, small, and mobile field-programmable gate array hardware decoder triggering real-time action execution on smart devices. At its core is an adaptive decoding algorithm that can compensate for day-by-day neuronal signal fluctuations with minimal re-calibration effort. We show that open-loop planning-ahead control is possible using signals from primary motor and premotor areas, leading to significant time gains in the execution of action sequences. This novel approach thus provides a stepping stone toward improved and more humane control of different smart environments with mobile brain-machine interfaces.


Subject(s)
Algorithms , Brain-Computer Interfaces , Animals , Brain/physiology , Macaca mulatta
9.
Article in English | MEDLINE | ID: mdl-38578854

ABSTRACT

Predicting the potential for recovery of motor function in stroke patients who undergo specific rehabilitation treatments is an important and major challenge. Recently, electroencephalography (EEG) has shown potential for determining the relationship between cortical neural activity and motor recovery. EEG recorded in different states can predict motor recovery more accurately than single-state recordings. Here, we design a multi-state (combining eyes-closed, EC, and eyes-open, EO) fusion neural network for predicting the motor recovery of patients with stroke after EEG brain-computer interface (BCI) rehabilitation training, and use an explainable deep learning method to identify the EEG power spectral density and functional connectivity features that contribute most to the prediction. The prediction accuracy of the multi-state fusion network was 82%, a significant improvement over a single-state model. The network explanation identified the important regions and frequency bands: in both states, the power spectral density and functional connectivity features related to motor recovery were concentrated in frontal, central, and occipital regions. In terms of frequency bands, power spectral density was related to motor recovery in the delta and alpha bands, while functional connectivity was related to motor recovery in the delta, theta, and alpha bands in the EC state and in the delta, theta, and mid-beta bands in the EO state. Multi-state fusion neural networks, which combine multiple states of EEG signals into a single network, can increase the accuracy of predicting motor recovery after BCI training and reveal the underlying mechanisms of motor recovery in brain activity.
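At its simplest, multi-state fusion means extracting the same features from each recording state and letting a single model see both. A sketch of the feature-level fusion step (shapes, band definitions, and sampling rate are our illustrative assumptions, not the study's network):

```python
import numpy as np

def band_power_features(epoch, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Per-channel band powers for one epoch of shape (channels, samples)."""
    freqs = np.fft.rfftfreq(epoch.shape[1], 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                           for lo, hi in bands])

def fuse_states(ec_epoch, eo_epoch, fs):
    """Concatenate eyes-closed and eyes-open feature vectors."""
    return np.concatenate([band_power_features(ec_epoch, fs),
                           band_power_features(eo_epoch, fs)])

rng = np.random.default_rng(0)
feat = fuse_states(rng.standard_normal((32, 512)),
                   rng.standard_normal((32, 512)), fs=256)
```

The fused vector (here 32 channels x 4 bands x 2 states) is what a downstream predictor would consume.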


Subject(s)
Brain-Computer Interfaces , Deep Learning , Stroke Rehabilitation , Stroke , Humans , Electroencephalography/methods , Stroke Rehabilitation/methods
10.
Article in English | MEDLINE | ID: mdl-38598402

ABSTRACT

Canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and their extended methods have been widely used for target recognition in brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), and covariance calculation is an important step in these algorithms. Some studies have shown that embedding time-local information into the covariance can improve the recognition performance of these algorithms. However, the improvement can only be observed in the recognition results; the principle by which time-local information helps has not been explained. Therefore, we propose a time-local weighted transformation (TT) recognition framework that embeds time-local information directly into the electroencephalography signal through a weighted transformation. The influence of time-local information on the SSVEP signal can then be observed in the frequency domain: low-frequency noise is suppressed at the cost of part of the SSVEP fundamental-frequency energy, and the harmonic energy of the SSVEP is enhanced at the cost of introducing a small amount of high-frequency noise. The experimental results show that the TT recognition framework significantly improves the recognition ability of the algorithms and the separability of the extracted features. Its enhancement effect is significantly better than that of the traditional time-local covariance extraction method, giving it considerable application potential.
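The idea of embedding time-local information into the covariance can be sketched generically: weight each sample before forming the covariance so that chosen time regions dominate. The Gaussian weighting window below is our illustration, not the TT framework's actual weights:

```python
import numpy as np

def time_weighted_covariance(X, weights):
    """X: (channels, samples); weights: (samples,), nonnegative.
    Weighted covariance emphasizing chosen time-local regions."""
    w = weights / weights.sum()
    mu = X @ w                      # weighted mean per channel
    Xc = X - mu[:, None]
    return (Xc * w) @ Xc.T          # sum_t w_t * x_t x_t^T

T = 500
t = np.arange(T)
weights = np.exp(-0.5 * ((t - T / 2) / (T / 8)) ** 2)  # Gaussian time window
C = time_weighted_covariance(
    np.random.default_rng(0).standard_normal((4, T)), weights)
```

With uniform weights this reduces to the ordinary covariance used by CCA and MSI.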


Subject(s)
Brain-Computer Interfaces , Humans , Evoked Potentials, Visual , Pattern Recognition, Automated/methods , Recognition, Psychology , Electroencephalography/methods , Algorithms , Photic Stimulation
11.
Article in English | MEDLINE | ID: mdl-38598403

ABSTRACT

Steady-state visual evoked potential (SSVEP), one of the most popular electroencephalography (EEG)-based brain-computer interface (BCI) paradigms, can achieve high performance using calibration-based recognition algorithms. Because collecting calibration data is time-consuming, the least-squares transformation (LST) has been used to reduce the calibration effort for SSVEP-based BCIs. However, the transformation matrices constructed by current LST methods are not precise enough, resulting in large differences between the transformed data and the real data of the target subject. This ultimately means the constructed spatial filters and reference templates are not effective enough. To address these issues, this paper proposes multi-stimulus LST with an online adaptation scheme (ms-LST-OA). METHODS: The proposed ms-LST-OA consists of two parts. First, to improve the precision of the transformation matrices, we propose multi-stimulus LST (ms-LST) using a cross-stimulus learning scheme as the cross-subject data transformation method. The ms-LST uses data from neighboring stimuli to construct a higher-precision transformation matrix for each stimulus, reducing the differences between transformed and real data. Second, to further optimize the constructed spatial filters and reference templates, we use an online adaptation scheme to learn more features of the target subject's EEG signals through a trial-by-trial iterative process. RESULTS: ms-LST-OA performance was measured on three datasets (the Benchmark, BETA, and UCSD datasets). Using few calibration data, the information transfer rate (ITR) of ms-LST-OA reached 210.01±10.10 bits/min, 172.31±7.26 bits/min, and 139.04±14.90 bits/min on the three datasets, respectively. CONCLUSION: ms-LST-OA can reduce the calibration effort for SSVEP-based BCIs.
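The core LST step fits, by ordinary least squares, a matrix that maps a source subject's template onto the target subject's template. A minimal sketch of that generic idea (not the ms-LST-OA implementation; the exactly-linear example data are our construction):

```python
import numpy as np

def lst_transform(source, target):
    """Find P minimizing ||P @ source - target||_F and apply it.
    source, target: (channels, samples) trial-averaged templates."""
    # Solve source.T @ P.T ~= target.T column-wise via least squares.
    P, *_ = np.linalg.lstsq(source.T, target.T, rcond=None)
    return P.T, P.T @ source

rng = np.random.default_rng(0)
src = rng.standard_normal((8, 300))
true_P = rng.standard_normal((8, 8))
# In the exactly-linear case, least squares recovers the mapping.
P, transformed = lst_transform(src, true_P @ src)
```

ms-LST differs in that it pools neighboring stimuli when fitting each stimulus's matrix, which this sketch does not attempt.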


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual , Humans , Calibration , Photic Stimulation/methods , Electroencephalography/methods , Algorithms
12.
J Neuroeng Rehabil ; 21(1): 48, 2024 04 05.
Article in English | MEDLINE | ID: mdl-38581031

ABSTRACT

BACKGROUND: This research focused on the development of a motor imagery (MI)-based brain-machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at will. METHODS: Five healthy able-bodied subjects were enrolled in this study and participated in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. This model was then fine-tuned during the remaining sessions and subjected to evaluation. Three distinct deep learning approaches were compared: one without fine-tuning, one that fine-tuned all layers of the model, and one that fine-tuned only the last three layers. The evaluation phase involved exclusive closed-loop control of the exoskeleton device by the participants' neural activity, using the second deep learning approach for decoding. RESULTS: The three deep learning approaches were compared against an approach based on spatial features trained for each subject and experimental session, and demonstrated superior performance. Interestingly, the deep learning approach without fine-tuning achieved performance comparable to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can be similarly effective. Among the three deep learning approaches, fine-tuning all layer weights yielded the highest performance. CONCLUSION: This research represents an initial stride toward future calibration-free methods. Despite efforts to diminish calibration time by leveraging data from other subjects, complete elimination proved unattainable. The findings hold notable significance for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol aimed to replicate real-life scenarios, granting participants a higher degree of autonomy in decision-making regarding actions such as walking or stopping gait.


Subject(s)
Brain-Computer Interfaces , Deep Learning , Exoskeleton Device , Humans , Algorithms , Lower Extremity , Electroencephalography/methods
13.
J Neural Eng ; 21(2)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38592090

ABSTRACT

Objective. The extended infomax algorithm for independent component analysis (ICA) can separate sub- and super-Gaussian signals but converges slowly, as it uses stochastic gradient optimization. In this paper, an improved extended infomax algorithm is presented that converges much faster. Approach. Accelerated convergence is achieved by replacing the natural gradient learning rule of extended infomax with a fully multiplicative orthogonal-group-based update scheme of the ICA unmixing matrix, leading to an orthogonal extended infomax algorithm (OgExtInf). The computational performance of OgExtInf was compared with the original extended infomax and with two fast ICA algorithms: the popular FastICA and Picard, a preconditioned limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm belonging to the family of quasi-Newton methods. Main results. OgExtInf converges much faster than the original extended infomax. For small electroencephalogram (EEG) data segments, as used for example in online EEG processing, OgExtInf is also faster than FastICA and Picard. Significance. OgExtInf may be useful for fast and reliable ICA, e.g. in online systems for epileptic spike and seizure detection or brain-computer interfaces.
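The geometry behind a multiplicative orthogonal-group update can be sketched generically: project the step onto the skew-symmetric part and rotate the unmixing matrix, which keeps it exactly orthogonal. Here we use a Cayley transform as a common stand-in for the matrix exponential, and a random matrix G as a stand-in for OgExtInf's actual learning rule; neither is the paper's algorithm:

```python
import numpy as np

def orthogonal_step(W, G, lr=0.1):
    """Multiplicative update that preserves W @ W.T = I: take the
    skew-symmetric part of the step and rotate via a Cayley transform."""
    A = G @ W.T
    S = A - A.T                     # skew-symmetric direction
    I = np.eye(W.shape[0])
    # Cayley transform of a skew-symmetric matrix is orthogonal.
    R = np.linalg.solve(I + 0.5 * lr * S, I - 0.5 * lr * S)
    return R @ W

rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((5, 5)))[0]  # start orthogonal
for _ in range(20):
    W = orthogonal_step(W, rng.standard_normal((5, 5)))
```

Because every factor is orthogonal, no re-orthogonalization of W is needed between iterations, which is part of what makes such schemes fast.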


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Learning , Normal Distribution
14.
Article in English | MEDLINE | ID: mdl-38619940

ABSTRACT

Affective brain-computer interfaces (aBCIs) have garnered widespread applications, with remarkable advancements in utilizing electroencephalogram (EEG) technology for emotion recognition. However, the time-consuming process of annotating EEG data, inherent individual differences, non-stationary characteristics of EEG data, and noise artifacts in EEG data collection pose formidable challenges in developing subject-specific cross-session emotion recognition models. To simultaneously address these challenges, we propose a unified pre-training framework based on multi-scale masked autoencoders (MSMAE), which utilizes large-scale unlabeled EEG signals from multiple subjects and sessions to extract noise-robust, subject-invariant, and temporal-invariant features. We subsequently fine-tune the obtained generalized features with only a small amount of labeled data from a specific subject for personalization and enable cross-session emotion recognition. Our framework emphasizes: 1) Multi-scale representation to capture diverse aspects of EEG signals, obtaining comprehensive information; 2) An improved masking mechanism for robust channel-level representation learning, addressing missing channel issues while preserving inter-channel relationships; and 3) Invariance learning for regional correlations in spatial-level representation, minimizing inter-subject and inter-session variances. Under these elaborate designs, the proposed MSMAE exhibits a remarkable ability to decode emotional states from a different session of EEG data during the testing phase. Extensive experiments conducted on the two publicly available datasets, i.e., SEED and SEED-IV, demonstrate that the proposed MSMAE consistently achieves stable results and outperforms competitive baseline methods in cross-session emotion recognition.
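The channel-level masking mechanism can be sketched generically: zero out whole channels at random and train the autoencoder to reconstruct them, which forces the model to exploit inter-channel relationships. The mask ratio and shapes below are illustrative assumptions, not MSMAE's configuration:

```python
import numpy as np

def mask_channels(x, mask_ratio=0.4, rng=None):
    """x: (channels, samples). Zero out a random subset of channels and
    return the masked signal plus the boolean mask (True = masked)."""
    rng = np.random.default_rng(rng)
    n_ch = x.shape[0]
    n_mask = int(round(mask_ratio * n_ch))
    masked_idx = rng.choice(n_ch, size=n_mask, replace=False)
    mask = np.zeros(n_ch, dtype=bool)
    mask[masked_idx] = True
    out = x.copy()
    out[mask] = 0.0
    return out, mask

x = np.random.default_rng(0).standard_normal((10, 200))
xm, mask = mask_channels(x, mask_ratio=0.4, rng=1)
```

The reconstruction loss would then be computed only on the masked channels, and the same mechanism doubles as training for the missing-channel case mentioned above.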


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Emotions , Humans , Emotions/physiology , Electroencephalography/methods , Female , Male , Machine Learning , Artifacts , Adult , Neural Networks, Computer
15.
Article in English | MEDLINE | ID: mdl-38625770

ABSTRACT

This study investigates the effectiveness of repetitive transcranial direct current stimulation (tDCS)-based neuromodulation in augmenting steady-state visual evoked potential (SSVEP) brain-computer interfaces (BCIs), and explores electroencephalography (EEG) biomarkers for assessing brain states and evaluating tDCS efficacy. EEG data were collected across three task modes (eyes open, eyes closed, and SSVEP stimulation) and two neuromodulation patterns (sham-tDCS and anodal-tDCS). Brain arousal and brain functional connectivity were measured by extracting fractal EEG features and information flow gain, respectively. Anodal-tDCS led to diminished offsets and enhanced information flow gains, indicating improvements in both brain arousal and information transmission capacity. Anodal-tDCS also markedly enhanced SSVEP-BCI performance, as evidenced by increased amplitudes and accuracies, whereas sham-tDCS was less effective. This study offers insights into the application of neuromodulation methods for improving BCI performance and validates two electrophysiological markers for multifaceted characterization of brain states.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Fractals , Transcranial Direct Current Stimulation , Humans , Transcranial Direct Current Stimulation/methods , Evoked Potentials, Visual/physiology , Male , Adult , Female , Young Adult , Arousal/physiology , Brain/physiology , Healthy Volunteers , Algorithms
16.
Sci Adv ; 10(15): eadm8246, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38608024

ABSTRACT

Temporally coordinated neural activity is central to nervous system function and purposeful behavior. Still, there is a paucity of evidence demonstrating how this coordinated activity within cortical and subcortical regions governs behavior. We investigated this between the primary motor cortex (M1) and the contralateral cerebellar cortex as rats learned a neuroprosthetic/brain-machine interface (BMI) task. In the neuroprosthetic task, actuator movements are causally linked to M1 "direct" neurons that drive the decoder for successful task execution. However, it is unknown how task-related M1 activity interacts with the cerebellum. We observed a notable 3-6 Hz coherence that emerged between these regions' local field potentials (LFPs) with learning and that also modulated task-related spiking. We identified robust task-related indirect modulation in the cerebellum, which developed a preferential relationship with M1 task-related activity. Inhibiting cerebellar cortical and deep nuclei activity through optogenetics led to performance impairments in M1-driven neuroprosthetic control. Together, these results demonstrate that cerebellar influence is necessary for M1-driven neuroprosthetic control.
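Spectral coherence between two LFP channels, the quantity behind the reported 3-6 Hz coupling, can be sketched with segment-averaged cross-spectra. This is a generic Welch-style estimator applied to simulated signals, not the study's analysis pipeline:

```python
import numpy as np

def ms_coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via segment-averaged cross-spectra."""
    n_seg = len(x) // nperseg
    win = np.hanning(nperseg)
    Sxx = Syy = Sxy = 0
    for k in range(n_seg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        X = np.fft.rfft(win * x[seg])
        Y = np.fft.rfft(win * y[seg])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

# Two simulated LFPs sharing a 5 Hz component plus independent noise:
rng = np.random.default_rng(0)
fs, T = 256, 20
t = np.arange(fs * T) / fs
common = np.sin(2 * np.pi * 5 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)
freqs, coh = ms_coherence(x, y, fs)
band = (freqs >= 3) & (freqs <= 6)
```

Averaging `coh[band]` per learning session is one way such band-limited coupling is tracked over time.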


Subject(s)
Brain-Computer Interfaces , Cerebellum , Animals , Rats , Cell Nucleus , Learning , Movement
17.
Sensors (Basel) ; 24(7)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38610540

ABSTRACT

In the field of neuroscience, brain-computer interfaces (BCIs) are used to connect the human brain with external devices, providing insights into the neural mechanisms underlying cognitive processes, including aesthetic perception. Non-invasive BCIs, such as EEG and fNIRS, are critical for studying central nervous system activity and understanding how individuals with cognitive deficits process and respond to aesthetic stimuli. This study assessed twenty participants who were divided into control and impaired aging (AI) groups based on MMSE scores. EEG and fNIRS were used to measure their neurophysiological responses to aesthetic stimuli that varied in pleasantness and dynamism. Significant differences were identified between the groups in P300 amplitude and late positive potential (LPP), with controls showing greater reactivity. AI subjects showed an increase in oxyhemoglobin in response to pleasurable stimuli, suggesting hemodynamic compensation. This study highlights the effectiveness of multimodal BCIs in identifying the neural basis of aesthetic appreciation and impaired aging. Despite its limitations, such as sample size and the subjective nature of aesthetic appreciation, this research lays the groundwork for cognitive rehabilitation tailored to aesthetic perception, improving the comprehension of cognitive disorders through integrated BCI methodologies.
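P300 amplitude, one of the group markers reported above, is typically measured by averaging stimulus-locked epochs and taking the peak in a post-stimulus window. A generic sketch with simulated data (window bounds, sampling rate, and ERP shape are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def p300_amplitude(epochs, fs, window=(0.25, 0.5)):
    """epochs: (n_trials, n_samples), stimulus onset at sample 0.
    Returns the peak of the trial-averaged ERP within the window (seconds)."""
    erp = epochs.mean(axis=0)            # averaging cancels non-locked activity
    lo, hi = (int(w * fs) for w in window)
    return erp[lo:hi].max()

# Simulated ERP: a positive deflection around 300 ms buried in noise.
fs, n_trials, n_samp = 250, 60, 200
t = np.arange(n_samp) / fs
rng = np.random.default_rng(0)
template = 2.0 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)
epochs = template + rng.standard_normal((n_trials, n_samp))
amp = p300_amplitude(epochs, fs)
```

Comparing such amplitudes between groups is the kind of contrast the reported P300 and LPP differences rest on.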


Subject(s)
Brain-Computer Interfaces , Humans , Aging , Brain , Esthetics , Perception
18.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579696

ABSTRACT

Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g., Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g., C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models such as Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
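BRAND's core pattern, producer and consumer nodes exchanging entries over streams, can be caricatured in-process with a thread-safe queue. This is only a sketch of the asynchronous shape of the design (the real system uses Redis streams via XADD/XREAD, and the node names here are invented):

```python
import queue
import threading

# Stand-in for a Redis stream: one node appends entries that a
# downstream node consumes asynchronously.
stream = queue.Queue()
decoded = []

def acquisition_node(n_chunks):
    # Produces 1 ms chunks of (fake) multichannel neural data.
    for i in range(n_chunks):
        stream.put({"chunk": i, "data": [0.0] * 1024})
    stream.put(None)  # end-of-graph sentinel

def decoder_node():
    # Consumes chunks as they arrive and records a "prediction" per chunk.
    while (entry := stream.get()) is not None:
        decoded.append(entry["chunk"])

t = threading.Thread(target=decoder_node)
t.start()
acquisition_node(5)
t.join()
print(decoded)  # → [0, 1, 2, 3, 4]
```

Because the consumer blocks only on its own stream, acquisition, control, and analysis nodes can each run at their own rate, which is the property the abstract attributes to BRAND's graph design.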


Subject(s)
Brain-Computer Interfaces, Neurosciences, Humans, Neural Networks, Computer
19.
Sci Rep ; 14(1): 9221, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649681

ABSTRACT

Technological advances in head-mounted displays (HMDs) facilitate the acquisition of physiological data of the user, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated whether multimodal physiological data can be used to decode error processing, which has been studied, to date, with brain signals only. We examined the feasibility of decoding errors solely with pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed whether hybrid approaches can improve existing EEG-based classification approaches and focused on setups that offer increased usability for practical applications, such as the presented game-like virtual reality flight simulation. Our results indicate that classifiers trained with pupil size data can decode errors above chance. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
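Hybrid decoding of the kind described above amounts to fusing EEG and pupillometric features before classification. A minimal feature-level fusion sketch (the nearest-class-mean classifier and the toy data are illustrative assumptions; the paper does not commit to this particular classifier):

```python
import numpy as np

def fuse(eeg_feats, pupil_feats):
    # Feature-level fusion: z-score each modality before concatenating,
    # so neither modality dominates by scale alone.
    def z(x):
        return (x - x.mean(axis=0)) / x.std(axis=0)
    return np.hstack([z(eeg_feats), z(pupil_feats)])

def nearest_mean_fit(X, y):
    # Class prototype = mean feature vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    classes = list(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Toy data: class 1 ("error") trials shift both modalities upward.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
eeg = rng.normal(0, 1, (100, 8)) + y[:, None] * 1.5
pupil = rng.normal(0, 1, (100, 2)) + y[:, None] * 1.0
X = fuse(eeg, pupil)
model = nearest_mean_fit(X, y)
acc = (nearest_mean_predict(model, X) == y).mean()
print(acc)
```

The practical point of the study is that when few EEG channels are available, the pupil columns of the fused matrix carry enough complementary information to lift accuracy above the EEG-only decoder.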


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Pupil, Virtual Reality, Humans, Electroencephalography/methods, Adult, Male, Pupil/physiology, Female, Young Adult, Computer Simulation, Brain/physiology, Heart Rate/physiology
20.
Sci Rep ; 14(1): 9281, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654008

ABSTRACT

Steady-state visual evoked potentials (SSVEP) are electroencephalographic signals elicited when the brain is exposed to a visual stimulus with a steady frequency. We analyzed the temporal dynamics of SSVEP during sustained flicker stimulation at 5, 10, 15, 20 and 40 Hz. We found that the amplitudes of the responses were not stable over time. For a 5 Hz stimulus, the responses progressively increased, while, for higher flicker frequencies, the amplitude increased during the first few seconds and often showed a continuous decline afterward. We hypothesize that these two distinct sets of frequency-dependent SSVEP signal properties reflect the contribution of parvocellular and magnocellular visual pathways generating sustained and transient responses, respectively. These results may have important applications for SSVEP signals used in research and brain-computer interface technology and may contribute to a better understanding of the frequency-dependent temporal mechanisms involved in the processing of prolonged periodic visual stimuli.
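The temporal dynamics described above are typically quantified by tracking the amplitude at the flicker frequency across sliding analysis windows. A minimal sketch of that analysis (window lengths, the decay constant, and the synthetic signal are illustrative assumptions):

```python
import numpy as np

def ssvep_amplitude_course(signal, fs, f_stim, win_s=1.0, step_s=0.5):
    """Amplitude at the stimulation frequency over sliding windows."""
    win, step = int(win_s * fs), int(step_s * fs)
    amps = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft(seg)) / (win / 2)  # amplitude spectrum
        k = int(round(f_stim * win_s))                # FFT bin of f_stim
        amps.append(spec[k])
    return np.array(amps)

# Toy signal: a 10 Hz response whose amplitude decays over 8 s,
# mimicking the decline reported for the higher flicker frequencies.
fs, dur, f = 250, 8, 10
t = np.arange(dur * fs) / fs
sig = np.exp(-t / 4) * np.sin(2 * np.pi * f * t)
amps = ssvep_amplitude_course(sig, fs, f)
print(amps[0] > amps[-1])  # → True
```

Applied to real recordings, a rising-then-declining amplitude course at high flicker rates versus a progressive rise at 5 Hz would reproduce the frequency-dependent pattern the abstract reports.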


Subject(s)
Electroencephalography, Evoked Potentials, Visual, Photic Stimulation, Evoked Potentials, Visual/physiology, Humans, Male, Female, Adult, Young Adult, Brain-Computer Interfaces, Visual Cortex/physiology