1.
J Neural Eng ; 21(3)2024 May 07.
Article in English | MEDLINE | ID: mdl-38639058

ABSTRACT

Objective. Brain-computer interface (BCI) systems with large, directly accessible instruction sets remain one of the difficulties in BCI research. Research aimed at high target resolution (⩾100) has not yet entered a rapid development stage, which contradicts application requirements. Steady-state visual evoked potential (SSVEP)-based BCIs have an advantage in the number of targets, but the competitive mechanism between the target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse the competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with limited frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Furthermore, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experiments involving eleven subjects were performed to verify the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITRs) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study exploited the positional relationship between stimuli rather than circumventing the competing-response problem. Therefore, other state-of-the-art methods that focus on enhancing the efficiency of SSVEP detection can be combined with the present method to achieve very promising improvements.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Photic Stimulation , Humans , Evoked Potentials, Visual/physiology , Electroencephalography/methods , Male , Photic Stimulation/methods , Female , Adult , Young Adult , Algorithms
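The information transfer rates quoted in this abstract (and in several entries below) are conventionally computed with the standard Wolpaw ITR formula used across BCI studies. The sketch below is a generic illustration of that formula, not code from the paper; the target count, accuracy, and selection time in the usage line are hypothetical values, not the paper's exact experimental parameters.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Information transfer rate in bits/min for an N-class BCI.

    Standard Wolpaw definition: bits per selection, scaled by
    selections per minute. Assumes uniform target probabilities.
    """
    p = accuracy
    if p >= 1.0:
        bits = math.log2(n_targets)
    else:
        bits = (math.log2(n_targets)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_targets - 1)))
    return bits * 60.0 / selection_time_s

# Hypothetical values for illustration only:
itr = wolpaw_itr(n_targets=40, accuracy=0.9138, selection_time_s=4.0)
```

Note how ITR rewards both accuracy and a large instruction set: increasing the number of selectable targets raises the bits per selection, which is why high target resolution matters for SSVEP paradigms.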
2.
Article in English | MEDLINE | ID: mdl-38265910

ABSTRACT

Electroencephalography (EEG) datasets are characterized by low signal-to-noise ratios and unquantifiable label noise, which hinder classification performance in rapid serial visual presentation (RSVP) tasks. Previous approaches relied primarily on supervised learning (SL), which may result in overfitting and reduced generalization. In this paper, we propose a novel multi-task collaborative network (MTCN) that integrates SL and self-supervised learning (SSL) to extract more generalized EEG representations. The original SL task, i.e., the RSVP EEG classification task, is used to capture initial representations and establish classification thresholds for targets and non-targets. Two SSL tasks, masked temporal recognition and masked spatial recognition, are designed to enhance temporal dynamics extraction and to capture the inherent spatial relationships among brain regions, respectively. The MTCN learns from multiple tasks simultaneously to derive a comprehensive representation that captures the essence of all tasks, thus mitigating the risk of overfitting and enhancing generalization. Moreover, to facilitate collaboration between SL and SSL, the MTCN explicitly decomposes features into task-specific and task-shared features, leveraging both label information through SL and feature information through SSL. Experiments conducted on the THU, CAS, and GIST datasets illustrate the significant advantages of learning more generalized features in RSVP tasks. Our code is publicly accessible at https://github.com/Tammie-Li/MTCN.


Subject(s)
Electroencephalography , Generalization, Psychological , Humans , Recognition, Psychology , Supervised Machine Learning
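The masked temporal recognition pretext task named in this abstract can be pictured as follows: hide one temporal segment of a trial and train a model to recognize which segment was hidden. The toy sketch below assumes a simple zero-masking scheme over equal-length segments; the actual MTCN masking details live in the paper and its repository, not here.

```python
import random

def mask_temporal_segment(trial, n_segments=8, seed=None):
    """Zero out one randomly chosen temporal segment of an EEG trial.

    `trial` is a list of channels, each a list of time samples.
    Returns the masked copy and the index of the hidden segment,
    which a self-supervised model would be trained to recognize.
    """
    rng = random.Random(seed)
    n_samples = len(trial[0])
    seg_len = n_samples // n_segments
    seg = rng.randrange(n_segments)
    start, stop = seg * seg_len, (seg + 1) * seg_len
    masked = [[0.0 if start <= t < stop else x
               for t, x in enumerate(channel)]
              for channel in trial]
    return masked, seg

# Toy trial: 2 channels x 16 samples
trial = [[float(t) for t in range(16)] for _ in range(2)]
masked, seg = mask_temporal_segment(trial, n_segments=4, seed=0)
```

The segment index `seg` serves as a free label, which is the point of the pretext task: it supplies supervision without relying on the noisy RSVP target labels.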
3.
Article in English | MEDLINE | ID: mdl-38133973

ABSTRACT

Predicting cognitive load is a crucial issue in the emerging field of human-computer interaction and holds significant practical value, particularly in flight scenarios. Although previous studies have achieved efficient cognitive load classification, new research is still needed to adapt current state-of-the-art multimodal fusion methods. Here, we propose a feature selection framework based on multiview learning to address the challenge of information redundancy and reveal the common physiological mechanisms underlying cognitive load. Specifically, multimodal signal features (EEG, EDA, ECG, EOG, and eye movements) at three cognitive load levels were estimated while 22 healthy participants performed multiattribute task battery (MATB) tasks and were fed into a feature selection-multiview classification with cohesion and diversity (FS-MCCD) framework. The optimized feature set was extracted from the original feature set by integrating the weight of each view with the feature weights to formulate the ranking criteria. The cognitive load prediction model, evaluated using real-time classification results, achieved an average accuracy of 81.08% and an average F1-score of 80.94% for three-class classification across the 22 participants. Furthermore, the weights of the physiological signal features revealed physiological mechanisms related to cognitive load: heightened cognitive load was linked to amplified δ and θ power in the frontal lobe, reduced α power in the parietal lobe, and increased pupil diameter. Thus, the proposed multimodal feature fusion framework demonstrates the effectiveness and efficiency of using these features to predict cognitive load.

4.
Biomed Eng Online ; 22(1): 65, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37393355

ABSTRACT

BACKGROUND: Current research on electroencephalogram (EEG)-based detection of a driver's emergency braking intention focuses on distinguishing emergency braking from normal driving, with little attention to differentiating emergency braking from normal braking. Moreover, the classification algorithms used are mainly traditional machine learning methods whose inputs are manually extracted features. METHODS: To this end, a novel EEG-based strategy for detecting a driver's emergency braking intention is proposed in this paper. The experiment was conducted on a simulated driving platform with three scenarios: normal driving, normal braking, and emergency braking. We compared and analyzed the EEG feature maps of the two braking modes and explored traditional, Riemannian geometry-based, and deep learning-based methods for predicting emergency braking intention, all using raw EEG signals rather than manually extracted features as input. RESULTS: We recruited 10 subjects for the experiment and used the area under the receiver operating characteristic curve (AUC) and the F1 score as evaluation metrics. The results showed that both the Riemannian geometry-based method and the deep learning-based method outperformed the traditional method. At 200 ms before the start of real braking, the AUC and F1 score of the deep learning-based EEGNet algorithm were 0.94 and 0.65 for emergency braking vs. normal driving, and 0.91 and 0.85 for emergency braking vs. normal braking, respectively. The EEG feature maps also showed a significant difference between emergency braking and normal braking. Overall, detecting emergency braking from both normal driving and normal braking based on EEG signals was feasible. CONCLUSIONS: The study provides a user-centered framework for human-vehicle co-driving. If the driver's intention to brake in an emergency can be accurately identified, the vehicle's automatic braking system can be activated hundreds of milliseconds earlier than the driver's real braking action, potentially avoiding some serious collisions.


Subject(s)
Electroencephalography , Intention , Humans , Algorithms , Machine Learning , ROC Curve
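The AUC metric used in the braking study above has a simple rank-based definition: the probability that a randomly chosen positive trial scores higher than a randomly chosen negative one (the Mann-Whitney U statistic). The sketch below is a generic illustration of that definition, not the paper's evaluation code, and the toy scores are invented.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    `scores`: classifier outputs; `labels`: 1 = emergency braking,
    0 = the contrasted class (normal driving or normal braking).
    Ties contribute half a concordant pair.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample of each class")
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0
auc = roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

Because it is rank-based, AUC is insensitive to the choice of decision threshold, which makes it a natural companion to the threshold-dependent F1 score reported in the same study.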
5.
Front Neurorobot ; 17: 1089270, 2023.
Article in English | MEDLINE | ID: mdl-36960195

ABSTRACT

Reinforcement learning (RL) empowers an agent to learn robotic manipulation skills autonomously. Compared with traditional single-goal RL, semantic-goal-conditioned RL expands the agent's capacity to accomplish multiple semantic manipulation instructions. However, because semantic goals are sparsely distributed and agent-environment interactions are sparsely rewarded, a hard exploration problem arises and impedes training. In traditional RL, curiosity-motivated exploration is effective at solving the hard exploration problem, but in semantic-goal-conditioned RL the performance of previous curiosity-motivated methods deteriorates, which we attribute to two defects: uncontrollability and distraction. To address these defects, we propose a conservative curiosity-motivated method named mutual information motivation with a hybrid policy mechanism (MIHM). MIHM contributes two main innovations: a decoupled-mutual-information-based intrinsic motivation, which prevents the agent from being driven by uncontrollable curiosity to explore dangerous states, and a precisely trained, automatically switched hybrid policy mechanism, which eliminates distraction from the curiosity-motivated policy and achieves optimal use of exploration and exploitation. Compared with four state-of-the-art curiosity-motivated methods on a sparse-reward robotic manipulation task with 35 valid semantic goals, including stacks of two or three objects and pyramids, MIHM shows the fastest learning speed. Moreover, MIHM achieves the highest total success rate, 0.9, compared with at most 0.6 for the other methods, and among all the compared methods it is the only one that succeeds in stacking three objects.

6.
Brain Sci ; 13(2)2023 Feb 05.
Article in English | MEDLINE | ID: mdl-36831811

ABSTRACT

Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) because they can directly process raw electroencephalogram (EEG) signals without manual feature extraction, and some CNNs have achieved better classification accuracy than traditional methods. However, raw EEG signals are usually represented as a two-dimensional (2-D) matrix of channels and time points, ignoring the spatial topological information of the electrodes. Our goal is to enable a CNN that takes raw EEG signals as input to learn spatial topological features and improve its classification performance while essentially maintaining its original structure. We propose an EEG topographic representation module (TRM) consisting of (1) a mapping block from the raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRM, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet, and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs improve on both datasets when the TRMs are used. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet, and ShallowConvNet improve by 6.54%, 1.72%, and 2.07% on the EBDSDD and by 6.05%, 3.02%, and 5.14% on the HGD, respectively; with TRM-(3,3), they improve by 7.76%, 1.71%, and 2.17% on the EBDSDD and by 7.61%, 5.06%, and 6.28% on the HGD, respectively. These improvements indicate that TRMs can mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as its input, CNNs that take raw EEG signals as input can use the module without changing their original structures.
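The mapping block of a TRM takes a channels × time matrix and lays each channel out at its electrode's position on a 2-D grid, yielding a rows × cols × time representation. The toy sketch below illustrates that idea only; the grid positions and sizes are invented for the example and are not the paper's electrode montage or module architecture.

```python
def to_topographic_map(eeg, layout, grid_shape):
    """Map a (channels x time) EEG trial onto a (rows x cols x time) grid.

    `layout` assigns each channel index a (row, col) cell; unused cells
    stay zero. This mirrors the idea of a topographic representation:
    preserve electrode geometry instead of stacking channels as rows.
    """
    rows, cols = grid_shape
    n_time = len(eeg[0])
    grid = [[[0.0] * n_time for _ in range(cols)] for _ in range(rows)]
    for ch, (r, c) in layout.items():
        grid[r][c] = list(eeg[ch])
    return grid

# Toy example: 3 channels, 4 time points, mapped onto a 2x2 grid
eeg = [[1.0] * 4, [2.0] * 4, [3.0] * 4]
layout = {0: (0, 0), 1: (0, 1), 2: (1, 0)}  # hypothetical electrode positions
topo = to_topographic_map(eeg, layout, (2, 2))
```

Once channels occupy grid cells that reflect scalp geometry, a small 2-D convolution over the grid can aggregate signals from physically neighboring electrodes, which is the spatial information a flat channel × time matrix discards.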

7.
Cereb Cortex ; 33(7): 3575-3590, 2023 03 21.
Article in English | MEDLINE | ID: mdl-35965076

ABSTRACT

Brain cartography has expanded substantially over the past decade. In this regard, resting-state functional connectivity (FC) plays a key role in identifying the locations of putative functional borders. However, scant attention has been paid to the dynamic nature of functional interactions in the human brain. Indeed, FC is typically assumed to be stationary across time, which may obscure potential or subtle functional boundaries, particularly in regions with high flexibility and adaptability. In this study, we developed a dynamic FC (dFC)-based parcellation framework, established a new functional human brain atlas termed D-BFA (DFC-based Brain Functional Atlas), and verified its neurophysiological plausibility with stereo-EEG data. As the first dFC-based whole-brain atlas, the proposed D-BFA delineates finer functional boundaries that cannot be captured by static FC and is further supported by good correspondence with cytoarchitectonic areas and task activation maps. Moreover, the D-BFA reveals the spatial distribution of dynamic variability across the brain and generates more homogeneous parcels than most alternative parcellations. Our results demonstrate the superiority and practicability of dFC for brain parcellation, providing a new template for exploring the brain's topographic organization from a dynamic perspective. The D-BFA will be publicly available for download at https://github.com/sliderplm/D-BFA-618.


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods
8.
Commun Biol ; 5(1): 1083, 2022 10 11.
Article in English | MEDLINE | ID: mdl-36220938

ABSTRACT

The human cerebral cortex is vastly expanded relative to that of nonhuman primates and rodents, leading to an orderly functional topography of brain networks. Here, we show that this functional topography may be associated with gene expression heterogeneity. The neocortex exhibits greater heterogeneity in gene expression, with lower expression of housekeeping genes, a longer mean path length, fewer clusters, and a lower degree of ordering in networks than archicortical and subcortical areas in human, rhesus macaque, and mouse brains. In particular, the cerebellar cortex displays greater heterogeneity in gene expression than the cerebellar deep nuclei in the human brain, but not in the mouse brain, corresponding to the emergence of novel functions in the human cerebellar cortex. Moreover, the cortical areas with greater heterogeneity, primarily located in the multimodal association cortex, tend to express genes with higher evolutionary rates and exhibit a higher degree of functional connectivity as measured by resting-state fMRI, implying that such a spatial distribution of gene expression may have been shaped by evolution and is favourable for the specialization of higher cognitive functions. Together, these cross-species imaging and genetic findings provide convergent evidence for an association between the orderly topography of brain functional networks and gene expression.


Subject(s)
Brain Mapping , Neocortex , Animals , Brain Mapping/methods , Gene Expression , Humans , Macaca mulatta , Magnetic Resonance Imaging/methods , Mice
9.
Brain Sci ; 12(9)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36138888

ABSTRACT

Brain-computer interfaces (BCIs) provide novel hands-free interaction strategies. However, BCI performance is affected to some extent by the user's mental energy. In this study, we aimed to analyze the combined effects of decreased mental energy and lack of sleep on BCI performance and how to reduce these effects. We defined the low-mental-energy (LME) condition as the combination of decreased mental energy and lack of sleep, induced it with a long period of work (≥18 h), and then conducted P300- and SSVEP-based BCI tasks under LME or normal conditions. Ten subjects were recruited, and each participated in the LME- and normal-condition experiments within one week. For the P300-based BCI, we used two decoding algorithms: stepwise linear discriminant analysis (SWLDA) and least squares regression (LSR). For the SSVEP-based BCI, we used two decoding algorithms: canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA). Accuracy and information transfer rate (ITR) served as performance metrics. The results showed that for the P300-based BCI, the average accuracy was reduced by approximately 35% (with an SWLDA classifier) and approximately 40% (with an LSR classifier), and the average ITR was reduced by approximately 6 bits/min (SWLDA) and approximately 7 bits/min (LSR). For the SSVEP-based BCI, the average accuracy was reduced by approximately 40% with both the CCA and FBCCA classifiers, and the average ITR was reduced by approximately 20 bits/min (CCA) and approximately 19 bits/min (FBCCA). Additionally, the amplitude and signal-to-noise ratio of the evoked electroencephalogram signals were lower in the LME condition, while each subject's degree of fatigue and task load were higher. Further experiments suggested that increasing the stimulus size, flash duration, and flash number could partially restore BCI performance under LME conditions. Overall, the LME condition reduced BCI performance, its effects did not depend on the specific BCI type or decoding algorithm, and optimizing BCI parameters (e.g., stimulus size) can reduce these effects.

10.
Biomed Eng Online ; 21(1): 50, 2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35883092

ABSTRACT

BACKGROUND: Brain-controlled wheelchairs (BCWs) are an important application of brain-computer interfaces (BCIs). Currently, most BCWs are semiautomatic, and this semiautomatic interaction strategy is slow when users want to reach a target of interest in their immediate environment. METHODS: To this end, we combined computer vision (CV) and augmented reality (AR) with a BCW and propose the CVAR-BCW, a BCW with a novel automatic interaction strategy. The CVAR-BCW uses a translucent head-mounted display (HMD) as the user interface, uses CV to automatically detect the environment, and shows the detected targets through AR. Once a user has chosen a target, the CVAR-BCW can automatically navigate to it. Because the semiautomatic strategy may still be useful in some scenarios, we also integrated a semiautomatic interaction framework into the CVAR-BCW, and the user can switch between the automatic and semiautomatic strategies. RESULTS: We recruited 20 non-disabled subjects and used accuracy, information transfer rate (ITR), and the average time required for the CVAR-BCW to reach each designated target as performance metrics. The experimental results showed that the CVAR-BCW performed well in indoor environments: the average accuracies across all subjects were 83.6% (automatic) and 84.1% (semiautomatic), the average ITRs were 8.2 bits/min (automatic) and 8.3 bits/min (semiautomatic), the average times required to reach a target were 42.4 s (automatic) and 93.4 s (semiautomatic), and the average workloads and degrees of fatigue for the two strategies were both approximately 20. CONCLUSIONS: The CVAR-BCW provides a user-centric interaction approach and a good framework for integrating more advanced artificial intelligence technologies, which may be useful in the field of disability assistance.


Subject(s)
Augmented Reality , Brain-Computer Interfaces , Wheelchairs , Artificial Intelligence , Brain , Computers , Electroencephalography , Humans
11.
Chaos ; 32(5): 053109, 2022 May.
Article in English | MEDLINE | ID: mdl-35649971

ABSTRACT

Multiplex networks have attracted increasing attention because they model the coupling of network nodes between layers more accurately. Because of these inter-layer interactions, the effect of an attack on a multiplex network is not simply a linear superposition of its effect on the individual single-layer networks, and the disintegration of multiplex networks has become a difficult research hotspot. Traditional multiplex network disintegration methods generally adopt approximate or heuristic strategies; however, both have a number of drawbacks and fail to meet requirements for effectiveness and timeliness. In this paper, we develop a novel deep learning framework called MINER (Multiplex network disintegration strategy Inference based on deep NEtwork Representation learning), which transforms disintegration strategy inference for multiplex networks into an encoding-decoding process based on deep network representation learning. In the encoding process, an attention mechanism encodes the coupling relationships of corresponding nodes between layers; in the decoding process, reinforcement learning evaluates each disintegration action. Experiments indicate that the trained MINER model can be directly transferred and applied to the disintegration of multiplex networks of different scales. We also extend the framework to scenarios with node attack cost constraints and again achieve excellent performance. This framework provides a new way to understand and employ multiplex networks.

12.
Brain Sci ; 10(8)2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781712

ABSTRACT

To date, traditional visual event-related potential brain-computer interface (ERP-BCI) systems continue to dominate mainstream BCI research. However, these conventional BCIs are unsuitable for individuals who have partly or completely lost their vision, and given the poor performance of gaze-independent ERP-BCIs, techniques to improve these systems need to be studied. In this paper, we developed a novel 36-class bimodal ERP-BCI system based on tactile and auditory stimuli, in which six-virtual-direction audio files produced via head-related transfer functions (HRTFs) were delivered through headphones while location-congruent electro-tactile stimuli were simultaneously delivered to the corresponding positions via electrodes placed on the abdomen and waist. We selected the eight best channels, trained a Bayesian linear discriminant analysis (BLDA) classifier, and determined the optimal number of trials for target selection in the online process. The average online information transfer rate (ITR) of the bimodal ERP-BCI reached 11.66 bits/min, improvements of 35.11% and 36.69% over the auditory (8.63 bits/min) and tactile (8.53 bits/min) approaches, respectively. These results demonstrate that the bimodal system outperforms each unimodal system and indicate that it has potential utility as a gaze-independent BCI in future real-world applications.

13.
Comput Biol Med ; 118: 103618, 2020 03.
Article in English | MEDLINE | ID: mdl-32174331

ABSTRACT

This paper presents a self-paced brain-computer interface (BCI) that incorporates an intelligent environment-understanding approach into a motor imagery (MI) BCI system for environmental control in a rehabilitation hospital. The interface integrates four types of daily assistance tasks: medical calls, service calls, appliance control, and catering services. The system uses environment-understanding technology to form preliminary predictions of a user's control intention by extracting potential operational objects in the current environment through an object detection neural network. According to the characteristics of the four types of control and services, we establish different response mechanisms and use an intelligent decision-making method to design and dynamically optimize the relevant control instruction set. Control feedback is communicated to the user via voice prompts, avoiding the use of visual channels throughout the interaction. The asynchronous and synchronous modes of the MI-BCI are designed to launch the control process and to select specific operations, respectively, and the reliability of the MI-BCI is enhanced by an optimized identification algorithm. An online experiment demonstrated that the system responds quickly, generating an activation command in an average of 3.38 s while effectively preventing false activations; the average accuracy of the BCI synchronization commands was 89.2%, which represents sufficiently effective control. The proposed system is efficient and applicable, improves information throughput while reducing mental load, and can assist the daily lives of patients with severe motor impairments.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Hospitals , Humans , Imagery, Psychotherapy , Reproducibility of Results
14.
IEEE Trans Neural Syst Rehabil Eng ; 27(6): 1292-1302, 2019 06.
Article in English | MEDLINE | ID: mdl-31071045

ABSTRACT

In this paper, we present a novel asynchronous speller for Chinese sinogram input that incorporates electrooculography (EOG) into the conventional electroencephalography (EEG)-based spelling paradigm. An EOG-based brain switch activates a classic row-column P300-based speller only when spelling is needed, enabling asynchronous operation of the system. The user can then input sinograms by alternately performing P300 and double-blink tasks until he or she intends to stop spelling. With the incorporation of an EOG detector, the system achieves rapid sinogram input. In addition to asynchronous operation, the performance of the proposed speller was compared with that of a P300-based method alone across 18 subjects. The proposed system showed a mean communication speed of approximately 2.39 sinograms per minute, an increase of 0.83 sinograms per minute over the P300-based method. This preliminary online performance indicates that the proposed paradigm is a very promising approach for practical Chinese sinogram input. The system may also be extended to users of other logographic scripts to serve as an assistive communication tool.


Subject(s)
Communication Aids for Disabled , Electroencephalography/methods , Electrooculography/methods , Reading , Adult , Algorithms , Asian People , Brain-Computer Interfaces , Equipment Design , Event-Related Potentials, P300 , Female , Healthy Volunteers , Humans , Male , Pilot Projects , Reproducibility of Results , Signal Processing, Computer-Assisted , Young Adult
15.
Sci Rep ; 9(1): 4472, 2019 03 14.
Article in English | MEDLINE | ID: mdl-30872723

ABSTRACT

Although the mechanisms of steady-state visual evoked potentials (SSVEPs) have been well studied, none of them have been examined under strictly controlled experimental conditions. Our objective was to create an ideal-observer condition to exploit the features of SSVEPs. We present an electroencephalographic (EEG) eye-tracking experimental paradigm that provides biofeedback for gaze restriction during visual stimulation. Specifically, we designed a synchronous EEG and eye-tracking data recording system for selecting successful trials. Forty-six periodic flickers within a visual field of 11.5° were presented successively to evoke SSVEP responses, and online biofeedback based on an eye tracker was provided for gaze restriction. For eight participants, SSVEP responses across the visual field and topographic maps from full-brain EEG were plotted and analyzed. The results indicated that the optimal flicker arrangement for boosting SSVEPs comprises circular stimuli within a 4-6° spatial distance and an increased stimulus area below the fixation point. These findings provide a basis for determining stimulus parameters in neural engineering studies, e.g., SSVEP-based brain-computer interface (BCI) designs. The proposed experimental paradigm could also provide a precise framework for future SSVEP-related studies.


Subject(s)
Electroencephalography/methods , Evoked Potentials, Visual , Retina/physiology , Adult , Brain-Computer Interfaces , Female , Humans , Male , Photic Stimulation , Young Adult
16.
IEEE Trans Biomed Eng ; 66(11): 3119-3128, 2019 11.
Article in English | MEDLINE | ID: mdl-30794504

ABSTRACT

OBJECTIVE: To introduce and evaluate a novel event-related potential (ERP)-based brain-computer interface (BCI) paradigm featuring active mental tasks that combine precise judgment with visual cognitive capacities. METHODS: This study employed a paradigm with three types of targets (true, pseudo-, and non-targets), double flash codes, colors and color terms, and four test conditions. The primary hypothesis was that active mental tasks combining multiple cognitive capacities with clear judgments among different categories of stimuli increase BCI performance and evoke stronger or more specific ERPs. Classification methods were proposed and evaluated, and two were used in online experiments. RESULTS: The modes containing active mental tasks provided higher accuracy than the control mode (by up to 19.06%). The color-word matching mode required the highest judgment level and achieved the best performance. True-target stimuli evoked a strong P3b, while pseudo-target stimuli produced an obvious N4; the control mode appeared less sensitive to both. Different types of stimuli evoked distinctive N2 and P3a components. CONCLUSION: An appropriate increase in judgment level, achieved using multiple stimuli and cognitive approaches, could improve BCI performance and evoke or enhance ERPs; utilizing active mental tasks may be a promising way to advance BCIs. SIGNIFICANCE: Active mental tasks combining multiple cognitive capacities and precise judgments were adopted in an ERP-based BCI. Colors and color words were introduced as stimuli to construct an alternative paradigm, and the judgment levels of the different conditions were calculated. High accuracies and participant preference were obtained, which may promote the effective use of BCIs.


Subject(s)
Brain-Computer Interfaces , Cognition/physiology , Evoked Potentials/physiology , Adult , Electroencephalography , Female , Humans , Male , Models, Neurological , Signal Processing, Computer-Assisted , Young Adult
17.
IEEE Trans Neural Syst Rehabil Eng ; 26(12): 2367-2375, 2018 12.
Article in English | MEDLINE | ID: mdl-30442610

ABSTRACT

In this paper, an asynchronous control paradigm based on sequential motor imagery (sMI) is proposed to enrich the control commands of a motor imagery-based brain-computer interface; we test its feasibility and report its performance in wheelchair navigation control. By sequentially imagining left- and right-hand movements, subjects can complete four sMI tasks in an asynchronous mode, which are then encoded to control six steering functions of a wheelchair: moving forward, turning left, turning right, accelerating, decelerating, and stopping. Two experiments, a simulated experiment and an online wheelchair navigation experiment, were conducted to evaluate the proposed approach in seven subjects. In total, the subjects completed 99 of 105 experimental trials along a predefined route, a success rate of 94.2%, indicating the practicality and effectiveness of the proposed asynchronous control paradigm for wheelchair navigation control.


Subject(s)
Brain-Computer Interfaces , Imagination/physiology , Movement/physiology , Wheelchairs , Adult , Algorithms , Electroencephalography , Female , Functional Laterality , Healthy Volunteers , Humans , Male , Online Systems , Psychomotor Performance , Young Adult
18.
Biomed Eng Online ; 17(1): 111, 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30126416

ABSTRACT

BACKGROUND: Electroencephalogram-based brain-computer interfaces (BCIs) represent a novel human-machine interaction technology that allows people to communicate and interact with the external world without relying on peripheral nerves and muscles. Among BCI systems, brain-actuated wheelchairs are promising for the rehabilitation of severely motor-disabled individuals who are unable to control a wheelchair through conventional interfaces. Previous studies have made brain-actuated wheelchairs easy to use, enabling people to navigate through simple commands; however, these systems rely on offline calibration of the environment. Other systems require no prior knowledge, but controlling them is time consuming. In this paper, we propose an improved mobile platform equipped with an omnidirectional wheelchair, a lightweight robotic arm, a target recognition module, and an auto-control module. Based on the "you only look once" (YOLO) algorithm, our system recognizes and locates targets in the environment in real time, and the user confirms one target through a P300-based BCI. An expert system then plans a proper solution for the specific target; for example, the planned solution for a door is to open it and pass through, and the auto-control system jointly controls the wheelchair and robotic arm to complete the operation. During task execution, the target is also tracked using an image tracking technique. We have thus built an easy-to-use system that provides accurate services to satisfy user requirements and can accommodate different environments. RESULTS: To validate and evaluate the system, an experiment simulating daily use was performed. The tasks included driving the system closer to a walking man and having a conversation with him; going to another room through a door; and picking up a bottle of water from a desk and drinking from it. Three patients (with cerebral infarction, spinal cord injury, and stroke, respectively) and four healthy subjects participated in the test, and all completed the tasks. CONCLUSION: This article presents a brain-actuated smart wheelchair system that provides efficient and attentive services for users. The results demonstrate that the system works intelligently and efficiently: users need to issue only a few simple commands to receive substantial assistance. This system is significant for accelerating the practical application of BCIs, especially for patients who will use a BCI for rehabilitation.
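The control flow of such a system (detector proposes targets, the P300 BCI confirms one, an expert system looks up a task plan) can be sketched in a few lines. The labels, plans, and detection format below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the detect -> confirm -> plan pipeline. Detections
# are given in a YOLO-style form (label + confidence); the confirmed label
# would come from the P300 BCI. All names and plans are assumptions.
PLANS = {
    "door":   ["approach", "open_door", "pass_through"],
    "bottle": ["approach", "grasp", "lift", "bring_to_mouth"],
    "person": ["approach", "stop_for_conversation"],
}

def plan_for(detections, confirmed_label):
    """Return the expert-system plan for the target the user confirmed."""
    labels = [d["label"] for d in detections]
    if confirmed_label not in labels:
        raise ValueError(f"{confirmed_label!r} not among detections")
    # Fall back to a generic approach if no specific plan is known.
    return PLANS.get(confirmed_label, ["approach"])

detections = [{"label": "door", "conf": 0.92},
              {"label": "bottle", "conf": 0.88}]
```

The BCI thus only selects *what* to act on; the sequencing of wheelchair and arm motions is delegated to the plan, which is what keeps the per-task command count low for the user.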


Subject(s)
Brain-Computer Interfaces , Wheelchairs , Cerebral Infarction , Electroencephalography , Humans , Spinal Cord Injuries , Stroke
19.
Sensors (Basel) ; 18(6)2018 May 23.
Article in English | MEDLINE | ID: mdl-29882852

ABSTRACT

Accurate angle measurement of objects outside the linear field of view (FOV) is a challenging task for a strapdown semi-active laser seeker and has not yet been well resolved. Given that a missile carrying a strapdown semi-active laser seeker is also equipped with GPS and an inertial navigation system (INS), we present an angle measurement method based on fusing the seeker's data with GPS and INS data. When an object is in the nonlinear FOV or outside the FOV, by solving the problems of spatial and temporal consistency, the pitch and yaw angles of the object can be calculated by fusing the last valid angles measured by the seeker with the corresponding GPS and INS data. Numerical simulation results demonstrate the correctness and effectiveness of the proposed method.
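The core geometry of such a fusion step can be sketched directly: once the target's position has been fixed (from the last valid seeker measurement), its pitch and yaw in the body frame follow from the missile's current GPS/INS pose. The frames and sign conventions below (NED navigation frame, yaw about z then pitch about y, roll ignored) are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: pitch/yaw of an off-FOV target from its fixed NED
# position and the missile's GPS/INS position and attitude. Conventions
# (NED frame, z down, zero roll) are illustrative assumptions.
import math

def body_angles(target_ned, missile_ned, yaw, pitch):
    """Return (pitch_angle, yaw_angle) of the target in the body frame."""
    # Line of sight in the navigation (NED) frame.
    dx = target_ned[0] - missile_ned[0]
    dy = target_ned[1] - missile_ned[1]
    dz = target_ned[2] - missile_ned[2]
    # Rotate nav -> body: first yaw about z, then pitch about y.
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    x1, y1 = cy * dx + sy * dy, -sy * dx + cy * dy
    xb = cp * x1 - sp * dz
    zb = sp * x1 + cp * dz
    yb = y1
    # Elevation is positive up, hence the sign flip on zb (z points down).
    pitch_angle = math.atan2(-zb, math.hypot(xb, yb))
    yaw_angle = math.atan2(yb, xb)
    return pitch_angle, yaw_angle
```

A full implementation would also interpolate the GPS/INS samples to the seeker's measurement time (the time-consistency problem the abstract mentions), which is omitted here.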

20.
Brain Res ; 1688: 22-32, 2018 06 01.
Article in English | MEDLINE | ID: mdl-29174693

ABSTRACT

Resting-state functional magnetic resonance imaging (fMRI) studies using static functional connectivity (sFC) measures have shown that brain function is severely disrupted after long-term sleep deprivation (SD). However, increasing evidence suggests that resting-state functional connectivity (FC) is dynamic and exhibits spontaneous fluctuations on shorter timescales. How long-term SD influences dynamic functional connectivity (dFC) remains unclear. In this study, 37 healthy subjects participated in the SD experiment and were scanned both during rested wakefulness (RW) and after 36 h of SD. A sliding-window approach and a spectral clustering algorithm were used to evaluate the effects of SD on dFC based on data from the 26 qualified subjects. The results showed that both the time-averaged FC across specific regions and the temporal properties of the FC states, such as dwell time and transition probability, were strongly affected after SD relative to the RW condition. Based on the occurrences of FC states, we further identified RW-dominant states characterized by anti-correlation between the default mode network (DMN) and other cortices, and SD-dominant states marked by significantly decreased thalamocortical connectivity. In particular, the temporal features of these FC states were negatively correlated with the correlation coefficients between the DMN and the dorsal attention network (dATN) and showed high potential for classifying sleep state (10-fold cross-validation accuracy of 88.6% for dwell time and 88.1% for transition probability). Collectively, our results suggest that the temporal properties of the FC states largely account for changes in resting-state brain networks following SD, providing new insight into the impact of SD on the resting-state functional organization of the human brain.
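The sliding-window dFC analysis described above can be sketched end to end: windowed correlation matrices are vectorized, clustered into recurring FC "states", and dwell times are read off the resulting label sequence. The paper uses spectral clustering; plain k-means is substituted here for brevity, and the region count, window parameters, and toy signals are assumptions.

```python
# Hypothetical sketch of sliding-window dynamic FC: windowed correlations
# -> state clustering -> dwell times. K-means stands in for the paper's
# spectral clustering; all sizes and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_tr, win, step = 6, 300, 30, 5

ts = rng.normal(size=(n_tr, n_regions))     # toy region time series

# Sliding-window correlation matrices, upper triangle vectorized.
iu = np.triu_indices(n_regions, k=1)
windows = [np.corrcoef(ts[s:s + win].T)[iu]
           for s in range(0, n_tr - win + 1, step)]
W = np.array(windows)

def kmeans(X, k, iters=50):
    """Minimal k-means returning one state label per window."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

states = kmeans(W, k=3)

def dwell_times(labels):
    """Mean run length (in windows) spent in each state before switching."""
    runs, start = {}, 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[i - 1]:
            runs.setdefault(int(labels[start]), []).append(i - start)
            start = i
    return {s: sum(r) / len(r) for s, r in runs.items()}
```

Transition probabilities, the other temporal property the study uses, would be estimated the same way by counting label changes between consecutive windows.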


Subject(s)
Brain/physiology , Sleep Deprivation , Adult , Brain Mapping , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Neural Pathways/physiopathology , Young Adult