Results 1 - 4 of 4
1.
Brain Res Bull; 208: 110901, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38355058

ABSTRACT

Currently, most models in the field of cross-subject EEG emotion recognition rarely consider the negative transfer problem. To address this, this paper proposes a semi-supervised domain adaptation algorithm based on a few labeled samples from the target subject, called multi-domain geodesic flow kernel dynamic distribution alignment (MGFKD). It consists of three modules: 1) GFK common feature extractor: projects the feature distributions of the source and target subjects onto the Grassmann manifold and obtains their latent common features through the GFK method. 2) Source domain selector: obtains pseudo-labels for the target subject with a weak classifier and identifies "golden source subjects" using the few known labels of the target subject. 3) Label corrector: uses a dynamic distribution balance strategy to correct the pseudo-labels of the target subject. Comparison experiments on the SEED and SEED-IV datasets show that MGFKD outperforms both unsupervised and semi-supervised domain adaptation algorithms, achieving average accuracies of 87.51±7.68% and 68.79±8.25% on SEED and SEED-IV, respectively, with only one labeled sample per video for the target subject. When the number of source domains is set to 6 and the number of known labels to 5, accuracy increases to 90.20±7.57% and 69.99±7.38%, respectively. These results show that the proposed algorithm efficiently improves cross-subject EEG emotion classification performance. Because it needs only a small number of labeled samples from new subjects, it has strong application value for future EEG-based emotion recognition.
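The "golden source subject" selection step can be illustrated with a minimal sketch: train a weak classifier on each source subject and rank the sources by how well it predicts the few labeled target samples. This uses a nearest-centroid classifier as a hypothetical stand-in for the paper's weak classifier; `select_golden_sources` and its interface are illustrative, not the authors' code.

```python
import numpy as np

def select_golden_sources(source_sets, X_lab, y_lab, k=3):
    """Rank source subjects by how well a weak (nearest-centroid)
    classifier trained on each predicts the few labeled target
    samples; keep the top-k as "golden" sources."""
    scores = []
    for Xs, ys in source_sets:
        classes = np.unique(ys)
        centroids = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
        # predict each labeled target sample by its nearest class centroid
        d = np.linalg.norm(X_lab[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        scores.append(np.mean(pred == y_lab))
    order = np.argsort(scores)[::-1]  # best-matching sources first
    return order[:k], np.asarray(scores)
```

In the full algorithm the selected sources would then feed the GFK alignment and pseudo-label correction stages; this sketch only covers the ranking step.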


Subject(s)
Algorithms , Emotions , Humans , Recognition, Psychology , Electroencephalography
2.
Article in English | MEDLINE | ID: mdl-37030737

ABSTRACT

Augmented reality-based brain-computer interface (AR-BCI) systems, owing to their portability and mobility, are an important route for bringing BCI technology outside the laboratory, but their performance in real-world scenarios has not been fully studied. In the current study, we first investigated the effect of ambient brightness on AR-BCI performance. Five light intensities were set as experimental conditions to simulate typical brightness in real scenes, while the same steady-state visual evoked potential (SSVEP) stimulus was displayed in the AR glasses. Data analysis showed that SSVEPs could be evoked under all five light intensities, but the response became weaker as brightness increased. The recognition accuracy of AR-SSVEP was negatively correlated with light intensity: the highest accuracies were 89.35% with FBCCA and 83.33% with CCA at 0 lux, falling to 62.53% and 49.24% at 1200 lux. To address the accuracy loss under high ambient brightness, we further designed an SSVEP recognition algorithm with iterative learning capability, named ensemble online adaptive CCA (eOACCA). Its main strategy is to provide initial filters for high-intensity data by iteratively learning from low-light-intensity AR-SSVEP data. Experiments showed that eOACCA had significant advantages at higher light intensities (≥600 lux); compared with FBCCA, its accuracy at 1200 lux was 13.91% higher. In conclusion, this study contributes to an in-depth understanding of how AR-BCI performance varies under different lighting conditions and helps promote AR-BCI applications in complex lighting environments.
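Both CCA and FBCCA rest on the same core operation: the canonical correlation between the EEG segment and sinusoidal reference signals at each candidate flicker frequency. A minimal numpy sketch of standard (non-filter-bank) CCA frequency recognition follows; the function names are illustrative, not from the paper.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # canonical correlations are the singular values of Qx^T Qy
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def ssvep_cca_classify(eeg, freqs, fs, n_harm=2):
    """Standard CCA frequency recognition: pick the stimulus frequency
    whose sine/cosine reference set (n_harm harmonics) correlates most
    with the multichannel EEG segment (eeg: samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    rhos = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * h * f * t)
                               for h in range(1, n_harm + 1)
                               for fn in (np.sin, np.cos)])
        rhos.append(cca_corr(eeg, ref))
    return freqs[int(np.argmax(rhos))], rhos
```

FBCCA extends this by band-pass filtering the EEG into sub-bands and combining the per-band correlations; eOACCA, per the abstract, additionally adapts its filters online from low-light data.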


Subject(s)
Augmented Reality , Brain-Computer Interfaces , Humans , Evoked Potentials, Visual , Photic Stimulation , Electroencephalography/methods , Recognition, Psychology , Algorithms
3.
Front Neurosci; 17: 1129049, 2023.
Article in English | MEDLINE | ID: mdl-36908782

ABSTRACT

Motor imagery-based brain-computer interfaces (MI-BCI) have important application value in neurorehabilitation and robot control. At present, MI-BCIs mostly use bilateral upper-limb motor tasks, and there are relatively few studies on single-upper-limb MI tasks. In this work, we studied the recognition of motor imagery EEG signals of the right upper limb and proposed a multi-branch fusion convolutional neural network (MF-CNN) that simultaneously learns features from the raw EEG signals and from their two-dimensional time-frequency maps. The dataset used in this study, collected from 25 subjects, contained three motor imagery tasks: extending the arm, rotating the wrist, and grasping an object. In the binary classification between the grasping and arm-extending tasks, MF-CNN achieved an average classification accuracy of 78.52% and a kappa value of 0.57. When all three tasks were classified, the accuracy and kappa value were 57.06% and 0.36, respectively. Comparisons showed that MF-CNN outperforms single-branch CNN algorithms in both binary and three-class classification. In conclusion, MF-CNN makes full use of the time-domain and frequency-domain features of EEG, improves the decoding accuracy of single-limb motor imagery tasks, and contributes to the application of MI-BCI in motor function rehabilitation training after stroke.
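The two-dimensional time-frequency maps fed to a CNN branch of this kind are commonly short-time Fourier magnitude maps. A minimal numpy sketch, assuming an illustrative window length and hop (not the paper's exact preprocessing):

```python
import numpy as np

def tf_map(sig, fs, win=64, hop=16):
    """Short-time Fourier magnitude map of a single-channel EEG trace.
    Returns (frequency axis in Hz, 2-D array of freq bins x time frames),
    the kind of image a CNN branch can consume alongside the raw signal."""
    w = np.hanning(win)
    frames = [np.abs(np.fft.rfft(sig[i:i + win] * w))
              for i in range(0, len(sig) - win + 1, hop)]
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, np.array(frames).T
```

A multi-branch network would then process the raw samples with 1-D convolutions and this map with 2-D convolutions before fusing the two feature streams.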

4.
J Neural Eng; 19(3), 2022 May 13.
Article in English | MEDLINE | ID: mdl-35477130

ABSTRACT

Objective. The biggest advantage of the steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) lies in its large command set and high information transfer rate (ITR). Almost all current SSVEP-BCIs use a computer screen (CS) to present the flickering visual stimuli, which limits flexible use in real scenes. Augmented reality (AR) technology can superimpose visual stimuli on the real world, considerably expanding the application scenarios of SSVEP-BCI. However, whether the advantages of SSVEP-BCI are maintained when the visual stimuli are moved to AR glasses is not known. This study investigated the effect of the stimulus number on SSVEP-BCI in an AR context. Approach. We designed SSVEP flickering stimulation interfaces with four different numbers of stimulus targets and displayed them in AR glasses and on a CS. Three common recognition algorithms were used to analyze the influence of stimulus number and stimulation time on the recognition accuracy and ITR of AR-SSVEP and CS-SSVEP. Main results. The amplitude spectrum and signal-to-noise ratio of AR-SSVEP did not differ significantly from CS-SSVEP at the fundamental frequency but were significantly lower at the second harmonic. SSVEP recognition accuracy decreased as the stimulus number increased in AR-SSVEP but not in CS-SSVEP. As the stimulus number increased, the maximum ITR of CS-SSVEP also increased, but that of AR-SSVEP did not. With 25 stimuli, the maximum ITR (142.05 bits min⁻¹) was reached at 400 ms. The importance of stimulation time in SSVEP was confirmed: as stimulation time lengthened, the recognition accuracy of both AR-SSVEP and CS-SSVEP increased, peaking at 3 s, while the ITR first increased and then slowly decreased after its peak. Significance. Our study indicates that conclusions based on CS-SSVEP cannot simply be transferred to AR-SSVEP, and that it is not advisable to set too many stimulus targets in an AR display device.
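ITR figures like those above are conventionally computed with the Wolpaw formula, which combines the number of targets N, the accuracy P, and the time per selection T. A small sketch (the function name is illustrative):

```python
import math

def itr_bits_per_min(n_targets, acc, t_sec):
    """Wolpaw ITR for an n_targets-class BCI at accuracy acc, where
    t_sec is the time per selection."""
    if n_targets < 2 or acc <= 1.0 / n_targets:
        return 0.0  # at or below chance: no information transferred
    bits = math.log2(n_targets)
    if acc < 1.0:
        bits += acc * math.log2(acc) \
              + (1.0 - acc) * math.log2((1.0 - acc) / (n_targets - 1))
    return bits * 60.0 / t_sec
```

Note that reported SSVEP-BCI ITRs typically include a gaze-shifting interval in t_sec in addition to the stimulation time, so a quoted stimulation time alone does not determine the peak value.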


Subject(s)
Augmented Reality , Brain-Computer Interfaces , Retinal Diseases , Algorithms , Electroencephalography/methods , Evoked Potentials, Visual , Humans , Photic Stimulation/methods