Results 1 - 5 of 5
1.
Biomed Eng Online; 22(1): 65, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37393355

ABSTRACT

BACKGROUND: Current research on electroencephalogram (EEG)-based detection of a driver's emergency braking intention focuses on distinguishing emergency braking from normal driving, with little attention to distinguishing emergency braking from normal braking. Moreover, the classification algorithms used are mainly traditional machine learning methods whose inputs are manually extracted features. METHODS: To this end, a novel EEG-based strategy for detecting a driver's emergency braking intention is proposed in this paper. The experiment was conducted on a simulated driving platform with three scenarios: normal driving, normal braking, and emergency braking. We compared and analyzed the EEG feature maps of the two braking modes, and explored traditional, Riemannian geometry-based, and deep learning-based methods for predicting emergency braking intention, all using raw EEG signals rather than manually extracted features as input. RESULTS: We recruited 10 subjects and used the area under the receiver operating characteristic curve (AUC) and the F1 score as evaluation metrics. The results showed that both the Riemannian geometry-based method and the deep learning-based method outperformed the traditional method. At 200 ms before the start of real braking, the AUC and F1 score of the deep learning-based EEGNet algorithm were 0.94 and 0.65 for emergency braking vs. normal driving, and 0.91 and 0.85 for emergency braking vs. normal braking, respectively. The EEG feature maps also showed a significant difference between emergency braking and normal braking. Overall, it was feasible to detect emergency braking, as distinct from both normal driving and normal braking, from EEG signals. CONCLUSIONS: The study provides a user-centered framework for human-vehicle co-driving. If the driver's intention to brake in an emergency can be accurately identified, the vehicle's automatic braking system can be activated hundreds of milliseconds earlier than the driver's actual braking action, potentially avoiding some serious collisions.
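
As a rough illustration of the Riemannian geometry-based approach mentioned above, the sketch below builds a tangent-space classification pipeline with the pyriemann and scikit-learn libraries and scores it with AUC and F1. The epoch window, channel count, labels, and classifier are placeholders, not the paper's actual data or pipeline.

```python
# A minimal sketch (not the paper's pipeline): Riemannian tangent-space
# classification of emergency braking vs. normal braking, scored with
# AUC and F1. Epoch shapes, labels, and the classifier are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# Placeholder EEG epochs: (n_epochs, n_channels, n_samples), e.g. 1-s
# windows ending 200 ms before the onset of real braking.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 200))
y = rng.integers(0, 2, 200)  # 0 = normal braking, 1 = emergency braking

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = make_pipeline(
    Covariances(estimator="oas"),    # spatial covariance matrix per epoch
    TangentSpace(),                  # project SPD matrices to the tangent space
    LogisticRegression(max_iter=1000),
)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba))
print("F1: ", f1_score(y_te, (proba >= 0.5).astype(int)))
```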


Subject(s)
Electroencephalography, Intention, Humans, Algorithms, Machine Learning, ROC Curve
2.
Brain Sci; 13(2), 2023 Feb 05.
Article in English | MEDLINE | ID: mdl-36831811

ABSTRACT

Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) because they can directly process raw electroencephalogram (EEG) signals without manual feature extraction, and some CNNs have achieved better classification accuracy than traditional methods. However, raw EEG signals are usually represented as a two-dimensional (2-D) matrix of channels and time points, which ignores the spatial topological information of the electrodes. Our goal is to enable a CNN that takes raw EEG signals as input to learn spatial topological features and improve its classification performance while largely retaining its original structure. We propose an EEG topographic representation module (TRM). This module consists of (1) a mapping block from the raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRM, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet, and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs improve on both datasets when the TRMs are used. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet, and ShallowConvNet improve by 6.54%, 1.72%, and 2.07% on the EBDSDD and by 6.05%, 3.02%, and 5.14% on the HGD, respectively; with TRM-(3,3), they improve by 7.76%, 1.71%, and 2.17% on the EBDSDD and by 7.61%, 5.06%, and 6.28% on the HGD, respectively. The improved classification performance of all three CNNs on both datasets indicates that TRMs can mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as its input, CNNs that take raw EEG signals as input can use the module without changing their original structures.
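
The sketch below illustrates, under assumed details, what a TRM-like module could look like in PyTorch: each EEG channel is scattered onto a 2-D electrode grid per time point, the resulting topographic map is spatially convolved with 'same' padding, and the values are gathered back so the output has the same channels-by-time shape as the input. The grid size, electrode layout, and single-filter convolution are illustrative choices, not the published design.

```python
# A minimal sketch (assumed details, not the published TRM): raw EEG of
# shape (batch, channels, time) is scattered onto a 2-D electrode grid,
# convolved spatially with 'same' padding, and gathered back so the
# output has exactly the input's shape.
import torch
import torch.nn as nn


class TRMSketch(nn.Module):
    def __init__(self, electrode_rc, grid_hw=(9, 9), kernel_size=5):
        super().__init__()
        self.grid_hw = grid_hw
        # (n_channels, 2): assumed row/col of each electrode on the grid
        self.register_buffer("rc", torch.as_tensor(electrode_rc, dtype=torch.long))
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, channels, time)
        b, c, t = x.shape
        h, w = self.grid_hw
        grid = x.new_zeros(b, t, h, w)          # one topographic map per time point
        grid[:, :, self.rc[:, 0], self.rc[:, 1]] = x.permute(0, 2, 1)
        grid = self.conv(grid.reshape(b * t, 1, h, w)).reshape(b, t, h, w)
        out = grid[:, :, self.rc[:, 0], self.rc[:, 1]]  # read back electrode positions
        return out.permute(0, 2, 1)             # (batch, channels, time)


# Example: 32 channels laid out on rows 2-5, columns 0-7 of a 9x9 grid
# (a hypothetical montage), 200 time points per epoch.
rows, cols = torch.arange(32) // 8 + 2, torch.arange(32) % 8
trm = TRMSketch(torch.stack([rows, cols], dim=1))
print(trm(torch.randn(4, 32, 200)).shape)       # torch.Size([4, 32, 200])
```

Because the output shape matches the input, a module like this can be prepended to an existing EEG CNN without touching the rest of the network, which is the property the abstract emphasizes.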

3.
J Adv Res; 46: 189-197, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35872349

ABSTRACT

INTRODUCTION: Image recognition technology has immense potential for energy conservation in industrial energy systems. However, low recognition accuracy and poor generalization under actual operating conditions limit its commercial application. OBJECTIVES: To improve recognition accuracy and generalization, a novel image recognition method integrating deep learning and domain knowledge was applied to support energy saving and emission reduction in industrial energy systems. METHODS: Defrosting control in a refrigeration system, a typical industrial scenario, was selected as the specific optimization object. By combining a deep learning algorithm with domain knowledge, a residual-based convolutional neural network model (RCNN) was proposed specifically for frost-state recognition, featuring a residual input and an average pooling output. Based on real-time recognition of frost levels, a defrosting control optimization method was proposed to initiate and terminate defrosting on demand. RESULTS: By combining advanced image recognition with specific energy domain knowledge, the proposed RCNN achieved both high recognition accuracy and strong generalization. Its recognition accuracy reached 95.06% for trained objects and 93.67% for non-trained objects, compared with only 75.86% for a conventional CNN. With the RCNN-assisted system optimization method, defrosting frequency, accumulated defrosting time, and energy consumption were 53.8%, 57.02%, and 34.5% lower, respectively, than with the original control method. Furthermore, the environmental and cost analysis showed that the annual reduction in CO2 emissions was 2145.21 to 3412.84 kg and that the payback time was less than 2.5 years, far below the service life. CONCLUSION: The technical feasibility and significant energy-saving benefits of the deep learning-based image recognition method were demonstrated through a field experiment. Our study shows the great application potential of image recognition technology for promoting carbon neutrality in industrial energy systems.
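
As a rough sketch of the kind of architecture the abstract describes (a residual input and an average pooling output head), the PyTorch snippet below subtracts a hypothetical frost-free reference frame from the observed frame and classifies the residual with a small CNN ending in global average pooling. All layer sizes and the number of frost levels are assumptions, not the published RCNN.

```python
# A minimal sketch under stated assumptions: the "residual input" is taken
# to be the difference between the observed frame and a frost-free
# reference frame, and the classifier head is a 1x1 convolution followed
# by global average pooling. Layer sizes and class count are illustrative.
import torch
import torch.nn as nn


class ResidualInputCNNSketch(nn.Module):
    def __init__(self, n_frost_levels=4):       # number of classes (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, n_frost_levels, 1),    # 1x1 conv -> per-class feature maps
        )
        self.pool = nn.AdaptiveAvgPool2d(1)      # average pooling output head

    def forward(self, frame, reference):
        residual = frame - reference             # residual input
        return self.pool(self.features(residual)).flatten(1)  # (batch, n_classes) logits


model = ResidualInputCNNSketch()
frame, reference = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
print(model(frame, reference).shape)             # torch.Size([2, 4])
```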

4.
Brain Sci; 12(9), 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36138888

ABSTRACT

Brain-computer interfaces (BCIs) provide novel hands-free interaction strategies. However, BCI performance is affected to some extent by the user's mental energy. In this study, we aimed to analyze the combined effects of decreased mental energy and lack of sleep on BCI performance and how to reduce these effects. We defined the low-mental-energy (LME) condition as the combination of decreased mental energy and lack of sleep. We used a long period of work (≥18 h) to induce the LME condition, and then P300- and SSVEP-based BCI tasks were conducted under LME or normal conditions. Ten subjects were recruited, and each participated in the LME- and normal-condition experiments within one week. For the P300-based BCI, we used two decoding algorithms: stepwise linear discriminant analysis (SWLDA) and least squares regression (LSR). For the SSVEP-based BCI, we used two decoding algorithms: canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA). Accuracy and information transfer rate (ITR) were used as performance metrics. The experimental results showed that for the P300-based BCI, the average accuracy was reduced by approximately 35% (with an SWLDA classifier) and approximately 40% (with an LSR classifier), and the average ITR was reduced by approximately 6 bits/min (with an SWLDA classifier) and approximately 7 bits/min (with an LSR classifier). For the SSVEP-based BCI, the average accuracy was reduced by approximately 40% (with both the CCA and FBCCA classifiers), and the average ITR was reduced by approximately 20 bits/min (with a CCA classifier) and approximately 19 bits/min (with an FBCCA classifier). Additionally, the amplitude and signal-to-noise ratio of the evoked electroencephalogram signals were lower in the LME condition, while each subject's degree of fatigue and task load were higher. Further experiments suggested that increasing the stimulus size, flash duration, and number of flashes could partly improve BCI performance under LME conditions. Our experiments showed that the LME condition reduced BCI performance, that the effects of LME did not depend on the specific BCI type or decoding algorithm, and that optimizing BCI parameters (e.g., stimulus size) can reduce these effects.
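
Two of the quantities referenced above are easy to reproduce in code: the standard Wolpaw information transfer rate and a basic CCA-based SSVEP target score. The snippet below sketches both with NumPy and scikit-learn; the stimulation frequencies, trial length, and data shapes are illustrative assumptions, not the study's settings.

```python
# A sketch of two standard quantities used above: the Wolpaw ITR in
# bits/min and a basic CCA score between multichannel EEG and a
# sine/cosine reference set for one SSVEP stimulation frequency.
# Frequencies, trial length, and data shapes are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA


def itr_bits_per_min(n_targets, accuracy, trial_time_s):
    """Wolpaw information transfer rate for an N-target selection task."""
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0
    bits = np.log2(n) + p * np.log2(p)
    if p < 1.0:
        bits += (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s


def cca_score(eeg, freq, fs, n_harmonics=2):
    """Largest canonical correlation between EEG (samples x channels)
    and sin/cos references at freq and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    ref = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                           for h in range(n_harmonics)
                           for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]


# Example: a 40-target speller at 90% accuracy, 4 s per selection.
print(itr_bits_per_min(40, 0.90, 4.0))           # ~65 bits/min

# Pick the stimulation frequency whose reference correlates best with the EEG.
fs, eeg = 250, np.random.randn(1000, 8)          # 4 s of 8-channel EEG (placeholder)
freqs = [8.0, 10.0, 12.0, 15.0]
print(max(freqs, key=lambda f: cca_score(eeg, f, fs)))
```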

5.
Biomed Eng Online; 21(1): 50, 2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35883092

ABSTRACT

BACKGROUND: Brain-controlled wheelchairs (BCWs) are important applications of brain-computer interfaces (BCIs). Currently, most BCWs are semiautomatic, and when users want to reach a target of interest in their immediate environment, this semiautomatic interaction strategy is slow. METHODS: To this end, we combined computer vision (CV) and augmented reality (AR) with a BCW and propose the CVAR-BCW, a BCW with a novel automatic interaction strategy. The CVAR-BCW uses a translucent head-mounted display (HMD) as the user interface, uses CV to automatically detect the environment, and displays the detected targets through AR. Once a user has chosen a target, the CVAR-BCW navigates to it automatically. Because the semiautomatic strategy may still be useful in a few scenarios, we also integrated a semiautomatic interaction framework into the CVAR-BCW, allowing the user to switch between the automatic and semiautomatic strategies. RESULTS: We recruited 20 non-disabled subjects and used the accuracy, information transfer rate (ITR), and average time required for the CVAR-BCW to reach each designated target as performance metrics. The experimental results showed that the CVAR-BCW performed well in indoor environments: the average accuracies across all subjects were 83.6% (automatic) and 84.1% (semiautomatic), the average ITRs were 8.2 bits/min (automatic) and 8.3 bits/min (semiautomatic), the average times required to reach a target were 42.4 s (automatic) and 93.4 s (semiautomatic), and the average workload and degree of fatigue for the two strategies were both approximately 20. CONCLUSIONS: The CVAR-BCW provides a user-centric interaction approach and a good framework for integrating more advanced artificial intelligence technologies, which may be useful in the field of disability assistance.
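
To make the automatic interaction strategy concrete, the skeleton below sketches one detect-display-select-navigate cycle in Python. Every function and data structure is a simplified, simulated placeholder introduced here for illustration; none of it is the published CVAR-BCW implementation.

```python
# Hypothetical sketch of the automatic interaction loop described above:
# CV detects candidate targets, AR overlays them on the HMD, the BCI
# decodes the user's selection, and the wheelchair navigates autonomously.
# Every function below is a simplified, simulated placeholder.
import random
from dataclasses import dataclass


@dataclass
class Target:
    label: str
    position: tuple  # (x, y) in the wheelchair's map frame (assumed)


def detect_targets(camera_frame) -> list:
    # CV stage (simulated): return candidate targets found in the frame.
    return [Target("door", (3.0, 1.5)), Target("desk", (1.2, -0.8))]


def render_overlays(targets) -> None:
    # AR stage (simulated): show selectable markers on the translucent HMD.
    print("HMD overlay:", [t.label for t in targets])


def bci_select(targets) -> Target:
    # BCI stage (simulated): decode the user's choice from EEG.
    return random.choice(targets)


def navigate_to(target) -> None:
    # Navigation stage (simulated): drive the wheelchair to the target.
    print(f"navigating to {target.label} at {target.position}")


def automatic_interaction_step(camera_frame=None) -> None:
    targets = detect_targets(camera_frame)
    if targets:
        render_overlays(targets)
        navigate_to(bci_select(targets))


automatic_interaction_step()
```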


Subject(s)
Augmented Reality, Brain-Computer Interfaces, Wheelchairs, Artificial Intelligence, Brain, Computers, Electroencephalography, Humans