1.
Heliyon ; 10(5): e26521, 2024 Mar 15.
Article En | MEDLINE | ID: mdl-38463871

Background and objective: Brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEP) are expected to help disabled patients achieve alternative prosthetic hand assistance. However, existing studies still have shortcomings in interaction aspects such as the stimulus paradigm and control logic. The purpose of this study was to innovate the visual stimulus paradigm and the asynchronous decoding/control strategy by integrating augmented reality technology, and to propose an asynchronous pattern recognition algorithm, thereby improving the interaction logic and practical capabilities of a prosthetic hand driven by the BCI system.

Methods: An asynchronous visual stimulus paradigm based on an augmented reality (AR) interface was proposed, with 8 control modes: Grasp, Put down, Pinch, Point, Fist, Palm push, Hold pen, and Initial. According to the attentional-orienting characteristics of the paradigm, a novel asynchronous pattern recognition algorithm combining center extended canonical correlation analysis and a support vector machine (Center-ECCA-SVM) was proposed. This study then proposed an intelligent BCI system switch based on a deep learning object detection algorithm (YOLOv4) to improve the level of user interaction. Finally, two experiments were designed to test the performance of the brain-controlled prosthetic hand system and its practical performance in real scenarios.

Results: Under the AR paradigm, compared with the liquid crystal display (LCD) paradigm, the average SSVEP spectrum amplitude across subjects increased by 17.41% and the signal-to-noise ratio (SNR) increased by 3.52%. The average stimulus pattern recognition accuracy was 96.71 ± 3.91%, which was 2.62% higher than under the LCD paradigm. With a data analysis window of 2 s, the Center-ECCA-SVM classifier achieved 94.66 ± 3.87% and 97.40 ± 2.78% asynchronous pattern recognition accuracy under the Normal and Tolerant metrics, respectively. The YOLOv4-tiny model achieved a speed of 25.29 fps and 96.4% detection confidence for the prosthetic hand in real-time detection. Finally, the brain-controlled prosthetic hand helped the subjects complete 4 kinds of daily-life tasks in real scenes, with completion times all within an acceptable range, which verified the effectiveness and practicability of the system.

Conclusion: This research focused on improving the user interaction level of a prosthetic hand driven by a BCI system, with improvements to the SSVEP paradigm, asynchronous pattern recognition, interaction, and control logic. It also provides support for BCI applications in alternative prosthetic control and in movement disorder rehabilitation programs.
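The Center-ECCA-SVM stage builds on standard canonical correlation analysis (CCA) between multichannel EEG and sine/cosine reference templates at each stimulus frequency. Below is a minimal Python sketch of that baseline CCA step only; the sampling rate, channel count, and stimulus frequencies are assumptions not given in the abstract, and the paper's center-extended templates and SVM stage are not reproduced.

```python
# Minimal sketch of CCA-based SSVEP frequency recognition (baseline step only).
# FS, N_HARMONICS, and stim_freqs are hypothetical values for illustration.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250          # sampling rate in Hz (assumed)
N_HARMONICS = 3   # sine/cosine harmonics per stimulus frequency
stim_freqs = [8.0, 9.0, 10.0, 11.0]  # hypothetical SSVEP stimulus frequencies

def make_reference(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Build the sine/cosine reference matrix for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)  # shape: (n_samples, 2 * n_harmonics)

def cca_score(eeg, freq):
    """Maximum canonical correlation between EEG (samples x channels) and refs."""
    refs = make_reference(freq, eeg.shape[0])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify_epoch(eeg):
    """Pick the stimulus frequency whose references correlate best with the EEG."""
    scores = [cca_score(eeg, f) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))], scores

# Example: a 2 s epoch of 8-channel EEG (random placeholder data)
epoch = np.random.randn(2 * FS, 8)
pred_freq, scores = classify_epoch(epoch)
```

In the asynchronous setting of the paper, an SVM stage on top of such correlation features decides whether the user is actively attending a stimulus at all, rather than always emitting one of the stimulus classes.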

2.
Chem Sci ; 14(27): 7564-7568, 2023 Jul 12.
Article En | MEDLINE | ID: mdl-37449077

A palladium-catalyzed intramolecular asymmetric hydrocyclopropanylation of alkynes via C(sp3)-H activation has been developed for the synthesis of cyclopropane-fused γ-lactams. The strategy proceeds selectively and with 100% atom economy. Using a chiral phosphoramidite ligand, a range of cyclopropane-fused γ-lactams were prepared from readily available substrates in good yields and with good enantioselectivities.

3.
Comput Methods Programs Biomed ; 197: 105721, 2020 Dec.
Article En | MEDLINE | ID: mdl-32882593

BACKGROUND AND OBJECTIVE: Surface electromyography (sEMG) has been used in robotic rehabilitation engineering for volitional control of hand prostheses and elbow exoskeletons; however, volitional control of an upper limb exoskeleton using sEMG has not yet been fully developed. The long-term goal of our study is to process shoulder muscle bio-electrical signals for motion control of rehabilitative robotic assistive devices. The purposes of this study were: 1) to test the feasibility of machine learning algorithms for shoulder motion pattern recognition using sEMG signals from shoulder and upper limb muscles, and 2) to investigate the influence of motion speed, individual variability, EMG recording device, and the number of EMG datasets on shoulder motion pattern recognition accuracy.

METHODS: A novel convolutional neural network (CNN) structure was constructed to process EMG signals from 12 muscles for the pattern recognition of upper arm motions, including resting, drinking, backward-forward motion, and abduction. The accuracy of the CNN models for pattern recognition under different motion speeds, across individuals, and across EMG recording devices was statistically analyzed using ANOVA, GLM univariate analysis, and chi-square tests. The influence of the number of EMG datasets used for CNN model training on recognition accuracy was studied by gradually increasing the dataset number until the highest accuracy was obtained.

RESULTS: The accuracy of the normal-speed CNN model in motion pattern recognition was 97.57% for normal-speed motions and 97.07% for fast-speed motions. The accuracy of the cross-subject CNN model was 79.64%. The accuracy of the cross-device CNN model was 88.93% for normal-speed motion and 80.87% for mixed speeds. There was a statistically significant difference in pattern recognition accuracy between the different CNN models.

CONCLUSION: EMG signals of shoulder and upper arm muscles during upper limb motions can be processed with CNN algorithms to recognize motions including drinking, forward/backward motion, abduction, and resting. A simple CNN model trained on EMG datasets of a designated motion speed accurately detected motion patterns at the same speed, yielding the highest accuracy compared with the mixed CNN models for various motion speeds. Increasing the number of EMG datasets used for CNN model training improved the pattern recognition accuracy.


Algorithms; Shoulder; Electromyography; Hand; Humans; Machine Learning; Movement
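As an illustration of the kind of model described in this record, here is a minimal PyTorch sketch of a 1D CNN classifier over windows of 12-channel sEMG. The abstract does not specify the architecture, so the layer sizes, window length, and class ordering below are assumptions, not the paper's model.

```python
# Minimal 1D CNN sketch for sEMG motion classification (hypothetical sizes).
import torch
import torch.nn as nn

N_MUSCLES = 12   # EMG channels, as in the study
N_CLASSES = 4    # resting, drinking, backward-forward, abduction (assumed order)
WIN = 200        # samples per analysis window (assumed)

class EMGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_MUSCLES, 32, kernel_size=5, padding=2),  # temporal conv
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):             # x: (batch, N_MUSCLES, WIN)
        return self.classifier(self.features(x).squeeze(-1))

model = EMGNet()
logits = model(torch.randn(8, N_MUSCLES, WIN))  # batch of 8 EMG windows
pred = logits.argmax(dim=1)                     # predicted motion class per window
```

Training such a model separately per motion speed, as the conclusion suggests, amounts to fitting one `EMGNet` instance per speed-specific dataset rather than pooling mixed-speed data.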