ABSTRACT
Introduction: Intracortical brain-computer interfaces (iBCIs) establish a new pathway to restore motor function in individuals with paralysis by interfacing directly with the brain to translate movement intention into action. However, the development of iBCI applications is hindered by the non-stationarity of neural signals induced by recording degradation and variance in neuronal properties. Many iBCI decoders have been developed to overcome this non-stationarity, but its effect on decoding performance remains largely unknown, posing a critical challenge for the practical application of iBCIs. Methods: To improve our understanding of the effects of non-stationarity, we conducted a 2D-cursor simulation study examining several of its forms. Concentrating on spike signal changes in chronic intracortical recordings, we used three metrics to simulate non-stationarity: mean firing rate (MFR), number of isolated units (NIU), and neural preferred directions (PDs). MFR and NIU were decreased to simulate recording degradation, while PDs were changed to simulate variance in neuronal properties. Performance was then evaluated on the simulated data for three decoders and two training schemes: Optimal Linear Estimation (OLE), the Kalman Filter (KF), and a Recurrent Neural Network (RNN), each trained under static and retrained schemes. Results: In our evaluation, the RNN decoder and the retrained scheme showed consistently better performance under mild recording degradation; however, severe signal degradation eventually caused a significant performance drop. The RNN also performed significantly better than the other two decoders in decoding simulated non-stationary spike signals, and the retrained scheme maintained high decoder performance when changes were limited to PDs.
Discussion: Our simulation work demonstrates the effects of neural signal non-stationarity on decoding performance and serves as a reference for selecting decoders and training schemes in chronic iBCI use. Our results suggest that, compared with KF and OLE, the RNN achieves better or equivalent performance under both training schemes. Decoder performance under the static scheme is influenced by both recording degradation and neuronal property variation, whereas decoders under the retrained scheme are influenced only by the former.
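The simulated manipulations above can be sketched with a toy cosine-tuned population and an OLE decoder. All numbers here (population size, tuning depth, noise level, degradation amounts) are illustrative assumptions, not the study's actual simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of cosine-tuned units with random preferred directions (PDs).
n_units, n_samples = 40, 500
angles = rng.uniform(0.0, 2.0 * np.pi, n_units)
pd_vecs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit PD vectors
baseline, depth = 10.0, 1.0

def firing_rates(vel, pd_vecs, gain=1.0, dropped=0):
    """Simulated rates: gain<1 lowers MFR, dropped>0 lowers the unit count (NIU)."""
    rates = gain * (baseline + depth * vel @ pd_vecs.T)
    rates += rng.normal(0.0, 1.0, rates.shape)    # recording noise
    if dropped:
        rates[:, :dropped] = 0.0                  # lost units
    return rates

vel = rng.normal(0.0, 1.0, (n_samples, 2))        # 2D cursor velocities
# OLE decoder: least-squares map from firing rates to velocity.
W, *_ = np.linalg.lstsq(firing_rates(vel, pd_vecs), vel, rcond=None)

def score(X):
    """Correlation between decoded and true velocity."""
    return np.corrcoef((X @ W).ravel(), vel.ravel())[0, 1]

# Simulated non-stationarities applied at test time only (static scheme).
rot = rng.normal(0.0, 0.8, n_units)               # per-unit PD shift
pd_rot = np.stack([np.cos(angles + rot), np.sin(angles + rot)], axis=1)

r_clean = score(firing_rates(vel, pd_vecs))
r_niu = score(firing_rates(vel, pd_vecs, dropped=n_units // 2))  # fewer units
r_mfr = score(firing_rates(vel, pd_vecs, gain=0.5))              # lower MFR
r_pd = score(firing_rates(vel, pd_rot))                          # shifted PDs
```

Under a static training scheme, decoding with the original weights degrades when the test-time signal statistics drift, which is the effect the study quantifies across decoders.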
ABSTRACT
OBJECTIVE: As the scale of neural recording increases, brain-computer interfaces (BCIs) are constrained by high-dimensional neural features, so dimensionality reduction is required as a preprocessing step. In this context, we propose a novel deep-learning framework to reduce the dimensionality of neural features typically extracted from electrocorticography (ECoG) or local field potentials (LFP). APPROACH: A high-performance autoencoder was implemented by chaining convolutional layers, which handle the spatial and frequency dimensions, with bottleneck long short-term memory (LSTM) layers, which handle the temporal dimension of the features. This autoencoder is further combined with a fully connected layer to regularize training. MAIN RESULTS: Applying the proposed method to two different datasets, we found that it substantially outperforms kernel principal component analysis (KPCA), partial least squares (PLS), preferential subspace identification (PSID), and latent factor analysis via dynamical systems (LFADS). Moreover, the features obtained by our method can be fed to various BCI decoders without significant differences in decoding performance. SIGNIFICANCE: The proposed method is a reliable tool for efficient dimensionality reduction of neural signals. Its high performance and robustness are promising for enhancing the decoding accuracy and long-term stability of online BCI systems based on large-scale neural recordings.
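The encode-bottleneck-decode idea behind the autoencoder can be illustrated with a minimal linear autoencoder in NumPy. This is only a stand-in for the paper's conv+LSTM architecture; the data dimensions, bottleneck size, and training settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy high-dimensional "neural features": 64-D observations driven by a 4-D
# latent, standing in for ECoG/LFP feature vectors.
n, d, k = 2000, 64, 4
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))
X /= X.std()                                      # normalize feature scale

W_enc = 0.1 * rng.normal(size=(d, k))             # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))             # decoder weights
lr = 1e-3

def mse():
    """Reconstruction error through the bottleneck."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

mse_init = mse()
for _ in range(500):                              # plain gradient descent
    Z = X @ W_enc                                 # encode to 4-D bottleneck
    err = Z @ W_dec - X                           # reconstruction error
    W_dec -= lr * (Z.T @ err) / n
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n
mse_final = mse()
```

After training, the 4-D bottleneck activations `X @ W_enc` are the reduced features that a downstream decoder would consume; the paper's convolutional and LSTM layers replace these linear maps to exploit spatial, frequency, and temporal structure.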
Subjects
Brain-Computer Interfaces, Electrocorticography, Electroencephalography, Least-Squares Analysis
ABSTRACT
Local field potentials (LFPs) have better long-term stability than spikes in brain-machine interfaces (BMIs). Many studies have shown promising LFP decoding results, but the high dimensionality of LFP features still hinders the development of low-cost BMIs. In this paper, we propose a framework based on a one-dimensional convolutional neural network (CNN) to reduce the dimensionality of LFP features. To evaluate this architecture, the reduced LFP features were decoded into cursor position (center-out task) with a Kalman filter; principal component analysis (PCA) was performed as a baseline. The results showed that the CNN model could reduce the LFP features to a smaller size without significant performance loss, and decoding based on the CNN features outperformed decoding based on the PCA features. Moreover, the CNN-reduced features were robust across sessions. These results demonstrate that the features reduced by the CNN model achieve low computational cost without sacrificing performance or robustness, suggesting that this method could be used in portable BMI systems in the future.
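The Kalman-filter decoding stage can be sketched as follows, with simulated cursor kinematics and a hypothetical linear observation model standing in for the CNN-reduced LFP features (the dimensions and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate cursor kinematics (state: x, y, vx, vy) and a low-dimensional
# feature stream that reads the state out linearly.
T, d_state, d_obs = 200, 4, 8
A = np.eye(d_state)
A[0, 2] = A[1, 3] = 0.05                 # position integrates velocity
Q = 0.01 * np.eye(d_state)               # process noise covariance
H = rng.normal(size=(d_obs, d_state))    # feature readout (observation model)
R = 0.5 * np.eye(d_obs)                  # feature noise covariance

x = np.zeros(d_state)
states, obs = [], []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(d_state), Q)
    states.append(x.copy())
    obs.append(H @ x + rng.multivariate_normal(np.zeros(d_obs), R))
states, obs = np.array(states), np.array(obs)

# Standard Kalman filter: predict with the kinematic model, correct with
# each incoming feature vector.
x_hat, P, decoded = np.zeros(d_state), np.eye(d_state), []
for y in obs:
    x_hat = A @ x_hat                               # predict
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x_hat = x_hat + K @ (y - H @ x_hat)             # correct
    P = (np.eye(d_state) - K @ H) @ P
    decoded.append(x_hat.copy())
decoded = np.array(decoded)

r = np.corrcoef(decoded[:, 0], states[:, 0])[0, 1]  # x-position fidelity
```

In the paper's pipeline, the observation vector at each step would be the CNN-reduced LFP feature rather than this simulated readout; the filter itself is unchanged.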
Subjects
Brain-Computer Interfaces, Motor Cortex, Neural Networks (Computer), Pilot Projects, Principal Component Analysis
ABSTRACT
PURPOSE: Image-guided surgical navigation systems (SNS) have proved to be increasingly important assistive tools for minimally invasive surgery. However, using standard devices such as a keyboard and mouse for human-computer interaction (HCI) is a latent vector for infection, posing risks to patients and surgeons. To address this problem, we propose an optimized LSTM structure based on a depth camera to recognize gestures and apply it to an in-house oral and maxillofacial surgical navigation system (Qin et al. in Int J Comput Assist Radiol Surg 14(2):281-289, 2019). METHODS: The proposed structure, named multi-LSTM, allows multiple input layers and takes into account the relationships between inputs. To combine gesture recognition with the SNS, four left-hand signs, waved along four directions, were designed to correspond to four mouse operations, while the motion of the right hand controls the movement of the cursor. Finally, a phantom study of zygomatic implant placement was conducted to evaluate the feasibility of multi-LSTM as an HCI. RESULTS: 3D hand trajectories of both wrist and elbow from 10 participants were collected to train the recognition network. Tenfold cross-validation was then performed for sign classification, yielding a mean accuracy of 96% ± 3%. In the phantom study, four implants were successfully placed; the average deviations between planned and placed implants were 1.22 mm and 1.70 mm for the entry and end points, respectively, while the angular deviation ranged from 0.4° to 2.9°. CONCLUSION: These results show that this non-contact user interface based on multi-LSTM is a promising tool for eliminating the disinfection problem in the operating room and alleviating the manipulation complexity of surgical navigation systems.
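The HCI mapping described above can be sketched as a small dispatch layer. The specific gesture-to-operation pairing below is hypothetical: the abstract says four left-hand signs map to four mouse operations but does not state which sign triggers which operation.

```python
# Hypothetical pairing of left-hand wave directions to mouse operations.
GESTURE_TO_OPERATION = {
    "wave_up": "left_click",
    "wave_down": "right_click",
    "wave_left": "scroll",
    "wave_right": "drag",
}

class NavigationUI:
    """Minimal state holder: left hand issues operations, right hand moves."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.log = []

    def handle_left_hand(self, gesture):
        """Dispatch a recognized left-hand sign to a mouse operation."""
        op = GESTURE_TO_OPERATION.get(gesture)
        if op is not None:            # ignore unrecognized signs
            self.log.append(op)
        return op

    def handle_right_hand(self, dx, dy):
        """Right-hand motion drives the cursor position."""
        self.x += dx
        self.y += dy
        return self.x, self.y

ui = NavigationUI()
ui.handle_right_hand(3.0, -1.5)
ui.handle_left_hand("wave_up")
```

In the actual system, `gesture` and `(dx, dy)` would come from the multi-LSTM classifier and the depth camera's hand tracking rather than being passed in directly.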
Subjects
Prosthesis Implantation/methods, Computer-Assisted Surgery/methods, User-Computer Interface, Gestures, Humans, Theoretical Models, Operating Rooms, Imaging Phantoms, Prostheses and Implants, Zygoma/surgery
ABSTRACT
Reaching and grasping are highly coupled movements whose underlying neural dynamics have been widely studied in the last decade. To distinguish reaching and grasping encodings, it is essential to present different object identities independently of their positions. Presented here is the design of an automatic apparatus, assembled from a turning table and a three-dimensional (3D) translational device, that achieves this goal. The turning table switches between objects corresponding to different grip types, while the 3D translational device transports the turning table in 3D space. Both are driven independently by motors, so target position and object can be combined arbitrarily. Meanwhile, wrist trajectory and grip type are recorded via a motion capture system and touch sensors, respectively. Representative results from a monkey successfully trained with this system are also described. This apparatus is expected to help researchers study the kinematics, neural principles, and brain-machine interfaces related to upper limb function.
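The arbitrary pairing of objects and positions enabled by the two motorized axes can be sketched as a trial scheduler. The object names and position grid below are illustrative assumptions, not the apparatus' real settings:

```python
import itertools
import random

# The turning table selects the object (one per grip type) and the 3D
# translational device selects the target position, so any object can
# appear at any position.
objects = ["sphere", "plate", "rod"]
positions = [(x, y, z) for x in (-1, 0, 1)
                       for y in (-1, 0, 1)
                       for z in (0, 1)]          # 18 targets in 3D space

# Full crossing of object identity and position, then randomized order,
# so grip type and target location vary independently across trials.
trials = list(itertools.product(objects, positions))
random.Random(0).shuffle(trials)

next_object, next_position = trials[0]           # command both motors per trial
```

Decorrelating object identity from position in this way is what lets reaching-related and grasping-related neural activity be separated in the analysis.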