ABSTRACT
Frequent listening to unfamiliar music excerpts forms functional connectivity in the brain as the music becomes familiar and memorable. However, where these connections spectrally arise in the cerebral cortex during music familiarization has yet to be determined. This study investigates electrophysiological changes in phase-based functional connectivity recorded with electroencephalography (EEG) from twenty participants while they passively listened three times to initially unknown classical music excerpts. Functional connectivity is evaluated by measuring phase synchronization with the weighted phase lag index (WPLI) in different frequency bands, between all pairwise combinations of EEG electrodes, across all repetitions via repeated-measures ANOVA and between every two repetitions of listening. The results indicate increased phase synchronization between the right frontal and right parietal areas in the theta and alpha bands during gradual short-term familiarization. In addition, increased phase synchronization is observed between the right temporal and right parietal areas in the theta band. Overall, this study explores the effects of short-term music familiarization on neural responses, revealing that repetition forms phase coupling in the theta and alpha bands in the right hemisphere during passive listening.
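For illustration, the sketch below estimates the WPLI between two EEG channels from epoched data; the sampling rate, band edges, and synthetic data are assumptions rather than values from the study, and a full analysis would typically rely on a dedicated connectivity toolbox.

```python
# Minimal sketch of a weighted phase lag index (WPLI) estimate between two EEG
# channels, assuming data shaped (n_epochs, n_samples) at a known sampling rate.
# Band edges and data shapes are illustrative, not taken from the paper.
import numpy as np

def wpli(x, y, sfreq, fmin, fmax):
    """WPLI between two channels within a frequency band, averaged over epochs."""
    n = x.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    X = np.fft.rfft(x * np.hanning(n), axis=1)      # per-epoch spectra
    Y = np.fft.rfft(y * np.hanning(n), axis=1)
    im_csd = np.imag(X * np.conj(Y))                # imaginary part of the cross-spectrum
    band = (freqs >= fmin) & (freqs <= fmax)
    num = np.abs(np.mean(im_csd[:, band], axis=0))          # |E{Im(Sxy)}|
    den = np.mean(np.abs(im_csd[:, band]), axis=0) + 1e-12  # E{|Im(Sxy)|}
    return np.mean(num / den)                        # WPLI averaged over band frequencies

# Example with synthetic data: 60 epochs, 2 s at 250 Hz, theta band 5-9 Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 2, 500))
theta_wpli = wpli(epochs[:, 0], epochs[:, 1], sfreq=250, fmin=5, fmax=9)
print(round(theta_wpli, 3))
```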
Subject(s)
Alpha Rhythm, Auditory Perception, Electroencephalography, Frontal Lobe, Music, Parietal Lobe, Theta Rhythm, Humans, Male, Female, Alpha Rhythm/physiology, Young Adult, Parietal Lobe/physiology, Theta Rhythm/physiology, Adult, Auditory Perception/physiology, Frontal Lobe/physiology, Electroencephalography/methods, Temporal Lobe/physiology, Recognition (Psychology)/physiology, Acoustic Stimulation/methods
ABSTRACT
BACKGROUND: Tremors are involuntary rhythmic movements commonly present in neurological diseases such as Parkinson's disease, essential tremor, and multiple sclerosis. Intention tremor is a subtype associated with lesions in the cerebellum and its connected pathways, and it is a common symptom in diseases associated with cerebellar pathology. While clinicians traditionally use tests to identify tremor type and severity, recent advancements in wearable technology have provided quantifiable ways to measure movement and tremor using motion capture systems, app-based tasks and tools, and physiology-based measurements. However, quantifying intention tremor remains challenging due to its changing nature. METHODOLOGY & RESULTS: This review examines the current state of upper-limb tremor assessment technology and discusses potential directions for further developing new and existing algorithms and sensors to better quantify tremor, specifically intention tremor. A comprehensive search of PubMed and Scopus was performed using keywords related to technologies for tremor assessment. Screened results were then filtered for relevance and eligibility and classified by technology type. A total of 243 publications were selected for this review and classified according to type: body-function level (movement-based), activity level (task- and tool-based), and physiology-based. Furthermore, each publication's methods, purpose, and technology are summarized in the appendix table. CONCLUSIONS: Our survey suggests a need for more targeted tasks to evaluate intention tremor, including digitized tasks related to intentional movements, neurological and physiological measurements targeting the cerebellum and its pathways, and signal processing techniques that differentiate voluntary from involuntary movement in motion capture systems.
Subject(s)
Tremor, Wearable Electronic Devices, Humans, Essential Tremor/diagnosis, Movement/physiology, Parkinson Disease/complications, Parkinson Disease/diagnosis, Tremor/diagnosis, Upper Extremity
ABSTRACT
How the brain responds temporally and spectrally when we listen to familiar versus unfamiliar musical sequences remains unclear. This study uses EEG techniques to investigate the continuous electrophysiological changes in the human brain during passive listening to familiar and unfamiliar musical excerpts. EEG activity was recorded in 20 participants while they passively listened to 10 s of classical music, and they were then asked to indicate their self-assessed familiarity. We analyzed the EEG data in two ways: familiarity based on the within-subject design, i.e., averaging trials for each condition and participant, and familiarity based on the same music excerpt, i.e., averaging trials for each condition and music excerpt. By comparing the familiar condition with the unfamiliar condition and the local baseline, sustained low-beta power (12-16 Hz) suppression was observed in both analyses in fronto-central and left frontal electrodes after 800 ms. However, sustained alpha power (8-12 Hz) decreased in fronto-central and posterior electrodes after 850 ms only in the first type of analysis. Our study indicates that listening to familiar music elicits a late sustained spectral response (inhibition of alpha/low-beta power from 800 ms to 10 s). Moreover, the results showed that alpha suppression reflects increased attention or arousal/engagement due to listening to familiar music, whereas low-beta suppression exhibits the effect of familiarity. NEW & NOTEWORTHY: This study differentiates the dynamic temporal-spectral effects of listening to 10 s of familiar music compared with unfamiliar music. It highlights that listening to familiar music leads to continuous suppression in the alpha and low-beta bands, starting ~800 ms after stimulus onset.
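As a rough sketch of the kind of spectral analysis described here, the code below computes the percent power change of a band (e.g., low-beta) relative to a pre-stimulus baseline; the sampling rate, window lengths, and synthetic data are assumptions, not values from the study.

```python
# Illustrative band-power suppression analysis: relative power change of a band
# versus a local pre-stimulus baseline, tracked over time within a trial.
import numpy as np
from scipy.signal import spectrogram

def band_power_change(trial, sfreq, band, baseline_s=0.5):
    """Percent power change in `band` relative to the pre-stimulus baseline."""
    f, t, Sxx = spectrogram(trial, fs=sfreq, nperseg=int(0.5 * sfreq),
                            noverlap=int(0.4 * sfreq))
    sel = (f >= band[0]) & (f <= band[1])
    power = Sxx[sel].mean(axis=0)                 # band power over time
    baseline = power[t <= baseline_s].mean()      # local baseline estimate
    return t, 100.0 * (power - baseline) / baseline

rng = np.random.default_rng(1)
trial = rng.standard_normal(int(10.5 * 250))      # 0.5 s baseline + 10 s music, 250 Hz
t, low_beta_change = band_power_change(trial, sfreq=250, band=(12, 16))
print(low_beta_change[t > 1.3][:3])               # change after ~0.8 s post-onset
```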
Subject(s)
Music, Humans, Electroencephalography/methods, Brain/physiology, Auditory Perception/physiology, Recognition (Psychology)/physiology
ABSTRACT
Soft exosuits hold promise for assisting users in everyday workload tasks. However, acceptance of such systems remains low due to the difficulty of control compared with rigid mechatronic systems. Recently, there has been progress in developing control schemes for soft exosuits that move in line with user intentions. While initial results have demonstrated sufficient device performance, the assessment of user experience via the cognitive response has yet to be evaluated. To address this, we propose a soft pneumatic elbow exosuit, designed based on our previous work, that provides assistance in line with user expectations using two existing state-of-the-art control methods: gravity compensation and a myoprocessor based on muscle activation. A user experience study was conducted to assess whether the device moves naturally with user expectations and to gauge the potential for device acceptance by determining, through the neuro-cognitive and motor responses, when the exosuit violated user expectations. Brain activity from electroencephalography (EEG) data revealed that subjects elicited error-related potentials (ErrPs) in response to unexpected exosuit actions, which were decodable across both control schemes with an average accuracy of 76.63 ± 1.73% across subjects. Additionally, unexpected exosuit actions were further decoded via the motor response from electromyography (EMG) and kinematic data with grand average accuracies of 68.73 ± 6.83% and 77.52 ± 3.79%, respectively. This work validates existing state-of-the-art control schemes for soft wearable exosuits through the proposed soft pneumatic elbow exosuit. We demonstrate the feasibility of assessing device performance with respect to the cognitive response by decoding when the device violates user expectations, in order to help understand and promote device acceptance.
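A minimal sketch of single-trial ErrP decoding of the kind reported here is shown below, using windowed-mean amplitude features and a shrinkage LDA classifier; the epoch shapes, time windows, and data are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch of single-trial ErrP decoding: windowed-mean amplitude features
# from epoched EEG fed to a shrinkage LDA, evaluated with cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def windowed_means(epochs, sfreq, windows=((0.2, 0.3), (0.3, 0.4), (0.4, 0.5))):
    """Mean amplitude per channel in several post-event windows -> feature vector."""
    feats = []
    for start, stop in windows:
        a, b = int(start * sfreq), int(stop * sfreq)
        feats.append(epochs[:, :, a:b].mean(axis=2))
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(2)
epochs = rng.standard_normal((200, 32, 250))     # trials x channels x samples (1 s @ 250 Hz)
labels = rng.integers(0, 2, 200)                 # expected vs. unexpected exosuit action
X = windowed_means(epochs, sfreq=250)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, labels, cv=5).mean())
```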
Subject(s)
Exoskeleton Device, Robotics, Humans, Elbow, Biomechanical Phenomena, Cognition
ABSTRACT
Attentional orienting towards others' gaze direction or pointing has been well investigated in laboratory conditions. However, less is known about the operation of attentional mechanisms in online naturalistic social interaction scenarios. It is equally plausible that following social directional cues (gaze, pointing) occurs reflexively, and/or that it is influenced by top-down cognitive factors. In a mobile eye-tracking experiment, we show that under natural interaction conditions, overt attentional orienting is not necessarily reflexively triggered by pointing gestures or a combination of gaze shifts and pointing gestures. We found that participants conversing with an experimenter, who would produce pointing gestures as well as directional gaze movements during the interaction, continued to focus their gaze mostly on the face of the experimenter, demonstrating the significance of attending to the face of the interaction partner, in line with effective top-down control over reflexive orienting of attention in the direction of social cues.
Subject(s)
Attention/physiology, Cues (Psychology), Face, Gestures, Spatial Orientation/physiology, Adult, Female, Ocular Fixation/physiology, Humans, Male, Photic Stimulation/methods, Young Adult
ABSTRACT
For spatiotemporal learning with neural networks, hyperparameters are often set manually by a human expert. This is especially the case with multiple timescale networks, which require careful setting of the timescale values in order to learn spatiotemporal data. However, this implies a cumbersome trial-and-error process until suitable parameters are found, and it reduces the long-term autonomy of artificial agents, such as robots that are controlled by multiple timescale networks. To solve this problem, we propose the evolutionary optimized multiple timescale recurrent neural network (EO-MTRNN), inspired by the neural plasticity of the human cortex. Our proposed network uses evolutionary optimization to adjust its timescales and to rewire itself in terms of the number of neurons and synapses. Moreover, it does not require additional neural networks for pre- and post-processing of input-output data. We validate the EO-MTRNN by applying it to a proposed benchmark training dataset with single and multiple sequence training cases, as well as to sensory-motor data from a robot. We compare different configuration modes of the network, and we compare the learning performance between a network configuration with manually set hyperparameters and a configuration with automatically estimated hyperparameters. The results show that automatically estimated hyperparameters yield approximately 43% better performance than manually set ones, without overfitting the given teaching data. We also validate the generalization ability by successfully learning data that were not included in the hyperparameter estimation process.
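The sketch below illustrates the general idea of evolutionary hyperparameter optimization over network timescales; the fitness function is a stand-in for training the network and measuring its error, and all parameter ranges and values are assumptions rather than those of the EO-MTRNN.

```python
# Minimal sketch of evolutionary hyperparameter search in the spirit of the
# EO-MTRNN: candidate timescale settings are mutated and selected by training error.
import numpy as np

def training_error(timescales):
    # Placeholder for "train the network with these timescales, return its error".
    target = np.array([2.0, 30.0, 100.0])          # hypothetical fast/mid/slow optimum
    return float(np.sum((np.log(timescales) - np.log(target)) ** 2))

def evolve(pop_size=20, n_gen=50, n_params=3, rng=np.random.default_rng(3)):
    pop = rng.uniform(1.0, 200.0, size=(pop_size, n_params))    # initial timescales
    for _ in range(n_gen):
        fitness = np.array([training_error(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]     # truncation selection
        children = parents * np.exp(0.2 * rng.standard_normal(parents.shape))  # log-normal mutation
        pop = np.vstack([parents, np.clip(children, 1.0, 500.0)])
    return pop[np.argmin([training_error(ind) for ind in pop])]

print(np.round(evolve(), 1))   # evolved timescale values
```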
Subject(s)
Neurological Models, Neural Networks (Computer), Psychomotor Performance/physiology, Robotics/methods, Spatial Learning/physiology, Time Perception/physiology, Factual Databases, Humans, Time Factors
ABSTRACT
The sense of touch enables us to safely interact with and control our contacts with our surroundings. Many technical systems and applications could profit from a similar type of sense. Yet, despite the emergence of e-skin systems covering more extensive areas, large-area realizations of e-skin that effectively boost applications are still rare. Recent advancements have improved the deployability and robustness of e-skin systems, laying the basis for their scalability. However, the upscaling of e-skin systems introduces yet another challenge: handling a large amount of heterogeneous tactile information with complex spatial relations between sensing points. We targeted this challenge and proposed an event-driven approach for large-area skin systems. While our previous works focused on the implementation and experimental validation of the approach, this work provides the consolidated foundations for realizing, designing, and understanding large-area event-driven e-skin systems for effective applications. It homogenizes the different perspectives on event-driven systems and assesses the applicability of existing event-driven implementations in large-area skin systems. Additionally, we provide novel guidelines for tuning the novelty threshold of event generators. Overall, this work develops a systematic approach towards realizing a flexible event-driven information handling system on standard computer systems for large-scale e-skin, with detailed descriptions of the effective design of event generators and decoders. All designs and guidelines are validated by outlining their impact on our implementations and by consolidating various experimental results. The resulting system design for e-skin systems is scalable, efficient, flexible, and capable of handling large amounts of information without customized hardware. It makes complex large-area tactile applications feasible, for instance in robotics.
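As a toy illustration of the novelty-threshold idea, the sketch below implements a send-on-delta event generator: a sensing point transmits an event only when its reading deviates from the last transmitted value by more than a threshold. The threshold and data stream are made-up values, not the guidelines derived in the work.

```python
# Send-on-delta event generator: emit an event only when the sensor value is
# "novel enough" compared with the last transmitted value.
from dataclasses import dataclass

@dataclass
class EventGenerator:
    novelty_threshold: float
    last_sent: float = 0.0

    def step(self, sample: float):
        """Return the sensor value as an event if it is novel enough, else None."""
        if abs(sample - self.last_sent) > self.novelty_threshold:
            self.last_sent = sample
            return sample
        return None

gen = EventGenerator(novelty_threshold=0.05)
stream = [0.00, 0.02, 0.04, 0.12, 0.13, 0.30, 0.31]   # raw pressure readings
events = [(i, v) for i, s in enumerate(stream) if (v := gen.step(s)) is not None]
print(events)    # only samples that crossed the threshold are transmitted
```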
Subject(s)
Robotics/standards, Touch/physiology, Wearable Electronic Devices/standards, Computers, Humans, Robotics/trends, Wearable Electronic Devices/trends
ABSTRACT
Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns evoked by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
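A rough PyTorch sketch of a spectrogram-based CNN classifier for two-class motor imagery, in the spirit of model (2), is given below; layer sizes, the input shape, and the absence of training code are simplifying assumptions, not the architecture reported in the paper.

```python
# Hedged sketch of a spectrogram-based CNN for two-class motor imagery decoding.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                      # x: (batch, EEG channels, freq, time)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = SpectrogramCNN()
dummy = torch.randn(8, 3, 64, 64)              # batch of per-trial spectrograms
logits = model(dummy)
print(logits.shape)                            # torch.Size([8, 2])
```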
Subject(s)
Brain-Computer Interfaces, Electroencephalography/methods, Movement/physiology, Neural Networks (Computer), Algorithms, Hand/physiology, Humans, Machine Learning
ABSTRACT
Reusing the tactile knowledge of previously explored objects (prior objects) helps us to easily recognize the tactual properties of new objects. In this paper, we enable a robotic arm equipped with multi-modal artificial skin to actively transfer, as humans do, its prior tactile exploratory action experiences when it learns the detailed physical properties of new objects. These experiences, or prior tactile knowledge, are built from the feature observations that the robot perceives through multiple sensory modalities when it applies pressing, sliding, and static contact movements on objects with different action parameters. We call our method Active Prior Tactile Knowledge Transfer (APTKT) and systematically evaluated its performance in several experiments. Results show that the robot improved the discrimination accuracy by around 10% when it used only one training sample together with the feature observations of prior objects. By further incorporating the predictions from the observation models of prior objects as auxiliary features, our method improved the discrimination accuracy by over 20%. The results also show that the proposed method is robust against transferring irrelevant prior tactile knowledge (negative knowledge transfer).
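The sketch below illustrates the auxiliary-feature idea in its simplest form: class probabilities from a model trained on prior objects are appended to a new object's raw tactile features before training a classifier on very few samples. All data, models, and dimensions are synthetic placeholders, not the APTKT pipeline.

```python
# Hedged sketch: prior-object model predictions used as auxiliary features when
# discriminating new objects from a single training sample per class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
prior_X, prior_y = rng.standard_normal((300, 10)), rng.integers(0, 3, 300)
prior_model = LogisticRegression(max_iter=500).fit(prior_X, prior_y)   # observation model of prior objects

def with_auxiliary(X):
    """Concatenate prior-model class probabilities as auxiliary features."""
    return np.hstack([X, prior_model.predict_proba(X)])

# One training sample per new-object class, mimicking the low-data regime above.
new_train_X, new_train_y = rng.standard_normal((4, 10)), np.arange(4)
new_test_X, new_test_y = new_train_X + 0.1 * rng.standard_normal((4, 10)), np.arange(4)

clf = KNeighborsClassifier(n_neighbors=1).fit(with_auxiliary(new_train_X), new_train_y)
print(clf.score(with_auxiliary(new_test_X), new_test_y))
```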
ABSTRACT
In this article, we present a neurologically motivated computational architecture for visual information processing. The architecture focuses on multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. It is modular and expandable in both hardware and software, so that it can also cope with multisensory integration, making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment, with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX achieves higher efficiency and classification performance than the standard model (up to approximately +6%).
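For orientation, the sketch below implements a strongly simplified version of the first HMAX stages referenced here: S1 Gabor filtering at several orientations followed by C1 local max pooling. Filter parameters and pooling sizes are generic illustrative defaults, not the configuration used in the article.

```python
# Simplified S1 (Gabor filtering) and C1 (local max pooling) stages of HMAX.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor_kernel(size=11, wavelength=5.0, sigma=3.0, theta=0.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def s1_c1(image, orientations=4, pool=8):
    """S1: Gabor responses per orientation; C1: local max over space, subsampled."""
    maps = []
    for k in range(orientations):
        s1 = np.abs(convolve2d(image, gabor_kernel(theta=k * np.pi / orientations), mode="same"))
        maps.append(maximum_filter(s1, size=pool)[::pool, ::pool])   # pooled C1 map
    return np.stack(maps)

rng = np.random.default_rng(5)
c1 = s1_c1(rng.random((64, 64)))
print(c1.shape)       # (orientations, 8, 8)
```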
Subject(s)
Neurological Models, Robotics, Visual Perception, Animals, Computer Simulation, Humans, Neurons, Software, Visual Cortex
ABSTRACT
In this paper, we present an extended mathematical model of the central pattern generator (CPG) in the spinal cord. The proposed CPG model is used as the underlying low-level controller of a humanoid robot to generate various walking patterns. Such biological mechanisms have been demonstrated to be robust in animal locomotion. Our model is supported by two neurophysiological studies. The first study identified neural circuitry consisting of a two-layered CPG, in which pattern formation and rhythm generation are produced at different levels. The second study focused on a specific neural model that can generate different patterns, including oscillation. This neural model was employed in the pattern generation layer of our CPG, enabling it to produce different motion patterns, rhythmic as well as non-rhythmic. Due to the pattern-formation layer, the CPG is able to produce behaviors related to the dominating rhythm (extension/flexion) and rhythm deletion without rhythm resetting. The proposed multi-layered multi-pattern CPG model (MLMP-CPG) has been deployed in a 3D humanoid robot (NAO) while it performs locomotion tasks. The effectiveness of our model is demonstrated in simulations and through experimental results.
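To give a flavor of how a CPG cell can generate rhythmic output, the sketch below integrates a generic two-neuron oscillator with mutual inhibition and adaptation (a Matsuoka-style model); it is only an illustrative stand-in, not the specific neuron model used in the MLMP-CPG.

```python
# Generic two-neuron oscillator (mutual inhibition with adaptation) producing an
# alternating flexor/extensor-like signal via simple Euler integration.
import numpy as np

def matsuoka_oscillator(steps=4000, dt=0.001, tau=0.05, tau_a=0.3,
                        beta=2.5, w=2.5, drive=1.0):
    x = np.array([0.1, 0.0])     # membrane states
    v = np.zeros(2)              # adaptation states
    out = np.zeros(steps)
    for k in range(steps):
        y = np.maximum(x, 0.0)                       # firing rates
        dx = (-x - beta * v - w * y[::-1] + drive) / tau   # mutual inhibition
        dv = (-v + y) / tau_a
        x, v = x + dt * dx, v + dt * dv
        out[k] = y[0] - y[1]                         # alternating output signal
    return out

signal = matsuoka_oscillator()
print(signal[::500].round(2))    # samples of the rhythmic output
```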
Subject(s)
Computer Simulation, Locomotion, Neurological Models, Robotics, Animals, Humans, Spinal Cord, Walking
ABSTRACT
In the field of sensory neuroprostheses, one ultimate goal is for individuals to perceive artificial somatosensory information and use the prosthesis with a level of complexity that resembles an intact system. To this end, research has shown that stimulation-elicited somatosensory information improves prosthesis perception and task performance. While studies strive to achieve sensory integration, a crucial phenomenon that entails naturalistic interaction with the environment, this topic has not been commensurately reviewed. Therefore, here we present a perspective for understanding sensory integration in neuroprostheses. First, we review the engineering aspects and functional outcomes of sensory neuroprosthesis studies. In this context, we summarize studies that have suggested sensory integration, focusing on how they have used stimulation-elicited percepts to maximize and improve the reliability of somatosensory information. Next, we review studies that have suggested multisensory integration. These works have demonstrated that congruent and simultaneous multisensory inputs provide cognitive benefits such that an individual experiences a greater sense of authority over prosthesis movements (i.e., agency) and perceives the prosthesis as part of their own body (i.e., ownership). Thereafter, we present the theoretical and neuroscience framework of sensory integration and examine how behavioral models and neural recordings have been applied in this context. Sensory integration models developed from intact-limb individuals have paved the way for sensory neuroprosthesis studies to demonstrate multisensory integration, and neural recordings have been used to show how multisensory inputs are processed across cortical areas. Lastly, we discuss ongoing research and challenges in achieving and understanding sensory integration in sensory neuroprostheses. Resolving these challenges would help to develop future strategies to improve the sensory feedback of neuroprosthetic systems.
Subject(s)
Neural Prostheses, Humans
ABSTRACT
Repeated listening to unknown music leads to gradual familiarization with musical sequences. Passively listening to musical sequences may involve an array of dynamic neural responses on the way to familiarization with the excerpts. This study elucidates the dynamic brain response and its variation over time by investigating the electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10 s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations were monitored by time-frequency analyses for all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low-beta: 13-21 Hz, high-beta: 21-32 Hz, and gamma: 32-50 Hz). Time-frequency analyses reveal sustained theta event-related desynchronization (ERD) in the frontal-midline and left pre-frontal electrodes, which decreased gradually from the first to the third repetition of the same excerpts (frontal-midline: 57.90%, left pre-frontal: 75.93%). Similarly, sustained gamma ERD decreased in the frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47%, left-frontal: 90.88%, right-frontal: 87.74%). During familiarization, the decrease of theta ERD is greater in the first part (1-5 s), whereas the decrease of gamma ERD is greater in the second part (5-9 s) of the music excerpts. The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming unfamiliar sequences.
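The ERD measure underlying these results is typically computed as the percentage change of band power in an activity interval relative to a reference interval; the sketch below shows one such computation, with band edges, interval lengths, and data as assumed placeholder values.

```python
# Minimal event-related desynchronization (ERD) estimate:
# ERD% = (A - R) / R * 100, where negative values indicate desynchronization.
import numpy as np
from scipy.signal import welch

def erd_percent(ref, act, sfreq, band):
    def band_power(x):
        f, pxx = welch(x, fs=sfreq, nperseg=min(len(x), int(sfreq)))
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()
    r, a = band_power(ref), band_power(act)
    return 100.0 * (a - r) / r

rng = np.random.default_rng(6)
sfreq = 250
baseline = rng.standard_normal(2 * sfreq)            # pre-stimulus reference interval
for rep in range(1, 4):                               # three repetitions of the excerpt
    listening = rng.standard_normal(10 * sfreq)       # 10 s of music listening
    print(rep, round(erd_percent(baseline, listening, sfreq, band=(5, 9)), 1))
```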
Subject(s)
Music, Humans, Electroencephalography/methods, Brain, Auditory Perception/physiology, Brain Mapping
ABSTRACT
Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, in an experimental human subject study featuring a joint HRI task with a UR10 robotic manipulator, we demonstrate the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. The present work further proposes a reinforcement learning based algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scalability to more subtasks is feasible and is mainly accompanied by longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
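The sketch below illustrates the general principle of reinforcement learning from noisy, decoded neural feedback: the robot selects a subtask, the binary human feedback is flipped with a probability reflecting imperfect decoding accuracy, and action values are updated. The decoding accuracy, learning rate, and preferred subtask are assumptions, not the algorithm's actual parameters.

```python
# Bandit-style learning of a subtask assignment from imperfectly decoded feedback.
import numpy as np

rng = np.random.default_rng(7)
n_subtasks, preferred = 4, 2                  # human's (hidden) preferred subtask
decoding_accuracy, alpha, epsilon = 0.7, 0.1, 0.1
q = np.zeros(n_subtasks)                      # action values per subtask

for trial in range(500):
    action = rng.integers(n_subtasks) if rng.random() < epsilon else int(np.argmax(q))
    true_feedback = 1.0 if action == preferred else 0.0
    # The decoded ErrP-like signal flips with probability (1 - decoding accuracy).
    observed = true_feedback if rng.random() < decoding_accuracy else 1.0 - true_feedback
    q[action] += alpha * (observed - q[action])

print(np.argmax(q), q.round(2))               # learned assignment despite noisy feedback
```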
Subject(s)
Robotics, Humans, Brain, Learning, Algorithms, Computer Simulation
ABSTRACT
The grip force dynamics during grasping and lifting of diversely weighted objects are highly informative about an individual's level of sensorimotor control and potential neurological condition. Therefore, grip force profiles might be used for assessment and biofeedback training during neurorehabilitation therapy. Modern neurorehabilitation methods, such as exoskeleton-assisted grasping and virtual-reality-based hand function training, differ strongly from classical grasp-and-lift experiments, which might influence the sensorimotor control of grasping and thus the characteristics of grip force profiles. In this feasibility study with six healthy participants, we investigated the changes in grip force profiles during exoskeleton-assisted grasping and grasping of virtual objects. Our results show that a lightweight and highly compliant hand exoskeleton is able to assist users during grasping while not removing the core characteristics of their grip force dynamics. Furthermore, we show that when participants grasp objects with virtual weights, they adapt quickly to unknown virtual weights and choose efficient grip forces. Moreover, predictive overshoot forces are produced that match the inertial forces which would originate from a physical object of the same weight. In summary, these results suggest that users of advanced neurorehabilitation methods employ and adapt their prior internal forward models for the sensorimotor control of grasping. Incorporating such insights about the grip force dynamics of human grasping into the design of neurorehabilitation methods, such as hand exoskeletons, might improve their usability and rehabilitative function.
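As a back-of-the-envelope illustration of the load-force reasoning above, the sketch below computes the inertial load force that a physical object of a given weight would produce during a lift, which a predictive grip-force overshoot would be expected to track; the mass, acceleration profile, and friction coefficient are made-up values.

```python
# Toy calculation of lift-phase load force and the minimum grip force to prevent slip.
import numpy as np

g = 9.81
mass = 0.3                                    # kg, virtual object weight (assumed)
mu = 0.6                                      # effective finger-object friction coefficient (assumed)
t = np.linspace(0, 0.5, 251)                  # lift phase, seconds
accel = 4.0 * np.sin(2 * np.pi * t / 0.5)     # sinusoidal vertical acceleration over the lift (m/s^2)

load_force = mass * (g + accel)               # force tangential to the grip surfaces
min_grip_force = load_force / (2 * mu)        # minimum grip force for a two-digit grasp

print(round(load_force.max(), 2), round(min_grip_force.max(), 2))
```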
Subject(s)
Exoskeleton Device, Psychomotor Performance, Humans, Fingers, Hand Strength, Upper Extremity
ABSTRACT
The use of soft and flexible bioelectronic interfaces can enhance the quality of recordings of cells' electrical activity by ensuring continuous and intimate contact with the smooth, curving surfaces found in the physiological environment. This work develops soft microelectrode arrays (MEAs) made of silk fibroin (SF) films as recording interfaces that can also serve as a drug delivery system. Inkjet printing is used as a tool to deposit the substrate, conductive electrode, and insulator, as well as a drug-delivery nanocomposite film. This approach is highly versatile, as shown in the fabrication of carbon microelectrodes sandwiched between a silk substrate and a silk insulator. The technique permits the development of thin-film devices that can be employed for in vitro extracellular recordings of HL-1 cell action potentials. The tuning of SF by applying an electrical stimulus to produce a permeable layer that can be used in on-demand drug delivery systems is also demonstrated. The multifunctional MEA developed here can pave the way for in vitro drug screening by applying time-resolved and localized chemical stimuli.
Subject(s)
Fibroins, Silk, Microelectrodes, Drug Delivery Systems, Electric Conductivity
ABSTRACT
Essential tremor (ET) is the most frequent movement disorder in adults. Upper-limb exoskeletons are a promising solution to alleviate ET symptoms. We propose a novel wrist exoskeleton for tremor damping. The TuMove exoskeleton is lightweight, portable, easy to use, and designed for activities of daily living (ADLs) and tasks requiring hand dexterity. We validated the effectiveness of our exoskeleton by inducing forearm tremors using transcutaneous electrical nerve stimulation (TENS) in five healthy subjects. Our results show that the wrist largely retains the range of motion (ROM) needed in ADLs. The damping system reduced the tremor's angular velocity by more than 30% during drinking and pouring tasks. Furthermore, the completion time of the Archimedes spiral decreased by 2.76 seconds (13.0%) and that of the 9-Hole Peg Test by 2.77 seconds (11.8%), indicating a performance improvement in dexterity tasks.
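One way to quantify tremor attenuation from wrist angular-velocity recordings is to band-pass the tremor band and compare the RMS with and without damping, as sketched below; the filter band, sampling rate, and signals are assumptions rather than the evaluation used in the study.

```python
# Illustrative tremor attenuation metric from angular-velocity (gyroscope) data.
import numpy as np
from scipy.signal import butter, filtfilt

def tremor_rms(gyro, sfreq, band=(4.0, 12.0)):
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    return float(np.sqrt(np.mean(filtfilt(b, a, gyro) ** 2)))

rng = np.random.default_rng(8)
sfreq, t = 200, np.arange(0, 10, 1 / 200)
undamped = 30 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)    # 6 Hz tremor, deg/s
damped = 0.6 * undamped                                                     # hypothetical damped recording

reduction = 100 * (1 - tremor_rms(damped, sfreq) / tremor_rms(undamped, sfreq))
print(round(reduction, 1))    # percent reduction in tremor-band angular velocity
```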
Subject(s)
Essential Tremor, Exoskeleton Device, Transcutaneous Electric Nerve Stimulation, Adult, Humans, Wrist, Tremor, Activities of Daily Living, Upper Extremity
ABSTRACT
When a human and a machine collaborate on a shared task, ambiguous events might occur that the human partner perceives as errors. In such events, spontaneous error-related potentials (ErrPs) are evoked in the human brain. Knowing whom the human perceives as responsible for the error would help a machine in co-adaptation and shared-control paradigms to better adapt to human preferences. Therefore, we ask whether self- and agent-related errors evoke different ErrPs. Eleven subjects participated in an electroencephalography human-agent collaboration experiment with a collaborative trajectory-following task on two collaboration levels, where movement errors occurred as trajectory deviations. Independently of the collaboration level, we observed a higher response amplitude at the midline central Cz electrode for self-related errors than for observed errors made by the agent. On average, support vector machines classified self- and agent-related errors with 72.64% accuracy using subject-specific features. These results demonstrate that ErrPs can indicate whether a person attributes an error to themselves or to an external autonomous agent during collaboration. Thus, a collaborative machine can receive more informed feedback on error attribution, allowing appropriate error identification, the possibility of correction, and avoidance in future actions.
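A minimal sketch of the per-subject SVM classification mentioned here is given below, separating self- from agent-related error trials with an RBF-kernel SVM; the features (e.g., Cz amplitudes in post-error windows), labels, and data are synthetic placeholders, not the study's feature set.

```python
# Per-subject SVM classification of self- vs. agent-related error trials (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.standard_normal((120, 24))             # trials x subject-specific ErrP features
y = rng.integers(0, 2, 120)                    # 0 = self-related error, 1 = agent-related error

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean()) # chance-level here, since the data are random
```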
Subject(s)
Brain-Computer Interfaces, Humans, Electroencephalography, Support Vector Machine, Movement, Acclimatization
ABSTRACT
Understanding the human brain's perception of different thermal sensations has sparked the interest of many neuroscientists. The identification of distinct brain patterns during the processing of thermal stimuli has several clinical applications, such as phantom-limb pain prediction, as well as increasing the sense of embodiment when interacting with neurorehabilitation devices. Notwithstanding the remarkable number of studies that have touched upon this research topic, understanding how the human brain processes different thermal stimuli has remained elusive. More importantly, the perception dynamics of very intense thermal stimuli, their related cortical activations, and their decoding using effective features are still not fully understood. In this study, using electroencephalography (EEG) recorded from three healthy human subjects, we identified spatial, temporal, and spectral patterns of brain responses to different thermal stimulations ranging from extremely cold and hot stimuli (very intense) and moderately cold and hot stimuli (intense) to a warm stimulus (innocuous). Our results show that very intense thermal stimuli elicit a decrease in alpha power compared with intense and innocuous stimulations. Spatio-temporal analysis reveals that in the first 400 ms post-stimulus, brain activity increases in the prefrontal and central brain areas for very intense stimulations, whereas for intense stimulation, high activity of the parietal area was observed after 500 ms. Based on these identified EEG patterns, we successfully classified the different thermal stimulations with an average test accuracy of 84% across all subjects. En route to understanding the underlying cortical activity, we source-localized the EEG signal for each of the five thermal stimulus conditions. Our findings reveal that very intense stimuli were anticipated and induced early activation (before 400 ms) of the anterior cingulate cortex (ACC). Moreover, activation of the pre-frontal cortex and the somatosensory, central, and parietal areas was observed in the first 400 ms post-stimulation for very intense conditions and starting 500 ms post-stimulus for intense conditions. Overall, despite the small sample size, this work presents novel findings and a first comprehensive approach to explore, analyze, and classify EEG brain activity changes evoked by five different thermal stimuli, which could lead to a better understanding of thermal stimulus processing in the brain and could therefore pave the way for developing a real-time withdrawal reaction system for interactions with prosthetic limbs. We underpin this last point by benchmarking our EEG results with a demonstration of a real-time withdrawal reaction of a robotic prosthesis using a human-like artificial skin.
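A hedged sketch of a decoding pipeline in this spirit is shown below: per-channel band-power features extracted from post-stimulus epochs and fed to a five-class classifier. The channel count, frequency bands, classifier choice, and synthetic data are assumptions, not the feature set or classifier used in the study.

```python
# Band-power feature extraction and five-class decoding of thermal conditions (sketch).
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs, sfreq):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    f, pxx = welch(epochs, fs=sfreq, nperseg=sfreq, axis=-1)
    feats = [pxx[..., (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(10)
epochs = rng.standard_normal((150, 16, 2 * 250))        # 150 trials, 16 channels, 2 s @ 250 Hz
labels = rng.integers(0, 5, 150)                        # five thermal conditions
X = band_power_features(epochs, sfreq=250)
print(cross_val_score(RandomForestClassifier(200), X, labels, cv=5).mean())
```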
Subject(s)
Brain
ABSTRACT
This paper presents visually guided grip selection for a soft hand exoskeleton intended for hand rehabilitation, based on the combination of object recognition and tactile feedback. A pre-trained neural network is used to recognize the object in front of the hand exoskeleton, and the recognized object is then mapped to a suitable grip type. With this object cue, the exoskeleton actively assists users in performing different grip movements without calibration. In a pilot experiment, one healthy user completed four different grasp-and-move tasks repeatedly. All trials were completed within 25 seconds, and only one out of 20 trials failed. This shows that automated movement training can be achieved through visual guidance even without biomedical sensors. In particular, in the private setting at home without clinical supervision, it is a powerful tool for repetitive training of daily-living activities.
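The object-to-grip mapping can be thought of as a simple lookup from the recognizer's top-1 label to a grip type the exoskeleton can execute, as in the conceptual sketch below; the recognizer is a placeholder, and the labels and grip names are made up, not taken from the paper.

```python
# Conceptual object-to-grip mapping: recognized object label -> grip type.
from typing import Callable

GRIP_MAP = {
    "bottle": "cylindrical",
    "credit card": "lateral pinch",
    "pen": "tripod pinch",
    "ball": "spherical",
}

def select_grip(recognize: Callable[[], str], default: str = "cylindrical") -> str:
    """Map the recognized object in front of the hand to a grip type."""
    label = recognize()                      # e.g., top-1 class of a pre-trained CNN
    return GRIP_MAP.get(label, default)      # fall back to a safe default grip

# Stand-in for the vision model's prediction.
print(select_grip(lambda: "bottle"))         # -> "cylindrical"
```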