ABSTRACT
How the brain responds temporally and spectrally when we listen to familiar versus unfamiliar musical sequences remains unclear. This study uses EEG to investigate the continuous electrophysiological changes in the human brain during passive listening to familiar and unfamiliar musical excerpts. EEG activity was recorded in 20 participants while they passively listened to 10-s classical music excerpts, after which they indicated their self-assessed familiarity. We analyzed the EEG data in two ways: familiarity based on the within-subject design, i.e., averaging trials for each condition and participant, and familiarity based on the same music excerpt, i.e., averaging trials for each condition and music excerpt. Comparing the familiar condition with the unfamiliar condition and the local baseline revealed sustained low-beta (12-16 Hz) power suppression in both analyses in fronto-central and left frontal electrodes after 800 ms. Sustained alpha (8-12 Hz) power, however, decreased in fronto-central and posterior electrodes after 850 ms only in the first analysis. Our study indicates that listening to familiar music elicits a late sustained spectral response (suppression of alpha/low-beta power from 800 ms to 10 s). Moreover, the results suggest that alpha suppression reflects increased attention or arousal/engagement while listening to familiar music, whereas low-beta suppression reflects the effect of familiarity itself. NEW & NOTEWORTHY: This study differentiates the dynamic temporal-spectral effects of listening to 10 s of familiar music compared with unfamiliar music. It highlights that listening to familiar music leads to continuous suppression in the alpha and low-beta bands, starting approximately 800 ms after stimulus onset.
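As a rough illustration of the band-power comparison described above, the sketch below contrasts alpha (8-12 Hz) and low-beta (12-16 Hz) power between familiar and unfamiliar trials. It is a minimal sketch only: the data shapes, sampling rate, and placeholder noise arrays are assumptions, not the study's actual pipeline.

```python
# Minimal sketch: compare alpha / low-beta band power between familiar and
# unfamiliar trials. Epoch arrays, channel count, and sampling rate are
# illustrative assumptions; random noise stands in for recorded EEG.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(epochs, fmin, fmax, fs=FS):
    """Mean spectral power in [fmin, fmax] per trial and channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., mask].mean(axis=-1)              # (n_trials, n_channels)

rng = np.random.default_rng(0)
familiar = rng.standard_normal((40, 32, 10 * FS))    # 40 trials, 32 channels, 10 s
unfamiliar = rng.standard_normal((40, 32, 10 * FS))

for name, (lo, hi) in {"alpha": (8, 12), "low-beta": (12, 16)}.items():
    diff = band_power(familiar, lo, hi).mean() - band_power(unfamiliar, lo, hi).mean()
    print(f"{name}: familiar minus unfamiliar mean power = {diff:.4f}")
```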
Subjects
Music, Humans, Electroencephalography/methods, Brain/physiology, Auditory Perception/physiology, Recognition (Psychology)/physiology
ABSTRACT
BACKGROUND AND OBJECTIVE: Laryngeal dystonia (LD) is a focal task-specific dystonia, predominantly affecting speech but not whispering or emotional vocalizations. Prior neuroimaging studies identified brain regions forming a dystonic neural network and contributing to LD pathophysiology. However, the underlying temporal dynamics of these alterations and their contribution to the task-specificity of LD remain largely unknown. The objective of the study was to identify the temporal-spatial signature of altered cortical oscillations associated with LD pathophysiology. METHODS: We used high-density 128-electrode electroencephalography (EEG) recordings during symptomatic speaking and two asymptomatic tasks, whispering and writing, in 24 LD patients and 22 healthy individuals to investigate the spectral dynamics, spatial localization, and interregional effective connectivity of aberrant cortical oscillations within the dystonic neural network, as well as their relationship with LD symptomatology. RESULTS: Symptomatic speaking in LD patients was characterized by significantly increased gamma synchronization in the middle/superior frontal gyri, primary somatosensory cortex, and superior parietal lobule, establishing an altered prefrontal-parietal loop. Hyperfunctional connectivity from the left middle frontal gyrus to the right superior parietal lobule was significantly correlated with the age of onset and the duration of LD symptoms. Asymptomatic whispering in LD patients showed no statistically significant changes in any frequency band, whereas asymptomatic writing was characterized by significantly decreased synchronization of beta-band power localized in the right superior frontal gyrus. CONCLUSION: Task-specific oscillatory activity of the prefrontal-parietal circuitry is likely one of the underlying mechanisms of aberrant heteromodal integration of information processing and transfer within the neural network leading to dystonic motor output. © 2023 International Parkinson and Movement Disorder Society.
Subjects
Dystonia, Dystonic Disorders, Movement Disorders, Humans, Magnetic Resonance Imaging, Brain
ABSTRACT
Task-specificity in isolated focal dystonias is a powerful feature that may successfully be targeted with therapeutic brain-computer interfaces. While performing a symptomatic task, the patient actively modulates momentary brain activity (disorder signature) to match activity during an asymptomatic task (target signature), which is expected to translate into symptom reduction.
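One hedged way to picture such a target-matching feedback signal is sketched below: the momentary brain-activity feature vector is compared with a stored target signature and the negative distance is returned as a neurofeedback score. The feature definition and scaling are assumptions for illustration, not the proposed interface.

```python
# Illustrative sketch: neurofeedback score as similarity between the momentary
# brain-activity features (disorder signature) and a precomputed target
# signature from an asymptomatic task. Feature choice and scaling are assumed.
import numpy as np

def feedback_score(current_features, target_signature):
    """Higher (less negative) score = momentary activity closer to the target."""
    current = np.asarray(current_features, dtype=float)
    target = np.asarray(target_signature, dtype=float)
    return -np.linalg.norm(current - target)

target = np.array([0.8, 0.2, 0.5])   # e.g., band powers recorded during writing
now = np.array([1.1, 0.6, 0.4])      # band powers during symptomatic speaking
print(f"feedback score: {feedback_score(now, target):.3f}")
```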
Subjects
Brain-Computer Interfaces, Dystonic Disorders, Dystonic Disorders/diagnosis, Dystonic Disorders/therapy, Humans
ABSTRACT
Soft exosuits offer promise for supporting users in everyday workload tasks by providing physical assistance. However, acceptance of such systems remains low due to the difficulty of control compared with rigid mechatronic systems. Recently, there has been progress in developing control schemes for soft exosuits that move in line with user intentions. While initial results have demonstrated sufficient device performance, the user experience, as reflected in the cognitive response, has yet to be evaluated. To address this, we propose a soft pneumatic elbow exosuit, designed based on our previous work to provide assistance in line with user expectations, utilizing two existing state-of-the-art control methods: gravity compensation and a myoprocessor based on muscle activation. A user experience study was conducted to assess whether the device moves naturally with user expectations and to gauge the potential for device acceptance by determining, through the neuro-cognitive and motor response, when the exosuit violated user expectations. Brain activity from electroencephalography (EEG) data revealed that subjects elicited error-related potentials (ErrPs) in response to unexpected exosuit actions, which were decodable across both control schemes with an average accuracy of 76.63 ± 1.73% across subjects. Additionally, unexpected exosuit actions were further decoded via the motor response from electromyography (EMG) and kinematic data, with grand average accuracies of 68.73 ± 6.83% and 77.52 ± 3.79%, respectively. This work validates existing state-of-the-art control schemes for soft wearable exosuits through the proposed soft pneumatic elbow exosuit. We demonstrate the feasibility of assessing device performance with respect to the cognitive response by decoding when the device violates user expectations, in order to help understand and promote device acceptance.
Subjects
Exoskeleton Device, Robotics, Humans, Elbow, Biomechanical Phenomena, Cognition
ABSTRACT
BACKGROUND: Laryngeal dystonia (LD) is an isolated focal dystonia characterized by involuntary spasms in laryngeal muscles that selectively impair speech production. Anecdotal observations report the worsening of LD symptoms in stressful or vocally demanding situations. OBJECTIVES: To examine the impact of surrounding audio-visual complexity on LD symptomatology for a better understanding of disorder phenomenology. METHODS: We developed well-controlled virtual reality (VR) environments of real-life interpersonal communication to investigate how different levels of audio-visual complexity may impact LD symptoms. The VR experiments were conducted over five consecutive days, during which each patient completed 4100 experimental trials over 10 h in VR with gradually increasing audio-visual complexity. Daily reports were collected on patients' voice changes, as well as their comfort, engagement, concentration, and drowsiness while using the VR technology. RESULTS: After a week of VR exposure, 82% of patients reported changes in their voice symptoms related to changes in background audio-visual complexity. Significant differences in voice symptoms were found between the first two levels of audio-visual complexity, independent of study session or VR environment. CONCLUSION: This study demonstrated that LD symptoms are impacted by the audio-visual background across various realistic virtual settings. These findings should be taken into consideration when planning behavioral experiments or evaluating the outcomes of clinical trials in these patients. Moreover, these data show that VR is a reliable and useful technology for providing real-life assessments of the impact of various experimental settings, such as during the testing of novel therapeutic interventions in these patients. LEVEL OF EVIDENCE: Level 3. Laryngoscope, 2024.
ABSTRACT
Repeated listening to unknown music leads to gradual familiarization with musical sequences. Passively listening to musical sequences could involve an array of dynamic neural responses on the way to familiarization with the musical excerpts. This study elucidates the dynamic brain response and its variation over time by investigating the electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10-s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations were monitored by time-frequency analyses for all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low-beta: 13-21 Hz, high-beta: 21-32 Hz, and gamma: 32-50 Hz). Time-frequency analyses reveal sustained theta event-related desynchronization (ERD) in the frontal-midline and left prefrontal electrodes, which decreased gradually from the first to the third repetition of the same excerpts (frontal-midline: 57.90%, left prefrontal: 75.93%). Similarly, sustained gamma ERD decreased in the frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47%, left frontal: 90.88%, right frontal: 87.74%). During familiarization, the decrease in theta ERD is greater in the first part (1-5 s) of the music excerpts, whereas the decrease in gamma ERD is greater in the second part (5-9 s). The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming unfamiliar sequences.
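For illustration, the sketch below computes ERD as a percentage change in band power relative to a pre-stimulus baseline, the standard definition underlying the time-frequency results above. The sampling rate, baseline length, and random placeholder signal are assumptions rather than the study's actual processing chain.

```python
# Sketch of event-related desynchronization (ERD%) relative to a pre-stimulus
# baseline; negative values indicate desynchronization. Sampling rate,
# baseline window, and the placeholder signal are assumptions.
import numpy as np
from scipy.signal import spectrogram

FS = 250            # assumed sampling rate (Hz)
BASELINE_S = 1.0    # assumed 1-s pre-stimulus baseline

def erd_percent(trial, fmin, fmax, fs=FS, baseline_s=BASELINE_S):
    """ERD% over time for one channel."""
    freqs, times, sxx = spectrogram(trial, fs=fs, nperseg=fs // 2, noverlap=fs // 4)
    band = sxx[(freqs >= fmin) & (freqs <= fmax)].mean(axis=0)
    baseline = band[times < baseline_s].mean()
    return times, 100.0 * (band - baseline) / baseline

rng = np.random.default_rng(1)
trial = rng.standard_normal(int(11 * FS))      # 1-s baseline + 10-s excerpt
times, theta_erd = erd_percent(trial, 5, 9)    # theta band as defined above
print(theta_erd[:5])
```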
Subjects
Music, Humans, Electroencephalography/methods, Brain, Auditory Perception/physiology, Brain Mapping
ABSTRACT
Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when the subtask choices of the human are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, in an experimental human-subject study featuring a joint HRI task with a UR10 robotic manipulator, we demonstrate the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. The present work further proposes a reinforcement learning based algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scalability to more subtasks is feasible, mainly at the cost of longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures for mediating the complex and largely unsolved problem of human-robot collaborative task planning.
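The toy simulation below illustrates, under stated assumptions, why learning can still succeed with imperfect decoding: a bandit-style learner receives binary feedback that is flipped at a rate set by an assumed decoding accuracy, yet still converges on the hidden preferred subtask. The learning rule, accuracy value, and trial count are illustrative and not the algorithm proposed in the paper.

```python
# Toy simulation: robot learns which of four subtasks the human expects it to
# take over, from feedback corrupted at an assumed EEG decoding accuracy.
import numpy as np

rng = np.random.default_rng(42)
N_SUBTASKS = 4
DECODING_ACC = 0.7          # assumed single-trial decoding accuracy
LEARNING_RATE = 0.2
human_preference = 2        # hidden subtask the human wants the robot to take

q = np.zeros(N_SUBTASKS)    # robot's value estimate per subtask
correct_choices = 0

for trial in range(300):
    # epsilon-greedy choice of subtask to take over
    choice = rng.integers(N_SUBTASKS) if rng.random() < 0.1 else int(np.argmax(q))
    true_feedback = 1.0 if choice == human_preference else -1.0
    # feedback is observed through an imperfect EEG decoder
    observed = true_feedback if rng.random() < DECODING_ACC else -true_feedback
    q[choice] += LEARNING_RATE * (observed - q[choice])
    correct_choices += choice == human_preference

print(f"learned preference: {int(np.argmax(q))}, "
      f"choice accuracy: {correct_choices / 300:.2f}")
```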
Subjects
Robotics, Humans, Brain, Learning, Algorithms, Computer Simulation
ABSTRACT
When a human and a machine collaborate on a shared task, ambiguous events may occur that the human partner could perceive as errors. In such events, spontaneous error-related potentials (ErrPs) are evoked in the human brain. Knowing whom the human perceived as responsible for the error would help a machine in co-adaptation and shared-control paradigms to better adapt to human preferences. Therefore, we ask whether self- and agent-related errors evoke different ErrPs. Eleven subjects participated in an electroencephalography human-agent collaboration experiment with a collaborative trajectory-following task on two collaboration levels, where movement errors occurred as trajectory deviations. Independently of the collaboration level, we observed a higher response amplitude at the midline central Cz electrode for self-related errors compared with errors made by the agent. On average, Support Vector Machines classified self- and agent-related errors with 72.64% accuracy using subject-specific features. These results demonstrate that ErrPs can indicate whether a person attributes an error to themselves or to an external autonomous agent during collaboration. Thus, the collaborative machine receives more informed feedback on error attribution, allowing appropriate error identification, the possibility of correction, and avoidance in future actions.
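A minimal sketch of the classification step, using a Support Vector Machine as named above on synthetic stand-in features, is shown below. The feature construction and class separation are assumptions for illustration only, not the subject-specific features used in the study.

```python
# Minimal sketch: classify self- vs agent-related ErrP epochs with an SVM.
# Features are synthetic stand-ins for time-window amplitudes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_epochs, n_features = 120, 20          # e.g., mean amplitudes in time windows
X_self = rng.normal(1.0, 1.0, (n_epochs // 2, n_features))   # larger Cz response
X_agent = rng.normal(0.0, 1.0, (n_epochs // 2, n_features))
X = np.vstack([X_self, X_agent])
y = np.r_[np.ones(n_epochs // 2), np.zeros(n_epochs // 2)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```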
Subjects
Brain-Computer Interfaces, Humans, Electroencephalography, Support Vector Machine, Movement, Acclimatization
ABSTRACT
This paper presents visually guided grip selection for a soft hand exoskeleton intended for hand rehabilitation, based on the combination of object recognition and tactile feedback. A pre-trained neural network is used to recognize the object in front of the hand exoskeleton, which is then mapped to a suitable grip type. With this object cue, the system actively assists users in performing different grip movements without calibration. In a pilot experiment, one healthy user completed four different grasp-and-move tasks repeatedly. All trials were completed within 25 seconds, and only one out of 20 trials failed. This shows that automated movement training can be achieved by visual guidance alone, without biomedical sensors. In particular, in a private setting at home without clinical supervision, this makes the system a powerful tool for repetitive training of activities of daily living.
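As a rough illustration of the object-to-grip mapping described above, the sketch below maps a recognized object label to a grip type, with a fallback when the object is unknown. The label set and grip names are hypothetical, not the system's actual mapping.

```python
# Hypothetical sketch: map a recognized object label to a grip type that the
# hand exoskeleton then executes. Labels and grip names are illustrative only.
from enum import Enum

class Grip(Enum):
    POWER = "power"
    PINCH = "pinch"
    LATERAL = "lateral"

OBJECT_TO_GRIP = {
    "bottle": Grip.POWER,
    "coin": Grip.PINCH,
    "key": Grip.LATERAL,
}

def select_grip(label: str) -> Grip:
    """Fall back to a power grip if the recognized object is unknown."""
    return OBJECT_TO_GRIP.get(label, Grip.POWER)

print(select_grip("coin"))   # Grip.PINCH
```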
Subjects
Exoskeleton Device, Hand, Hand Strength, Humans, Movement, Touch
ABSTRACT
Accurate and low-power decoding of brain signals such as electroencephalography (EEG) is key to constructing brain-computer interface (BCI) based wearable devices. While deep learning approaches have progressed substantially in terms of decoding accuracy, their power consumption is relatively high for mobile applications. Neuromorphic hardware arises as a promising solution to this problem, since it can run massive spiking neural networks with energy consumption orders of magnitude lower than traditional hardware. Herein, we show the viability of directly mapping a continuous-valued convolutional neural network for motor imagery EEG classification to a spiking neural network. The converted network, able to run on the SpiNNaker neuromorphic chip, shows only a 1.91% decrease in accuracy after conversion. Thus, we take full advantage of both deep learning accuracy and low-power neuro-inspired hardware, properties that are key for the development of wearable BCI devices.
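The sketch below illustrates the rate-coding principle that such direct conversions rely on: a non-leaky integrate-and-fire neuron driven by a constant input fires at a rate approximating the ReLU of that input. This is a conceptual toy under stated assumptions, not the SpiNNaker toolchain or the authors' conversion procedure.

```python
# Conceptual sketch of rate coding behind CNN-to-SNN conversion: an
# integrate-and-fire neuron's firing rate approximates ReLU(input).
import numpy as np

def if_firing_rate(drive, threshold=1.0, steps=1000, dt=1.0):
    """Simulate a non-leaky integrate-and-fire neuron; return spikes per step."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive * dt
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / steps

for drive in [-0.5, 0.0, 0.1, 0.5, 0.9]:
    relu = max(drive, 0.0)
    print(f"drive={drive:+.1f}  ReLU={relu:.2f}  IF rate={if_firing_rate(drive):.2f}")
```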
Subjects
Brain-Computer Interfaces, Deep Learning, Algorithms, Electroencephalography, Neural Networks (Computer)
ABSTRACT
Exoskeletons and prosthetic devices controlled using brain-computer interfaces (BCIs) can be prone to errors due to inconsistent decoding. In recent years, it has been demonstrated that error-related potentials (ErrPs) can be used as a feedback signal in electroencephalography (EEG) based BCIs. However, modern BCIs often take large setup times and are physically restrictive, making them impractical for everyday use. In this paper, we use a mobile and easy-to-setup EEG device to investigate whether an erroneously functioning 1-DOF exoskeleton in different conditions, namely, visually observing and wearing the exoskeleton, elicits a brain response that can be classified. We develop a pipeline that can be applied to these two conditions and observe from our experiments that there is evidence for neural responses from electrodes near regions associated with ErrPs in an environment that resembles the real world. We found that these error-related responses can be classified as ErrPs with accuracies ranging from 60% to 71%, depending on the condition and the subject. Our pipeline could be further extended to detect and correct erroneous exoskeleton behavior in real-world settings.
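A lightweight sketch of such a pipeline is given below: band-pass filtering the continuous EEG, cutting epochs around feedback events, and fitting a simple classifier. The sampling rate, filter band, epoch window, and synthetic data are assumptions, not the pipeline used in the paper.

```python
# Sketch of a minimal ErrP pipeline: band-pass filter, epoch around events,
# classify. All parameters and the toy data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250
B, A = butter(4, [1, 10], btype="bandpass", fs=FS)   # typical ErrP band

def epochs_around(eeg, events, fs=FS, tmin=0.0, tmax=0.8):
    """Cut (n_events, n_channels, n_samples) epochs after each event sample."""
    filt = filtfilt(B, A, eeg, axis=-1)
    length = int((tmax - tmin) * fs)
    return np.stack([filt[:, e:e + length] for e in events])

rng = np.random.default_rng(3)
eeg = rng.standard_normal((8, 60 * FS))              # 8 channels, 60 s of noise
events = np.arange(2 * FS, 55 * FS, 2 * FS)          # one event every 2 s
labels = np.tile([0, 1], len(events) // 2 + 1)[:len(events)]  # dummy error/correct
X = epochs_around(eeg, events).reshape(len(events), -1)

clf = LinearDiscriminantAnalysis().fit(X, labels)
print(f"training accuracy (toy data): {clf.score(X, labels):.2f}")
```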
Subjects
Brain-Computer Interfaces, Exoskeleton Device, Brain, Electroencephalography, Pilot Projects
ABSTRACT
Advances in neuroscience are inspiring developments in robotics and vice versa.
Subjects
Brain-Computer Interfaces, Neurosciences/instrumentation, Robotics/instrumentation, Bioengineering, Biomimetics, Humans, Models (Neurological)
ABSTRACT
Emotions play a critical role in rational and intelligent behavior; a better fundamental knowledge of them is indispensable for understanding higher-order brain function. We propose a non-invasive brain-computer interface (BCI) system that feeds back a person's affective state such that a closed-loop interaction between the participant's brain responses and the musical stimuli is established. We realized this concept technically in a functional prototype of an algorithm that generates continuous and controllable patterns of synthesized affective music in real time, embedded within a BCI architecture. We evaluated the concept in two separate studies. In the first study, we tested the efficacy of the music algorithm by measuring subjective affective responses from 11 participants. In a second pilot study, the algorithm was embedded in a real-time BCI architecture to investigate affective closed-loop interactions in 5 participants. Preliminary results suggested that participants were able to intentionally modulate the musical feedback by self-inducing emotions (e.g., by recalling memories), indicating that the system was able not only to capture the listener's current affective state in real time, but also, potentially, to provide a tool for listeners to mediate their own emotions by interacting with music. The proposed concept offers a tool to study emotions in the loop, promising to cast a complementary light on emotion-related brain research, particularly in terms of clarifying the interactive, spatio-temporal dynamics underlying affective processing in the brain.
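The sketch below illustrates one way such a closed loop could be parameterized: a decoded valence/arousal estimate is mapped to simple control parameters of the generated music. The parameter ranges and the mapping itself are assumptions for illustration, not the authors' generation algorithm.

```python
# Illustrative sketch: map a decoded affective state (valence, arousal in
# [-1, 1]) to hypothetical musical control parameters for the next loop step.
def affect_to_music(valence: float, arousal: float) -> dict:
    """Map an affective state to simple musical control parameters."""
    return {
        "tempo_bpm": 60 + 60 * (arousal + 1) / 2,       # 60-120 bpm
        "mode": "major" if valence >= 0 else "minor",
        "loudness": 0.3 + 0.7 * (arousal + 1) / 2,      # normalized 0.3-1.0
    }

# One step of the loop: decoded affect (stubbed here) updates the music.
decoded_state = {"valence": 0.4, "arousal": -0.2}       # placeholder decoder output
print(affect_to_music(**decoded_state))
```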
Subjects
Algorithms, Auditory Perception/physiology, Brain-Computer Interfaces, Emotions/physiology, Adult, Female, Humans, Male, Music, Pilot Projects
ABSTRACT
OBJECTIVE: Error-related potentials (ErrPs) have been proposed as an intuitive feedback signal, decoded from the ongoing electroencephalogram (EEG) of a human observer, for improving human-robot interaction (HRI). While recent demonstrations of this approach have successfully studied the use of ErrPs as a teaching signal for robot skill learning, so far, no efforts have been made towards HRI scenarios where mutual adaptations between human and robot are expected or required. These are collaborative or socially interactive scenarios without a predefined dominance of the human partner, in which robots are perceived as intentional agents. Here we explore the usability of ErrPs as a feedback signal from the human for mediating co-adaptation in human-robot interaction. APPROACH: We experimentally demonstrate ErrP-based mediation of co-adaptation in a human-robot interaction study where successful interaction depended on co-adaptive convergence to a consensus between human and robot. While subjects adapted to the robot by reflecting on its behavior, the robot adapted its behavior based on ErrPs decoded online from the human partner's ongoing EEG. MAIN RESULTS: ErrPs were decoded online on a single-trial basis with an average accuracy of 81.8 ± 8.0% across 13 subjects, which was sufficient for effective adaptation of robot behavior. Successful co-adaptation was demonstrated by significant improvements in the efficacy and efficiency of the human-robot interaction, and by the robot behavior that emerged during co-adaptation. These results indicate the potential of ErrPs as a useful feedback signal for mediating co-adaptation in human-robot interaction, as demonstrated in a practical example. SIGNIFICANCE: As robots become more widely embedded in society, methods for aligning them with human expectations and conventions will become increasingly important. In this quest, ErrPs may constitute a promising complementary feedback signal for guiding adaptations towards human preferences. In this paper, we extended previous research to less constrained HRI scenarios where mutual adaptations between human and robot are expected or required.
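To picture the co-adaptation loop under stated assumptions, the toy sketch below lets a robot adjust preference weights over candidate behaviors whenever an imperfect online ErrP decision flags the last action as an error, drifting toward the behavior the human accepts. The update rule, decoder accuracy, and behavior set are illustrative only, not the study's adaptation mechanism.

```python
# Toy sketch of co-adaptation mediated by online ErrP decisions: detected
# errors down-weight the robot's last behavior; absence of an error up-weights it.
import numpy as np

rng = np.random.default_rng(5)
behaviors = ["slow", "medium", "fast"]
weights = np.ones(len(behaviors))        # robot's preference weights
human_consensus = 1                      # behavior the human would accept
DECODER_ACC = 0.82                       # assumed single-trial decoding accuracy

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

for step in range(200):
    choice = rng.choice(len(behaviors), p=softmax(weights))
    errp_truth = choice != human_consensus                 # human perceives an error
    errp_detected = errp_truth if rng.random() < DECODER_ACC else not errp_truth
    weights[choice] += -0.3 if errp_detected else 0.3      # adapt robot behavior

print("converged preference:", behaviors[int(np.argmax(weights))])
```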