ABSTRACT
In everyday life, people differ in their sound perception and thus in their sound processing. Some people may be distracted by construction noise, while others do not even notice it. With smartphone-based mobile ear-electroencephalography (ear-EEG), we can measure and quantify sound processing in everyday life by analysing presented sounds as well as naturally occurring ones. Twenty-four participants completed four controlled conditions in the lab (1 h) and one condition in the office (3 h). All conditions used the same paired-click stimuli. In the lab, participants listened to click tones under four different instructions: no task towards the sounds, reading a newspaper article, listening to an audio article, or counting a rare deviant sound. In the office recording, participants followed their daily activities while they were sporadically presented with clicks, without any further instruction. In the beyond-the-lab condition, in addition to the presented sounds, environmental sounds were recorded as acoustic features (i.e., loudness, power spectral density and sound onsets). We found task-dependent differences in the auditory event-related potentials (ERPs) to the presented click sounds in all lab conditions, which underline that neural processes related to auditory attention can be differentiated with ear-EEG. In the beyond-the-lab condition, we found ERPs comparable to some of the lab conditions. The N1 amplitude to the click sounds beyond the lab was dependent on the background noise, probably due to energetic masking. Contrary to our expectation, we did not find a clear ERP in response to the environmental sounds. Overall, we showed that smartphone-based ear-EEG can be used to study sound processing of well-defined stimuli in everyday life.
ABSTRACT
Most research investigating auditory perception is conducted in controlled laboratory settings, potentially restricting its generalizability to the complex acoustic environment outside the lab. The present study, in contrast, investigated auditory attention with long-term recordings (> 6 h) beyond the lab using a fully mobile, smartphone-based ear-centered electroencephalography (EEG) setup with minimal restrictions for participants. Twelve participants completed iterations of two variants of an oddball task in which they had to react to target tones and ignore standard tones. A rapid variant of the task (tones every 2 s, 5 min total time) was performed seated and with full focus in the morning, around noon and in the afternoon under controlled conditions. A sporadic variant (tones every minute, 160 min total time) was performed once in the morning and once in the afternoon while participants followed their normal office day routine. EEG data, behavioral data, and movement data (with a gyroscope) were recorded and analyzed. The expected increased amplitude of the P3 component in response to the target tone was observed for both the rapid and the sporadic oddball. Miss rates were lower and reaction times were faster in the rapid oddball compared to the sporadic one. The movement data indicated that participants spent most of their office day at relative rest. Overall, this study demonstrated that it is feasible to study auditory perception in everyday life with long-term ear-EEG.
Subjects
Evoked Potentials, Auditory; Evoked Potentials; Acoustic Stimulation; Attention; Auditory Perception; Electroencephalography; Humans; Reaction Time
ABSTRACT
Ear-EEG allows brain activity to be recorded in everyday life, for example to study natural behaviour or unhindered social interactions. Compared to conventional scalp-EEG, ear-EEG uses fewer electrodes and covers only a small part of the head. Consequently, ear-EEG will be less sensitive to some cortical sources. Here, we perform realistic electromagnetic simulations to compare cEEGrid ear-EEG with 128-channel cap-EEG. We compute the sensitivity of ear-EEG for different cortical sources and quantify the expected signal loss of ear-EEG relative to cap-EEG. Our results show that ear-EEG is most sensitive to sources in the temporal cortex. Furthermore, we show how ear-EEG benefits from a multi-channel configuration (i.e., the cEEGrid). The pipelines presented here can be adapted to any arrangement of electrodes and can therefore provide an estimate of sensitivity to cortical regions, thereby increasing the chance of successful experiments using ear-EEG.
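The signal-loss comparison described above can be sketched numerically. The following is a minimal illustration only, not the authors' pipeline: the leadfields below are random placeholders, whereas the study derives them from realistic electromagnetic head-model simulations, and the channel counts and the `sensitivity` helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder leadfields (channels x sources). In a real analysis these
# come from a BEM/FEM head model, not from random numbers.
L_cap = rng.standard_normal((128, 5000))   # 128-channel cap-EEG
L_ear = rng.standard_normal((16, 5000))    # hypothetical 16 ear channels

def sensitivity(leadfield):
    """Per-source sensitivity: RMS of the source topography after
    common-average referencing."""
    referenced = leadfield - leadfield.mean(axis=0, keepdims=True)
    return np.sqrt((referenced ** 2).mean(axis=0))

sens_cap = sensitivity(L_cap)
sens_ear = sensitivity(L_ear)

# Expected signal loss of ear-EEG relative to cap-EEG, per source
signal_loss = 1.0 - sens_ear / sens_cap
```

With real leadfields, sources with low `signal_loss` mark the cortical regions (here, temporal cortex) where ear-EEG remains competitive with a full cap.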
Subjects
Electroencephalography; Head; Electrodes; Humans
ABSTRACT
Options for people with severe paralysis who have lost the ability to communicate orally are limited. We describe a method for communication in a patient with late-stage amyotrophic lateral sclerosis (ALS), involving a fully implanted brain-computer interface that consists of subdural electrodes placed over the motor cortex and a transmitter placed subcutaneously in the left side of the thorax. By attempting to move the hand on the side opposite the implanted electrodes, the patient accurately and independently controlled a computer typing program 28 weeks after electrode placement, at the equivalent of two letters per minute. The brain-computer interface offered autonomous communication that supplemented and at times supplanted the patient's eye-tracking device. (Funded by the Government of the Netherlands and the European Union; ClinicalTrials.gov number, NCT02224469.)
Subjects
Amyotrophic Lateral Sclerosis/rehabilitation; Aphonia/rehabilitation; Brain-Computer Interfaces; Communication Aids for Disabled; Quadriplegia/rehabilitation; Amyotrophic Lateral Sclerosis/complications; Aphonia/etiology; Electrodes, Implanted; Female; Humans; Middle Aged; Motor Cortex; Neurological Rehabilitation/instrumentation; Quadriplegia/etiology
ABSTRACT
Neural oscillations can synchronize to external rhythmic stimuli, as, for example, in speech and music. While previous studies have mainly focused on elucidating the fundamental concept of neural entrainment, less is known about the time course of entrainment. In this human electroencephalography (EEG) study, we unravel the temporal evolution of neural entrainment by contrasting short and long periods of rhythmic stimulation. Listeners had to detect short silent gaps that were systematically distributed with respect to the phase of a 3 Hz frequency-modulated tone. We found that gap detection performance was modulated by the stimulus stream, with a consistent stimulus phase across participants for short and long stimulation. Electrophysiological analysis confirmed neural entrainment effects at 3 Hz and the 6 Hz harmonic for both short and long stimulation lengths. Source-level analysis at 3 Hz revealed that longer stimulation resulted in a phase shift of participants' neural phase relative to the stimulus phase. Phase coupling increased over the first second of stimulation, but no effects on phase coupling strength were observed over time thereafter. The dynamic evolution of phase alignment suggests that the brain attunes to external rhythmic stimulation by adapting its internal representation of incoming environmental stimuli.
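The phase-locking analyses summarized above rest on standard entrainment metrics. As a hedged illustration (synthetic sensor-level data, not the authors' source-level pipeline; `itpc` is an ad-hoc helper), inter-trial phase coherence at the 3 Hz stimulation frequency can be computed like this:

```python
import numpy as np

def itpc(epochs, sfreq, freq):
    """Inter-trial phase coherence at a single frequency.
    epochs: (n_trials, n_samples) array of single-trial data."""
    n_samples = epochs.shape[1]
    t = np.arange(n_samples) / sfreq
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = epochs @ basis            # complex Fourier coefficient per trial
    phases = coeffs / np.abs(coeffs)   # unit-length phase vectors
    return np.abs(phases.mean())       # 1 = perfect phase locking, ~0 = none

# Synthetic check: trials phase-locked to a 3 Hz stimulus vs. pure noise
sfreq, freq = 250.0, 3.0
t = np.arange(int(2 * sfreq)) / sfreq
rng = np.random.default_rng(1)
locked = np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal((50, t.size))
unlocked = rng.standard_normal((50, t.size))
```

Phase-locked trials yield an ITPC near 1, while unrelated noise trials stay near 1/sqrt(n_trials).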
Subjects
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Waves/physiology; Electroencephalography Phase Synchronization/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Time Factors; Young Adult
ABSTRACT
The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalography (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for the behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at the syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
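The ASSR power analysis described here amounts to extracting narrow-band spectral power at the tagging frequencies. A minimal sketch on synthetic data (the helper, parameters and signal are illustrative, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, sfreq, freq, halfwidth=0.25):
    """Mean PSD in a narrow band around a tagging frequency."""
    f, pxx = welch(x, fs=sfreq, nperseg=int(4 * sfreq))
    mask = (f >= freq - halfwidth) & (f <= freq + halfwidth)
    return pxx[mask].mean()

# Synthetic EEG carrying a 7 Hz steady-state response embedded in noise
sfreq = 500.0
t = np.arange(int(20 * sfreq)) / sfreq
rng = np.random.default_rng(2)
eeg = 0.8 * np.sin(2 * np.pi * 7.0 * t) + rng.standard_normal(t.size)

p7 = band_power(eeg, sfreq, 7.0)   # power at the tagged frequency
p4 = band_power(eeg, sfreq, 4.0)   # power at an untagged frequency
```

Comparing such band-power values between attend and ignore conditions is the core of the ASSR attention contrast.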
Subjects
Acoustic Stimulation; Attention/physiology; Electroencephalography; Adult; Auditory Cortex/physiology; Auditory Perception; Cues (Psychology); Female; Functional Laterality/physiology; Healthy Volunteers; Humans; Male; Middle Aged; Psychomotor Performance/physiology; Space Perception/physiology; Young Adult
ABSTRACT
Electrocorticography (ECoG)-based Brain-Computer Interfaces (BCIs) have been proposed as a way to restore and replace motor function or communication in severely paralyzed people. To date, most motor-based BCIs have either focused on the sensorimotor cortex as a whole or on the primary motor cortex (M1) as a source of signals for this purpose. Still, target areas for BCI are not confined to M1, and more brain regions may provide suitable BCI control signals. A logical candidate is the primary somatosensory cortex (S1), which not only shares a similar somatotopic organization with M1, but also has been suggested to have a role beyond sensory feedback during movement execution. Here, we investigated whether four complex hand gestures, taken from the American Sign Language alphabet, can be decoded exclusively from S1 using both spatial and temporal information. For decoding, we used the signal recorded from a small patch of cortex with subdural high-density (HD) grids in five patients with intractable epilepsy. Notably, we introduce a new method of trial alignment based on the increase of the electrophysiological response, which virtually eliminates the confounding effects of systematic and non-systematic temporal differences within and between gesture executions. Results show that S1 classification scores are high (76%), similar to those obtained from M1 (74%) and the sensorimotor cortex as a whole (85%), and significantly above chance level (25%). We conclude that S1 offers characteristic spatiotemporal neuronal activation patterns that are discriminative between gestures, and that it is possible to decode gestures with high accuracy from a very small patch of cortex using subdurally implanted HD grids. The feasibility of decoding hand gestures using HD-ECoG grids encourages further investigation of implantable BCI systems for direct interaction between the brain and external devices with multiple degrees of freedom.
Subjects
Electrocorticography/methods; Gestures; Sign Language; Somatosensory Cortex/physiology; Adult; Brain Mapping; Brain-Computer Interfaces; Electrodes, Implanted; Epilepsy/surgery; Female; Gamma Rhythm; Hand; Humans; Male; Middle Aged; Motor Cortex/physiology; Wavelet Analysis; Young Adult
ABSTRACT
Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.
Subjects
Auditory Cortex/physiopathology; Cochlear Implants; Hearing Loss, Sensorineural/physiopathology; Neuronal Plasticity/physiology; Visual Cortex/physiopathology; Acoustic Stimulation; Adult; Aged; Brain Mapping/methods; Cochlear Implantation; Female; Functional Neuroimaging/methods; Hearing Loss, Sensorineural/therapy; Humans; Male; Middle Aged; Photic Stimulation; Spectroscopy, Near-Infrared/methods; Speech Perception/physiology; Visual Perception/physiology; Young Adult
ABSTRACT
Motor imagery (MI) combined with real-time electroencephalogram (EEG) feedback is a popular approach for steering brain-computer interfaces (BCI). MI BCI has been considered promising as add-on therapy to support motor recovery after stroke. Yet whether EEG neurofeedback indeed targets specific sensorimotor activation patterns cannot be unambiguously inferred from EEG alone. We combined MI EEG neurofeedback with concurrent and continuous functional magnetic resonance imaging (fMRI) to characterize the relationship between MI EEG neurofeedback and activation in cortical sensorimotor areas. EEG signals were corrected online from interfering MRI gradient and ballistocardiogram artifacts, enabling the delivery of real-time EEG feedback. Significantly enhanced task-specific brain activity during feedback compared to no feedback blocks was present in EEG and fMRI. Moreover, the contralateral MI related decrease in EEG sensorimotor rhythm amplitude correlated inversely with fMRI activation in the contralateral sensorimotor areas, whereas a lateralized fMRI pattern did not necessarily go along with a lateralized EEG pattern. Together, the findings indicate a complex relationship between MI EEG signals and sensorimotor cortical activity, whereby both are similarly modulated by EEG neurofeedback. This finding supports the potential of MI EEG neurofeedback for motor rehabilitation and helps to better understand individual differences in MI BCI performance.
Subjects
Electroencephalography/methods; Imagination/physiology; Movement; Neurofeedback; Sensorimotor Cortex/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
ABSTRACT
The role of low-frequency oscillations in language areas is not yet understood. Using ECoG in six human subjects, we studied whether different language regions show prominent power changes in a specific rhythm, in a similar manner to how the alpha rhythm shows the most prominent power changes in visual areas. Broca's area and temporal language areas were localized in individual subjects using fMRI. In these areas, the theta rhythm showed the most pronounced power changes, and theta power decreased significantly during verb generation. To better understand the role of this language-related theta decrease, we then studied the interaction between low frequencies and local neuronal activity reflected in high frequencies. Amplitude-amplitude correlations showed that theta power correlated negatively with high-frequency activity, specifically across verb generation trials. Phase-amplitude coupling showed that during control trials, high-frequency power was coupled to theta phase, but this coupling decreased significantly during verb generation trials. These results suggest a dynamic interaction between the neuronal mechanisms underlying the theta rhythm and local neuronal activity in language areas. Just as visual areas show a pronounced alpha rhythm that may reflect pulsed inhibition, language regions show a pronounced theta rhythm with highly similar features.
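The phase-amplitude coupling measure used in such analyses can be sketched with a mean-vector-length estimator on synthetic data. This is a generic Canolty-style illustration, not the authors' implementation; `pac_mvl`, the bands and the simulated signal are all assumptions for the demo.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pac_mvl(signal, sfreq, phase_band, amp_band):
    """Mean-vector-length estimate of phase-amplitude coupling."""
    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (sfreq / 2), hi / (sfreq / 2)], btype="band")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(signal, *phase_band)))
    amp = np.abs(hilbert(bandpass(signal, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic signal: high-frequency amplitude modulated by theta phase
sfreq = 500.0
t = np.arange(int(30 * sfreq)) / sfreq
rng = np.random.default_rng(3)
theta = np.sin(2 * np.pi * 6.0 * t)
noise = 0.1 * rng.standard_normal(t.size)
coupled = theta + (1.1 + theta) * np.sin(2 * np.pi * 80.0 * t) + noise
uncoupled = theta + 1.1 * np.sin(2 * np.pi * 80.0 * t) + noise

mvl_coupled = pac_mvl(coupled, sfreq, (4, 8), (60, 100))
mvl_uncoupled = pac_mvl(uncoupled, sfreq, (4, 8), (60, 100))
```

A decrease in such a coupling index between control and verb generation trials is the kind of effect the abstract reports.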
Subjects
Frontal Lobe/physiology; Language; Temporal Lobe/physiology; Theta Rhythm/physiology; Adult; Brain Mapping; Electroencephalography; Female; Humans; Magnetic Resonance Imaging; Male; Nerve Net/physiology; Young Adult
ABSTRACT
Mental calculation is a complex mental procedure involving a frontoparietal network of brain regions. Functional MRI (fMRI) studies have revealed interesting characteristics of these regions, but the precise function of some areas remains elusive. In the present study, we used electrocorticographic (ECoG) recordings to chronometrically assess the neuronal processes during mental arithmetic. A calculation task was performed during presurgical 3T fMRI scanning and subsequent ECoG monitoring. Mental calculation induced an increase in fMRI blood oxygen level dependent signal in prefrontal, parietal and lower temporo-occipital regions. The group-fMRI result was subsequently used to cluster the implanted electrodes into anatomically defined regions of interest (ROIs). We observed remarkable differences in high frequency power profiles between ROIs, some of which were closely associated with stimulus presentation and others with the response. Upon stimulus presentation, occipital areas were the first to respond, followed by parietal and frontal areas, and finally by motor areas. Notably, we demonstrate that the fMRI activation in the middle frontal gyrus/precentral gyrus is associated with two subfunctions during mental calculation. This finding reveals the significance of the temporal dynamics of neural ensembles within regions with an apparent uniform function. In conclusion, our results shed more light on the spatiotemporal aspects of brain activation during a mental calculation task, and demonstrate that the use of fMRI data to cluster ECoG electrodes is a useful approach for ECoG group analysis.
Subjects
Brain/physiology; Mathematical Concepts; Thinking/physiology; Adolescent; Adult; Brain Mapping; Cerebrovascular Circulation/physiology; Electrodes, Implanted; Electroencephalography; Epilepsy/physiopathology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Oxygen/blood; Young Adult
ABSTRACT
Decoding movements from the human cortex has been a topic of great interest for controlling an artificial limb in non-human primates and severely paralyzed people. Here we investigate the feasibility of decoding gestures from the sensorimotor cortex in humans, using 7 T fMRI. Twelve healthy volunteers performed four hand gestures from the American Sign Language alphabet. These gestures were performed in a rapid event-related design used to establish the classifier and a slow event-related design used to test the classifier. Single-trial patterns were classified using a pattern-correlation classifier. The four hand gestures could be classified with an average accuracy of 63% (range 35-95%), which was significantly above chance (25%). The hand region was, as expected, the most active region, and the optimal volume for classification was on average about 200 voxels, although this varied considerably across individuals. Importantly, classification accuracy correlated significantly with consistency of gesture execution. The results of our study demonstrate that decoding gestures from the hand region of the sensorimotor cortex using 7 T fMRI can reach very high accuracy, provided that gestures are executed in a consistent manner. Our results further indicate that the neuronal representation of hand gestures is robust and highly reproducible. Given that the most active foci were located in the hand region, and that 7 T fMRI has been shown to agree with electrocorticography, our results suggest that this confined region could serve to decode sign language gestures for intracranial brain-computer interfacing using surface grids.
Subjects
Gestures; Hand/physiology; Motor Cortex/physiology; Movement/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
ABSTRACT
Background. Mobile ear-EEG provides the opportunity to record EEG unobtrusively in everyday life. However, in real life, the EEG data quickly becomes difficult to interpret, as the neural signal is contaminated by other, non-neural signal contributions. Due to the small number of electrodes in ear-EEG devices, the interpretation of the EEG becomes even more difficult. For meaningful and reliable ear-EEG, it is crucial that the brain signals we wish to record in real life are well understood and that we make optimal use of the available electrodes. Their placement should be guided by prior knowledge about the characteristics of the signal of interest. Objective. We want to understand the signal we record with ear-EEG and make recommendations on how to optimally place a limited number of electrodes. Approach. We built a high-density ear-EEG with 31 channels spaced densely around one ear. We used it to record four auditory event-related potentials (ERPs): the mismatch negativity, the P300, the N100 and the N400. With these data, we gain an understanding of how different stages of auditory processing are reflected in ear-EEG. We investigate the electrode configurations that carry the most information and use a mass univariate ERP analysis to identify the optimal channel configuration. We additionally use a multivariate approach to investigate the added value of multi-channel recordings. Main results. We find significant condition differences for all ERPs. The different ERPs vary considerably in their spatial extent, and different electrode positions are necessary to optimally capture each component. In the multivariate analysis, we find that the investigation of the ERPs benefits strongly from multi-channel ear-EEG. Significance. Our work emphasizes the importance of a strong theoretical and practical background when building and using ear-EEG. We provide recommendations on finding the optimal electrode positions. These results will guide future research employing ear-EEG in real-life scenarios.
Subjects
Electroencephalography; Evoked Potentials; Humans; Male; Female; Electroencephalography/methods; Auditory Perception; Electrodes; Brain
ABSTRACT
Surgical personnel face various stressors in the workplace, including environmental sounds. Mobile electroencephalography (EEG) offers a promising approach for objectively measuring how individuals perceive sounds. Because surgical performance does not necessarily decrease with higher levels of distraction, EEG could help guide noise reduction strategies that are independent of performance measures. In this study, we utilized mobile EEG to explore how a realistic soundscape is perceived during simulated laparoscopic surgery. To examine the varying demands placed on personnel in different situations, we manipulated the cognitive demand during the surgical task using a memory task. To assess responses to the soundscape, we calculated event-related potentials for distinct sound events and temporal response functions for the ongoing soundscape. Although participants reported varying degrees of demand under different conditions, no significant effects were observed on surgical task performance or EEG parameters. However, changes in surgical task performance and EEG parameters over time were noted, while subjective results remained consistent over time. These findings highlight the importance of using multiple measures to fully understand the complex relationship between sound processing and cognitive demand. Furthermore, in the context of combined EEG and audio recordings in real-life scenarios, a sparse representation of the soundscape has the advantage that it can be recorded in a data-protected way compared to more detailed representations. However, it is unclear whether information gets lost with sparse representations. Our results indicate that sparse and detailed representations are equally effective in eliciting neural responses. Overall, this study marks a significant step towards objectively investigating sound processing in applied settings.
Subjects
Electroencephalography; Humans; Electroencephalography/methods; Male; Female; Adult; Laparoscopy/methods; Young Adult; Auditory Perception/physiology; Task Performance and Analysis; Acoustic Stimulation/methods; Sound; Evoked Potentials/physiology; Occupational Stress/physiopathology
ABSTRACT
The c-grid (an ear-electroencephalography array sold under the name cEEGrid) is an unobtrusive and comfortable electrode array that is affixed around the ear to record brain activity. The c-grid is suitable for use outside of the laboratory for long durations, even for a whole day. Various cognitive processes can be studied using these grids, as shown by previous research, including research beyond the lab. To record high-quality ear-EEG data, careful preparation is necessary. In this protocol, we explain the steps needed for its successful implementation. First, we show how to test the functionality of the grid prior to a recording. Second, we describe how to prepare the participant and how to fit the c-grid, which is the most important step for recording high-quality data. Third, we outline how to connect the grids to an amplifier and how to check the signal quality. In this protocol, we list best-practice recommendations and tips that make c-grid recordings successful. If researchers follow this protocol, they are comprehensively equipped for experimenting with the c-grid both in and beyond the lab.
Subjects
Amplifiers, Electronic; Electroencephalography; Humans; Electroencephalography/methods; Electrodes; Computer Systems; Brain
ABSTRACT
Introduction: As our attention is becoming a commodity that an ever-increasing number of applications are competing for, investing in modern-day tools and devices that can detect our mental states and protect them from outside interruptions holds great value. Mental fatigue and distractions are impacting our ability to focus and can cause workplace injuries. Electroencephalography (EEG) may reflect concentration, and if EEG equipment became wearable and inconspicuous, innovative brain-computer interfaces (BCI) could be developed to monitor mental load in daily life situations. The purpose of this study is to investigate the potential of EEG recorded inside and around the human ear to determine levels of attention and focus. Methods: In this study, mobile and wireless ear-EEG were concurrently recorded with conventional EEG (cap) systems to collect data during tasks related to focus: an N-back task to assess working memory and a mental arithmetic task to assess cognitive workload. The power spectral density (PSD) of the EEG signal was analyzed to isolate consistent differences between mental load conditions and classify epochs using step-wise linear discriminant analysis (swLDA). Results and discussion: Results revealed that spectral features differed statistically between levels of cognitive load for both tasks. Classification algorithms were tested on spectral features from twelve and two selected channels, for the cap and the ear-EEG. A two-channel ear-EEG model evaluated the performance of two dry in-ear electrodes specifically. Single-trial classification for both tasks revealed above chance-level accuracies for all subjects, with mean accuracies of 96% (cap-EEG) and 95% (ear-EEG) for the twelve-channel models and 76% (cap-EEG) and 74% (in-ear-EEG) for the two-channel model for the N-back task; and 82% (cap-EEG) and 85% (ear-EEG) for the twelve-channel and 70% (cap-EEG) and 69% (in-ear-EEG) for the two-channel model for the arithmetic task.
These results suggest that neural oscillations recorded with ear-EEG can be used to reliably differentiate between levels of cognitive workload and working memory, in particular when multi-channel recordings are available, and could, in the near future, be integrated into wearable devices.
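The classification approach described here (PSD features fed into a linear discriminant) can be sketched end to end. The study used step-wise LDA on real ear-EEG; the sketch below substitutes a stripped-down two-class Fisher discriminant on synthetic band-power features, so all names, bands, and data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def psd_features(epochs, sfreq, bands=((4, 8), (8, 13), (13, 30))):
    """Log band-power features per epoch (theta/alpha/beta by default)."""
    f, pxx = welch(epochs, fs=sfreq, nperseg=epochs.shape[-1])
    cols = [pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands]
    return np.log(np.column_stack(cols))

class TwoClassLDA:
    """Minimal Fisher discriminant with a pooled covariance."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        self.m0, self.m1 = X0.mean(axis=0), X1.mean(axis=0)
        pooled = np.cov(np.vstack([X0 - self.m0, X1 - self.m1]).T)
        self.w = np.linalg.solve(pooled, self.m1 - self.m0)
        self.b = -0.5 * self.w @ (self.m0 + self.m1)
        return self
    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Synthetic data: "high load" epochs carry extra alpha-band power
sfreq, n_per_class = 250.0, 60
t = np.arange(int(sfreq)) / sfreq
rng = np.random.default_rng(6)
low = rng.standard_normal((n_per_class, t.size))
high = rng.standard_normal((n_per_class, t.size)) + np.sin(2 * np.pi * 10.0 * t)
X = psd_features(np.vstack([low, high]), sfreq)
y = np.repeat([0, 1], n_per_class)

train = np.r_[0:40, 60:100]      # first 40 epochs of each class
test = np.r_[40:60, 100:120]     # held-out 20 epochs of each class
clf = TwoClassLDA().fit(X[train], y[train])
accuracy = (clf.predict(X[test]) == y[test]).mean()
```

With a clear spectral difference between conditions, even this minimal discriminant separates the classes well above the 50% chance level.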
ABSTRACT
Objective. Ear-EEG (electroencephalography) allows brain activity to be recorded using only a few electrodes located close to the ear. Ear-EEG is comfortable and easy to apply, facilitating beyond-the-lab EEG recordings in everyday life. With the unobtrusive setup, a person wearing it can blend in, allowing unhindered EEG recordings in social situations. However, compared to classical cap-EEG, only a small part of the head is covered with electrodes. Most scalp positions known from established EEG research are not covered by ear-EEG electrodes, which makes the comparison between the two approaches difficult and may hinder the transition from cap-based lab studies to ear-based beyond-the-lab studies. Approach. We here provide a reference dataset comparing ear-EEG and cap-EEG directly for four different auditory event-related potentials (ERPs): N100, MMN, P300 and N400. We show how the ERPs are reflected when using only electrodes around the ears. Main results. We find that significant condition differences for all ERP components could be recorded using only ear electrodes. The effect sizes were moderate to high at the single-subject level. The morphology and temporal evolution of signals recorded from around the ear closely resemble those from standard scalp-EEG positions. We found a reduction in effect size (signal loss) for the ear-EEG electrodes compared to cap-EEG of 21%-44%. The amount of signal loss depended on the ERP component; we observed the lowest percentage signal loss for the N400 and the highest for the N100. Our analysis further shows that no single channel position around the ear is optimal for recording all ERP components or all participants, speaking in favor of multi-channel ear-EEG solutions. Significance. Our study provides reference results for future studies employing ear-EEG.
Subjects
Electroencephalography; Evoked Potentials; Ear; Electrodes; Electroencephalography/methods; Female; Humans; Male; Scalp
ABSTRACT
With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating a high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
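The core ERP computation described here (epoching the EEG around detected sound onsets, baseline-correcting, and averaging) can be sketched as follows. This uses synthetic data and an ad-hoc `erp` helper, not the AFEx/Record-a pipeline itself.

```python
import numpy as np

def erp(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Average single-channel EEG epochs time-locked to sound onsets,
    baseline-corrected with the pre-stimulus interval."""
    i0, i1 = int(tmin * sfreq), int(tmax * sfreq)   # i0 is negative
    epochs = []
    for on in onsets:
        s = int(round(on * sfreq))
        if s + i0 >= 0 and s + i1 <= eeg.size:
            ep = eeg[s + i0 : s + i1]
            epochs.append(ep - ep[:-i0].mean())     # subtract pre-stimulus mean
    return np.mean(epochs, axis=0)

# Synthetic check: a stereotyped deflection after each onset emerges in the ERP
sfreq = 250.0
rng = np.random.default_rng(4)
eeg = rng.standard_normal(int(60 * sfreq))
onsets = np.arange(1.0, 58.0, 1.5)                  # onset times in seconds
resp = np.hanning(int(0.2 * sfreq))                 # 200 ms "evoked" response
for on in onsets:
    s = int(round(on * sfreq))
    eeg[s : s + resp.size] += resp

avg = erp(eeg, onsets, sfreq)
```

Averaging over the ~38 onsets suppresses the background noise by roughly the square root of the trial count, which is why the evoked deflection becomes visible in `avg`.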
ABSTRACT
Introduction: In demanding work situations (e.g., during a surgery), the processing of complex soundscapes varies over time and can be a burden for medical personnel. Here we study, using mobile electroencephalography (EEG), how humans process workplace-related soundscapes while performing a complex audio-visual-motor task (3D Tetris). Specifically, we wanted to know how the attentional focus changes the processing of the soundscape as a whole. Method: Participants played a game of 3D Tetris in which they had to use both hands to control falling blocks. At the same time, participants listened to a complex soundscape, similar to what is found in an operating room (i.e., the sound of machinery, people talking in the background, alarm sounds, and instructions). In this within-subject design, participants had to react to instructions (e.g., "place the next block in the upper left corner") and to sounds depending on the experimental condition: either to a specific alarm sound originating from a fixed location or to a beep sound that originated from varying locations. Attention to the alarm reflected a narrow attentional focus, as it was easy to detect and most of the soundscape could be ignored. Attention to the beep reflected a wide attentional focus, as it required the participants to monitor multiple different sound streams. Results and discussion: Results show the robustness of the N1 and P3 event-related potential responses during this dynamic task with a complex auditory soundscape. Furthermore, we used temporal response functions to study auditory processing of the whole soundscape. This work is a step toward studying workplace-related sound processing in the operating room using mobile EEG.
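Temporal response functions, mentioned in this and the surgical-soundscape abstract, are commonly estimated by regularized regression of the EEG on a time-lagged stimulus representation. A generic forward-model sketch on synthetic data follows; `fit_trf` and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_trf(stimulus, eeg, sfreq, tmax=0.4, lam=1.0):
    """Forward TRF via ridge regression on a lagged stimulus matrix."""
    n_lags = int(tmax * sfreq)
    n = stimulus.size
    X = np.zeros((n, n_lags))
    for k in range(n_lags):               # column k = stimulus delayed by k samples
        X[k:, k] = stimulus[: n - k]
    XtX = X.T @ X + lam * np.eye(n_lags)  # ridge-regularized normal equations
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic check: recover a known response kernel from sparse sound onsets
sfreq = 100.0
rng = np.random.default_rng(5)
stim = (rng.random(int(120 * sfreq)) < 0.01).astype(float)   # sparse onset train
kernel = np.hanning(20)                   # assumed 200 ms "true" response
eeg = np.convolve(stim, kernel)[: stim.size] + 0.1 * rng.standard_normal(stim.size)

trf = fit_trf(stim, eeg, sfreq)
```

When the stimulus is sparse and the noise moderate, the estimated TRF closely reproduces the response kernel that generated the data.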
ABSTRACT
OBJECTIVE: Brain-computer interfaces (BCIs) translate deliberate intentions and associated changes in brain activity into action, thereby offering patients with severe paralysis an alternative means of communication with and control over their environment. Such systems are not available yet, partly due to the high performance standard that is required. A major challenge in the development of implantable BCIs is to identify cortical regions and related functions that an individual can reliably and consciously manipulate. Research predominantly focuses on the sensorimotor cortex, which can be activated by imagining motor actions. However, because this region may not provide an optimal solution to all patients, other neuronal networks need to be examined. Therefore, we investigated whether the cognitive control network can be used for BCI purposes. We also determined the feasibility of using functional magnetic resonance imaging (fMRI) for noninvasive localization of the cognitive control network. METHODS: Three patients with intractable epilepsy, who were temporarily implanted with subdural grid electrodes for diagnostic purposes, attempted to gain BCI control using the electrocorticographic (ECoG) signal of the left dorsolateral prefrontal cortex (DLPFC). RESULTS: All subjects quickly gained accurate BCI control by modulation of gamma-power of the left DLPFC. Prelocalization of the relevant region was performed with fMRI and was confirmed using the ECoG signals obtained during mental calculation localizer tasks. INTERPRETATION: The results indicate that the cognitive control network is a suitable source of signals for BCI applications. They also demonstrate the feasibility of translating understanding about cognitive networks derived from functional neuroimaging into clinical applications.