1.
J Neural Eng ; 21(1)2024 01 09.
Article in English | MEDLINE | ID: mdl-38118173

ABSTRACT

Background. Mobile ear-EEG provides the opportunity to record EEG unobtrusively in everyday life. However, in real life, the EEG data quickly become difficult to interpret, as the neural signal is contaminated by other, non-neural signal contributions. Due to the small number of electrodes in ear-EEG devices, the interpretation of the EEG becomes even more difficult. For meaningful and reliable ear-EEG, it is crucial that the brain signals we wish to record in real life are well understood and that we make optimal use of the available electrodes. Their placement should be guided by prior knowledge about the characteristics of the signal of interest. Objective. We want to understand the signal we record with ear-EEG and make recommendations on how to optimally place a limited number of electrodes. Approach. We built a high-density ear-EEG with 31 channels spaced densely around one ear. We used it to record four auditory event-related potentials (ERPs): the mismatch negativity, the P300, the N100 and the N400. With these data, we gain an understanding of how different stages of auditory processing are reflected in ear-EEG. We investigate the electrode configurations that carry the most information and use a mass univariate ERP analysis to identify the optimal channel configuration. We additionally use a multivariate approach to investigate the added value of multi-channel recordings. Main results. We find significant condition differences for all ERPs. The different ERPs vary considerably in their spatial extent, and different electrode positions are necessary to optimally capture each component. In the multivariate analysis, we find that the investigation of the ERPs benefits strongly from multi-channel ear-EEG. Significance. Our work emphasizes the importance of a strong theoretical and practical background when building and using ear-EEG. We provide recommendations on finding the optimal electrode positions. These results will guide future research employing ear-EEG in real-life scenarios.


Subjects
Electroencephalography, Evoked Potentials, Humans, Male, Female, Electroencephalography/methods, Auditory Perception, Electrodes, Brain
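
A minimal sketch of what a mass univariate ERP contrast of this kind could look like, assuming subject-level ERP averages for two conditions in a (subjects × channels × time) array; the FDR correction and the simulated data are illustrative choices, not the authors' pipeline:

```python
# Mass-univariate ERP contrast: paired t-tests at every channel x time
# point, corrected for multiple comparisons (Benjamini-Hochberg FDR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_channels, n_times = 20, 31, 200
cond_a = rng.normal(size=(n_subjects, n_channels, n_times))
cond_b = cond_a + 0.3 * rng.normal(size=cond_a.shape)  # toy condition effect

# paired t-test at every channel x time point
t_vals, p_vals = stats.ttest_rel(cond_a, cond_b, axis=0)

# Benjamini-Hochberg FDR correction across all tests
p_flat = p_vals.ravel()
order = np.argsort(p_flat)
crit = 0.05 * np.arange(1, p_flat.size + 1) / p_flat.size
below = p_flat[order] <= crit
significant = np.zeros(p_flat.size, dtype=bool)
if below.any():
    significant[order[: below.nonzero()[0].max() + 1]] = True
significant = significant.reshape(p_vals.shape)

# rank channels by how many significant time points they carry,
# a rough proxy for the most informative electrode positions
print(np.argsort(significant.sum(axis=1))[::-1][:5])
```
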
2.
Front Neurosci ; 17: 895094, 2023.
Article in English | MEDLINE | ID: mdl-37829725

ABSTRACT

Introduction: As our attention is becoming a commodity that an ever-increasing number of applications are competing for, investing in modern-day tools and devices that can detect our mental states and protect them from outside interruptions holds great value. Mental fatigue and distractions are impacting our ability to focus and can cause workplace injuries. Electroencephalography (EEG) may reflect concentration, and if EEG equipment became wearable and inconspicuous, innovative brain-computer interfaces (BCI) could be developed to monitor mental load in daily life situations. The purpose of this study is to investigate the potential of EEG recorded inside and around the human ear to determine levels of attention and focus. Methods: In this study, mobile and wireless ear-EEG were concurrently recorded with conventional EEG (cap) systems to collect data during tasks related to focus: an N-back task to assess working memory and a mental arithmetic task to assess cognitive workload. The power spectral density (PSD) of the EEG signal was analyzed to isolate consistent differences between mental load conditions and classify epochs using step-wise linear discriminant analysis (swLDA). Results and discussion: Results revealed that spectral features differed statistically between levels of cognitive load for both tasks. Classification algorithms were tested on spectral features from twelve and two selected channels, for the cap and the ear-EEG. A two-channel ear-EEG model specifically evaluated the performance of two dry in-ear electrodes. Single-trial classification for both tasks revealed above chance-level accuracies for all subjects, with mean accuracies of 96% (cap-EEG) and 95% (ear-EEG) for the twelve-channel models and 76% (cap-EEG) and 74% (in-ear-EEG) for the two-channel model for the N-back task; and 82% (cap-EEG) and 85% (ear-EEG) for the twelve-channel models and 70% (cap-EEG) and 69% (in-ear-EEG) for the two-channel model for the arithmetic task. These results suggest that neural oscillations recorded with ear-EEG can be used to reliably differentiate between levels of cognitive workload and working memory, in particular when multi-channel recordings are available, and could, in the near future, be integrated into wearable devices.
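
The general recipe described here (band-power features from the PSD fed into a linear discriminant classifier) can be sketched as follows; channel count, frequency bands and the plain (non-stepwise) LDA are illustrative assumptions, not the authors' exact swLDA pipeline:

```python
# Band-power features from Welch PSDs, classified with LDA.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
# epochs: (n_epochs, n_channels, n_samples); labels: low vs. high load
epochs = rng.normal(size=(120, 2, 2 * fs))
labels = rng.integers(0, 2, size=120)

bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta

def band_power_features(epoch):
    freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # one value per channel
    return np.concatenate(feats)

X = np.array([band_power_features(e) for e in epochs])
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, labels, cv=5).mean())
```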

3.
Eur J Neurosci ; 58(7): 3671-3685, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37612776

ABSTRACT

In everyday life, people differ in their sound perception and thus sound processing. Some people may be distracted by construction noise, while others do not even notice it. With smartphone-based mobile ear-electroencephalography (ear-EEG), we can measure and quantify sound processing in everyday life by analysing presented sounds and also naturally occurring ones. Twenty-four participants completed four controlled conditions in the lab (1 h) and one condition in the office (3 h). All conditions used the same paired-click stimuli. In the lab, participants listened to click tones under four different instructions: no task towards the sounds, reading a newspaper article, listening to an audio article, or counting a rare deviant sound. In the office recording, participants followed their daily activities while they were sporadically presented with clicks, without any further instruction. In the beyond-the-lab condition, in addition to the presented sounds, environmental sounds were recorded as acoustic features (i.e., loudness, power spectral density and sound onsets). We found task-dependent differences in the auditory event-related potentials (ERPs) to the presented click sounds in all lab conditions, which underline that neural processes related to auditory attention can be differentiated with ear-EEG. In the beyond-the-lab condition, we found ERPs comparable to some of the lab conditions. The N1 amplitude to the click sounds beyond the lab depended on the background noise, probably due to energetic masking. Contrary to our expectation, we did not find a clear ERP in response to the environmental sounds. Overall, we showed that smartphone-based ear-EEG can be used to study sound processing of well-defined stimuli in everyday life.
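
For illustration, the acoustic features named above (RMS loudness, PSD, sound onsets) could be computed from an audio buffer roughly as follows; frame length and onset threshold are arbitrary assumptions:

```python
# Frame-wise RMS, PSD and a crude onset detector on a mono audio buffer.
import numpy as np
from scipy.signal import welch

fs = 16_000                       # audio sampling rate (assumed)
rng = np.random.default_rng(2)
audio = rng.normal(size=fs * 10)  # 10 s of placeholder audio

frame = int(0.125 * fs)           # 125 ms analysis frames
n_frames = len(audio) // frame
frames = audio[: n_frames * frame].reshape(n_frames, frame)

rms = np.sqrt((frames ** 2).mean(axis=1))            # loudness proxy
freqs, psd = welch(frames, fs=fs, nperseg=frame, axis=-1)

# frames whose RMS jumps well above the previous frame count as onsets
onset_frames = np.where(rms[1:] > 2.0 * rms[:-1])[0] + 1
onset_times = onset_frames * frame / fs
print(rms.shape, psd.shape, onset_times[:5])
```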

4.
J Vis Exp ; (193)2023 03 31.
Article in English | MEDLINE | ID: mdl-37067277

ABSTRACT

The c-grid (ear-electroencephalography, sold under the name cEEGrid) is an unobtrusive and comfortable electrode array that can be used for investigating brain activity after it has been affixed around the ear. The c-grid is suitable for use outside of the laboratory for long durations, even for the whole day. Various cognitive processes can be studied using these grids, as shown by previous research, including research beyond the lab. To record high-quality ear-EEG data, careful preparation is necessary. In this protocol, we explain the steps needed for its successful implementation. First, we show how to test the functionality of the grid prior to a recording. Second, we describe how to prepare the participant and how to fit the c-grid, which is the most important step for recording high-quality data. Third, we outline how to connect the grids to an amplifier and how to check the signal quality. In this protocol, we list best-practice recommendations and tips that make c-grid recordings successful. Researchers who follow this protocol will be well equipped to experiment with the c-grid both in and beyond the lab.


Subjects
Electronic Amplifiers, Electroencephalography, Humans, Electroencephalography/methods, Electrodes, Computer Systems, Brain
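
As a rough illustration of the kind of signal-quality check mentioned in the protocol, one could flag channels with abnormal variance or strong 50 Hz line noise; the thresholds below are assumptions, not values from the protocol:

```python
# Flag channels with suspicious variance or dominant 50 Hz line noise.
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(3)
eeg = rng.normal(size=(16, 30 * fs))      # (n_channels, n_samples), 30 s

variance = eeg.var(axis=1)
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
line_mask = (freqs >= 49) & (freqs <= 51)
broad_mask = (freqs >= 1) & (freqs <= 40)
line_ratio = psd[:, line_mask].mean(axis=1) / psd[:, broad_mask].mean(axis=1)

bad = (variance < 0.1 * np.median(variance)) | \
      (variance > 10 * np.median(variance)) | \
      (line_ratio > 5.0)
print("channels to re-fit:", np.where(bad)[0])
```
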
5.
J Neural Eng ; 19(2)2022 04 15.
Article in English | MEDLINE | ID: mdl-35316801

ABSTRACT

Objective. Ear-EEG (electroencephalography) allows brain activity to be recorded using only a few electrodes located close to the ear. Ear-EEG is comfortable and easy to apply, facilitating beyond-the-lab EEG recordings in everyday life. With the unobtrusive setup, a person wearing it can blend in, allowing unhindered EEG recordings in social situations. However, compared to classical cap-EEG, only a small part of the head is covered with electrodes. Most scalp positions known from established EEG research are not covered by ear-EEG electrodes, which makes the comparison between the two approaches difficult and might hinder the transition from cap-based lab studies to ear-based beyond-the-lab studies. Approach. We here provide a reference data set comparing ear-EEG and cap-EEG directly for four different auditory event-related potentials (ERPs): N100, MMN, P300 and N400. We show how the ERPs are reflected when using only electrodes around the ears. Main results. We find that significant condition differences for all ERP components could be recorded using only ear electrodes. The effect sizes were moderate to high on the single-subject level. The morphology and temporal evolution of signals recorded from around the ear closely resemble those from standard scalp-EEG positions. We found a reduction in effect size (signal loss) of 21%-44% for the ear-EEG electrodes compared to cap-EEG. The amount of signal loss depended on the ERP component; we observed the lowest percentage signal loss for the N400 and the highest for the N100. Our analysis further shows that no single channel position around the ear is optimal for recording all ERP components or all participants, speaking in favor of multi-channel ear-EEG solutions. Significance. Our study provides reference results for future studies employing ear-EEG.


Subjects
Electroencephalography, Evoked Potentials, Ear, Electrodes, Electroencephalography/methods, Female, Humans, Male, Scalp
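
A toy sketch of how such a signal-loss figure could be derived, using Cohen's d per channel and the per cent reduction of the best ear channel relative to the best cap channel; array shapes, simulated data and the exact effect-size definition are assumptions:

```python
# Per-channel effect size for a condition difference, then relative loss.
import numpy as np

rng = np.random.default_rng(4)
n_subj = 20

def cohens_d(a, b):
    diff = a - b
    return diff.mean(axis=0) / diff.std(axis=0, ddof=1)

# mean ERP amplitude in a component window, per subject and channel
cap_a, cap_b = rng.normal(1.0, 1, (n_subj, 64)), rng.normal(0, 1, (n_subj, 64))
ear_a, ear_b = rng.normal(0.6, 1, (n_subj, 16)), rng.normal(0, 1, (n_subj, 16))

d_cap = np.abs(cohens_d(cap_a, cap_b)).max()   # best cap channel
d_ear = np.abs(cohens_d(ear_a, ear_b)).max()   # best ear channel
signal_loss = 100 * (1 - d_ear / d_cap)
print(f"signal loss: {signal_loss:.1f}%")
```
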
6.
Front Neuroergon ; 3: 1062227, 2022.
Article in English | MEDLINE | ID: mdl-38235454

ABSTRACT

Introduction: In demanding work situations (e.g., during a surgery), the processing of complex soundscapes varies over time and can be a burden for medical personnel. Here we study, using mobile electroencephalography (EEG), how humans process workplace-related soundscapes while performing a complex audio-visual-motor task (3D Tetris). Specifically, we wanted to know how the attentional focus changes the processing of the soundscape as a whole. Method: Participants played a game of 3D Tetris in which they had to use both hands to control falling blocks. At the same time, participants listened to a complex soundscape, similar to what is found in an operating room (i.e., the sound of machinery, people talking in the background, alarm sounds, and instructions). In this within-subject design, participants had to react to instructions (e.g., "place the next block in the upper left corner") and to sounds depending on the experimental condition, either to a specific alarm sound originating from a fixed location or to a beep sound that originated from varying locations. Attention to the alarm reflected a narrow attentional focus, as it was easy to detect and most of the soundscape could be ignored. Attention to the beep reflected a wide attentional focus, as it required the participants to monitor multiple different sound streams. Results and discussion: Results show the robustness of the N1 and P3 event-related potential responses during this dynamic task with a complex auditory soundscape. Furthermore, we used temporal response functions to study auditory processing of the whole soundscape. This work is a step toward studying workplace-related sound processing in the operating room using mobile EEG.
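
A minimal temporal response function (TRF) estimate of the kind mentioned above can be obtained with ridge regression from the time-lagged sound envelope to an EEG channel; the lag range, regularization and simulated data are illustrative, not the authors' implementation:

```python
# Forward TRF: ridge regression from lagged envelope to one EEG channel.
import numpy as np

fs = 100
rng = np.random.default_rng(5)
n = 60 * fs
envelope = np.abs(rng.normal(size=n))           # stimulus envelope
eeg = rng.normal(size=n)                        # one EEG channel

lags = np.arange(0, int(0.4 * fs))              # 0-400 ms lags
X = np.column_stack([np.roll(envelope, lag) for lag in lags])
X[:lags.max()] = 0                              # drop wrapped-around samples

lam = 1.0                                       # ridge parameter
trf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
prediction = X @ trf                            # EEG predicted from the sound
print(np.corrcoef(prediction, eeg)[0, 1])
```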

7.
Front Neurogenom ; 3: 793061, 2022.
Article in English | MEDLINE | ID: mdl-38235458

ABSTRACT

With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating a high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
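
A sketch of the core step of relating detected sound onsets to EEG, i.e. cutting epochs around each onset, baseline-correcting and averaging to obtain an onset ERP; all timing values and data are placeholders:

```python
# Epoch the EEG around sound onsets and average to an onset ERP.
import numpy as np

fs = 250
rng = np.random.default_rng(6)
eeg = rng.normal(size=(2, 300 * fs))                 # (channels, samples)
onset_samples = np.arange(5 * fs, 295 * fs, 3 * fs)  # detected sound onsets

pre, post = int(0.2 * fs), int(0.8 * fs)             # -200 ms to +800 ms
epochs = np.stack([eeg[:, s - pre: s + post] for s in onset_samples])
baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
erp = (epochs - baseline).mean(axis=0)               # (channels, samples)
times_ms = (np.arange(-pre, post) / fs) * 1000
print(erp.shape, times_ms[0], times_ms[-1])
```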

8.
Front Digit Health ; 3: 688122, 2021.
Article in English | MEDLINE | ID: mdl-34713159

ABSTRACT

A comfortable, discreet and robust recording of the sleep EEG signal at home is a desirable goal but has been difficult to achieve. We investigate how well flex-printed electrodes are suited for sleep monitoring in a smartphone-based home environment. The cEEGrid ear-EEG sensor has already been tested in the laboratory for measuring night sleep. Here, 10 participants slept at home and were equipped with a cEEGrid and a portable amplifier (mBrainTrain, Serbia). In addition, the EEG of Fpz, EOG_L and EOG_R was recorded. All signals were recorded wirelessly with a smartphone. On average, each participant provided data for M = 7.48 h. An expert sleep scorer twice created hypnograms and annotated grapho-elements according to AASM criteria based on the EEG of Fpz, EOG_L and EOG_R; the agreement between these two scorings served as the baseline for further comparisons. The expert scorer also created hypnograms using bipolar channels based on combinations of cEEGrid channels only, and bipolar cEEGrid channels complemented by EOG channels. A comparison of the hypnograms based on frontal electrodes with the ones based on cEEGrid electrodes (κ = 0.67) and the ones based on cEEGrid complemented by EOG channels (κ = 0.75) showed a substantial agreement in both cases, with the combination including EOG channels showing a significantly better outcome than the one without (p = 0.006). Moreover, signal excerpts of the conventional channels containing grapho-elements were correlated with those of the cEEGrid in order to determine the cEEGrid channel combination that optimally represents the annotated grapho-elements. The results show that the grapho-elements were well represented by the front-facing electrode combinations. The correlation analysis of the grapho-elements resulted in an average correlation coefficient of 0.65 for the most suitable electrode configuration of the cEEGrid. The results confirm that sleep stages can be identified with electrodes placed around the ear. This opens up opportunities for miniaturized ear-EEG systems that may be self-applied by users.
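
The agreement measure used here, Cohen's kappa between two hypnograms (one label per 30 s epoch), can be computed as in the following sketch with placeholder labels:

```python
# Cohen's kappa between a reference and an ear-EEG-based hypnogram.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
stages = ["W", "N1", "N2", "N3", "REM"]
reference = rng.choice(stages, size=900)        # ~7.5 h of 30 s epochs
ear_based = reference.copy()
flip = rng.random(900) < 0.2                    # simulate 20% disagreement
ear_based[flip] = rng.choice(stages, size=flip.sum())

print(f"kappa = {cohen_kappa_score(reference, ear_based):.2f}")
```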

9.
Front Hum Neurosci ; 15: 717810, 2021.
Article in English | MEDLINE | ID: mdl-34588966

ABSTRACT

Interpersonal synchrony refers to the temporal coordination of actions between individuals and is a common feature of social behaviors, from team sport to ensemble music performance. Interpersonal synchrony of many rhythmic (periodic) behaviors displays dynamics of coupled biological oscillators. The current study addresses oscillatory dynamics on the levels of brain and behavior between music duet partners performing at spontaneous (uncued) rates. Wireless EEG was measured from N = 20 pairs of pianists as they performed a melody first in Solo performance (at their spontaneous rate of performance), and then in Duet performances at each partner's spontaneous rate. Influences of partners' spontaneous rates on interpersonal synchrony were assessed by correlating differences in partners' spontaneous rates of Solo performance with Duet tone onset asynchronies. Coupling between partners' neural oscillations was assessed by correlating amplitude envelope fluctuations of cortical oscillations at the Duet performance frequency between observed partners and between surrogate (re-paired) partners, who performed the same melody but at different times. Duet synchronization was influenced by partners' spontaneous rates in Solo performance. The size and direction of the difference in partners' spontaneous rates were mirrored in the size and direction of the Duet asynchronies. Moreover, observed Duet partners showed greater inter-brain correlations of oscillatory amplitude fluctuations than did surrogate partners, suggesting that performing in synchrony with a musical partner is reflected in coupled cortical dynamics at the performance frequency. The current study provides evidence that dynamics of oscillator coupling are reflected in both behavioral and neural measures of temporal coordination during musical joint action.
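
A sketch of the inter-brain coupling measure described above: band-limited amplitude envelopes (Hilbert transform) correlated between real partners and between surrogate partners; the frequency band and signals are placeholders:

```python
# Amplitude-envelope correlation for real vs. surrogate pairs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
rng = np.random.default_rng(8)

def envelope(x, lo=1.5, hi=2.5):   # band around the performance rate (assumed)
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

pianist_a = rng.normal(size=60 * fs)
pianist_b = pianist_a * 0.3 + rng.normal(size=60 * fs)   # weakly coupled partner
surrogate = rng.normal(size=60 * fs)                      # re-paired recording

env_a, env_b, env_s = envelope(pianist_a), envelope(pianist_b), envelope(surrogate)
print("real pair:     ", np.corrcoef(env_a, env_b)[0, 1])
print("surrogate pair:", np.corrcoef(env_a, env_s)[0, 1])
```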

10.
Behav Res Methods ; 53(5): 2025-2036, 2021 10.
Article in English | MEDLINE | ID: mdl-33721208

ABSTRACT

Most research investigating auditory perception is conducted in controlled laboratory settings, potentially restricting its generalizability to the complex acoustic environment outside the lab. The present study, in contrast, investigated auditory attention with long-term recordings (> 6 h) beyond the lab using a fully mobile, smartphone-based ear-centered electroencephalography (EEG) setup with minimal restrictions for participants. Twelve participants completed iterations of two variants of an oddball task where they had to react to target tones and to ignore standard tones. A rapid variant of the task (tones every 2 s, 5 min total time) was performed seated and with full focus in the morning, around noon and in the afternoon under controlled conditions. A sporadic variant (tones every minute, 160 min total time) was performed once in the morning and once in the afternoon while participants followed their normal office day routine. EEG data, behavioral data, and movement data (with a gyroscope) were recorded and analyzed. The expected increased amplitude of the P3 component in response to the target tone was observed for both the rapid and the sporadic oddball. Miss rates were lower and reaction times were faster in the rapid oddball compared to the sporadic one. The movement data indicated that participants spent most of their office day at relative rest. Overall, this study demonstrated that it is feasible to study auditory perception in everyday life with long-term ear-EEG.


Subjects
Auditory Evoked Potentials, Evoked Potentials, Acoustic Stimulation, Attention, Auditory Perception, Electroencephalography, Humans, Reaction Time
11.
Mind Brain Educ ; 15(4): 354-370, 2021 Nov.
Article in English | MEDLINE | ID: mdl-35875415

ABSTRACT

As the field of educational neuroscience continues to grow, questions have emerged regarding the ecological validity and applicability of this research to educational practice. Recent advances in mobile neuroimaging technologies have made it possible to conduct neuroscientific studies directly in naturalistic learning environments. We propose that embedding mobile neuroimaging research in a cycle (Matusz, Dikker, Huth, & Perrodin, 2019), involving lab-based, seminaturalistic, and fully naturalistic experiments, is well suited for addressing educational questions. With this review, we take a cautious approach, discussing the valuable insights that can be gained from mobile neuroimaging technology, including electroencephalography and functional near-infrared spectroscopy, as well as the challenges posed by bringing neuroscientific methods into the classroom. Research paradigms used alongside mobile neuroimaging technology vary considerably. To illustrate this point, studies are discussed with increasingly naturalistic designs. We conclude with several ethical considerations that should be taken into account in this unique area of research.

12.
Brain Topogr ; 33(6): 665-676, 2020 11.
Article in English | MEDLINE | ID: mdl-32833181

ABSTRACT

Ear-EEG allows brain activity to be recorded in everyday life, for example to study natural behaviour or unhindered social interactions. Compared to conventional scalp-EEG, ear-EEG uses fewer electrodes and covers only a small part of the head. Consequently, ear-EEG will be less sensitive to some cortical sources. Here, we perform realistic electromagnetic simulations to compare cEEGrid ear-EEG with 128-channel cap-EEG. We compute the sensitivity of ear-EEG for different cortical sources and quantify the expected signal loss of ear-EEG relative to cap-EEG. Our results show that ear-EEG is most sensitive to sources in the temporal cortex. Furthermore, we show how ear-EEG benefits from a multi-channel configuration (i.e. cEEGrid). The pipelines presented here can be adapted to any arrangement of electrodes and can therefore provide an estimate of sensitivity to cortical regions, thereby increasing the chance of successful experiments using ear-EEG.


Subjects
Electroencephalography, Head, Electrodes, Humans
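
A toy illustration of such a sensitivity comparison, given a leadfield matrix (channels × sources): here the leadfield is random and the ear montage is taken as a subset of cap channels purely for illustration; in practice both would come from a realistic head model:

```python
# Relative sensitivity of an ear montage vs. a full cap from a leadfield.
import numpy as np

rng = np.random.default_rng(9)
n_cap, n_ear, n_sources = 128, 16, 5000
leadfield_cap = rng.normal(size=(n_cap, n_sources))            # placeholder
leadfield_ear = leadfield_cap[rng.choice(n_cap, n_ear, replace=False)]

def sensitivity(L):
    # root-mean-square signal across channels for a unit dipole at each source
    return np.sqrt((L ** 2).mean(axis=0))

rel_sensitivity = sensitivity(leadfield_ear) / sensitivity(leadfield_cap)
print("median relative sensitivity:", np.median(rel_sensitivity))
```
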
13.
Front Neurosci ; 14: 603, 2020.
Article in English | MEDLINE | ID: mdl-32612507

ABSTRACT

Listeners differ in their ability to attend to a speech stream in the presence of a competing sound. Differences in speech intelligibility in noise cannot be fully explained by hearing ability, which suggests the involvement of additional cognitive factors. A better understanding of the temporal fluctuations in the ability to pay selective auditory attention to a desired speech stream may help in explaining these variabilities. In order to better understand the temporal dynamics of selective auditory attention, we developed an online auditory attention decoding (AAD) processing pipeline based on speech envelope tracking in the electroencephalogram (EEG). Participants had to attend to one audiobook story while a second one had to be ignored. Online AAD was applied to track the attention toward the target speech signal. Individual temporal attention profiles were computed by combining an established AAD method with an adaptive staircase procedure. The individual decoding performance over time was analyzed and linked to behavioral performance as well as subjective ratings of listening effort, motivation, and fatigue. The grand average attended speaker decoding profile derived in the online experiment indicated performance above chance level. Parameters describing the individual AAD performance in each testing block indicated that significant differences in decoding performance over time were closely related to the behavioral performance in the selective listening task. Further, an exploratory analysis indicated that subjects with poor decoding performance reported higher listening effort and fatigue compared to good performers. Taken together, our results show that online EEG-based AAD in a complex listening situation is feasible. Adaptive attended speaker decoding profiles over time could be used as an objective measure of behavioral performance and listening effort. The developed online processing pipeline could also serve as a basis for future EEG-based near-real-time auditory neurofeedback systems.
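
A minimal sketch of envelope-based attention decoding with a backward model: ridge regression maps time-lagged EEG to the attended envelope, and attention is decoded by comparing correlations with the attended versus the ignored envelope; data and parameters are placeholders, and no claim is made about the authors' exact method:

```python
# Backward-model stimulus reconstruction for attention decoding.
import numpy as np

fs = 64
rng = np.random.default_rng(10)
n, n_ch = 60 * fs, 16
attended = np.abs(rng.normal(size=n))
ignored = np.abs(rng.normal(size=n))
eeg = 0.5 * attended[None, :] * rng.normal(size=(n_ch, 1)) + rng.normal(size=(n_ch, n))

lags = np.arange(0, int(0.25 * fs))                            # 0-250 ms
X = np.hstack([np.roll(eeg, -lag, axis=1).T for lag in lags])  # (n, n_ch * n_lags)
X[-lags.max():] = 0                                            # drop wrapped samples

lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ attended)
reconstruction = X @ w
r_att = np.corrcoef(reconstruction, attended)[0, 1]
r_ign = np.corrcoef(reconstruction, ignored)[0, 1]
print("decoded attended stream:", r_att > r_ign)
```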

14.
PLoS One ; 15(6): e0235083, 2020.
Article in English | MEDLINE | ID: mdl-32579618

ABSTRACT

Cognitive flexibility is the ability to switch between different concepts or to adapt goal-directed behavior in a changing environment. Although cognitive research on this ability has long focused on the individual mind, it is becoming increasingly clear that cognitive flexibility plays a central role in our social life. This is particularly evident in turn-taking in verbal conversation, where the cognitive flexibility of the individual becomes part of social flexibility in the dyadic interaction. In this work, we introduce a model that reveals different parameters that explain how people flexibly handle unexpected events in verbal conversation. In order to study hypotheses derived from the model, we used a novel experimental approach in which thirty pairs of participants engaged in a word-by-word interaction by taking turns in generating sentences word by word. Similar to well-established individual cognitive tasks, participants needed to adapt their behavior in order to respond to their co-actor's last utterance. With our experimental approach we could manipulate the interaction between participants: either both participants had to construct a sentence with a common target word (congruent condition) or with distinct target words (incongruent condition). We further studied the relation between the interactive Word-by-Word task measures and classical individual-centered cognitive tasks, namely the Number-Letter task, the Stop-Signal task, and the GoNogo task. In the Word-by-Word task, we found that participants had faster response times in congruent compared to incongruent trials, which replicates the primary findings of standard cognitive tasks measuring cognitive flexibility. Further, we found a significant correlation between the performance in the Word-by-Word task and the Stop-Signal task, indicating that participants with high cognitive flexibility in the Word-by-Word task also showed high inhibition control.


Subjects
Cognition/physiology, Psychomotor Performance/physiology, Reaction Time/physiology, Speech Intelligibility/physiology, Adult, Communication, Female, Humans, Male, Neuropsychological Tests, Speech Perception/physiology, Young Adult
15.
Sci Rep ; 10(1): 5460, 2020 03 25.
Article in English | MEDLINE | ID: mdl-32214133

ABSTRACT

Our aim in the present study is to measure neural correlates during spontaneous interactive sentence production. We present a novel approach using the word-by-word technique from improvisational theatre, in which two speakers jointly produce one sentence. This paradigm allows the assessment of behavioural aspects, such as turn times, and electrophysiological responses, such as event-related potentials (ERPs). Twenty-five participants constructed a cued but spontaneous four-word German sentence together with a confederate, taking turns for each word of the sentence. In 30% of the trials, the confederate uttered an unexpected gender-marked article. To complete the sentence in a meaningful way, the participant had to detect the violation and retrieve and utter a new fitting response. We found significant increases in response times after unexpected words and, despite allowing unscripted language production and naturally varying speech material, successfully detected significant N400 and P600 ERP effects for the unexpected word. The N400 EEG activity further significantly predicted the response time of the subsequent turn. Our results show that combining behavioural and neuroscientific measures of verbal interactions while retaining sufficient experimental control is possible, and that this combination provides promising insights into the mechanisms of spontaneous spoken dialogue.


Subjects
Brain/physiology, Electroencephalography, Speech Perception/physiology, Speech/physiology, Verbal Behavior/physiology, Adult, Evoked Potentials/physiology, Female, Germany, Humans, Language, Male, Reaction Time, Young Adult
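
The reported link between single-trial N400 amplitude and the next turn's response time amounts to a regression on per-trial values, sketched below with simulated data:

```python
# Linear regression of next-turn response time on single-trial N400 amplitude.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_trials = 120
# mean amplitude in an N400 window (e.g. 300-500 ms) per trial, in microvolts
n400_amp = rng.normal(-2.0, 1.5, size=n_trials)
# next-turn response time in ms, weakly related to N400 amplitude (toy effect)
response_time = 900 - 40 * n400_amp + rng.normal(0, 150, size=n_trials)

result = stats.linregress(n400_amp, response_time)
print(f"slope = {result.slope:.1f} ms/uV, p = {result.pvalue:.3f}")
```
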
16.
Front Neurosci ; 13: 720, 2019.
Article in English | MEDLINE | ID: mdl-31379479

ABSTRACT

Electroencephalography (EEG) data can be used to decode an attended speech source in normal-hearing (NH) listeners using high-density EEG caps, as well as around-the-ear EEG devices. The technology may find application in identifying the target speaker in a cocktail-party-like scenario and in steering speech enhancement algorithms in cochlear implants (CIs). However, the poorer spectral resolution and the electrical artifacts introduced by a CI may limit the applicability of this approach to CI users. The goal of this study was to investigate whether selective attention can be decoded in CI users using an around-the-ear EEG system (cEEGrid). The performances of high-density cap EEG recordings and cEEGrid EEG recordings were compared in a selective attention paradigm using an envelope tracking algorithm. Speech from two audio books was presented through insert earphones to NH listeners and via direct audio cable to the CI users. Ten NH listeners and ten bilateral CI users participated in the study. Participants were instructed to attend to one of the two concurrent speech streams while data were recorded simultaneously by a 96-channel scalp EEG and an 18-channel cEEGrid setup. Reconstruction performance was evaluated by means of parametric correlations between the reconstructed speech and the envelopes of both the attended and the unattended speech stream. Results confirm the feasibility of decoding selective attention by means of single-trial EEG data in NH listeners and CI users using high-density EEG. All NH listeners and 9 out of 10 CI users achieved high decoding accuracies. The cEEGrid was successful in decoding selective attention in 5 out of 10 NH listeners. The same result was obtained for CI users.

17.
Front Hum Neurosci ; 13: 141, 2019.
Article in English | MEDLINE | ID: mdl-31105543

ABSTRACT

Artifact Subspace Reconstruction (ASR) is an adaptive method for the online or offline correction of artifacts in multichannel electroencephalography (EEG) recordings. It repeatedly computes a principal component analysis (PCA) on covariance matrices to detect artifacts based on their statistical properties in the component subspace. We adapted the existing ASR implementation by using Riemannian geometry for covariance matrix processing. EEG data that were recorded on a smartphone in both outdoor and indoor conditions were used for evaluation (N = 27). A direct comparison between the original ASR and Riemannian ASR (rASR) was conducted for three performance measures: reduction of eye-blinks (sensitivity), improvement of visual evoked potentials (VEPs) (specificity), and computation time (efficiency). Compared to ASR, our rASR algorithm performed favorably on all three measures. We conclude that rASR is suitable for the offline and online correction of multichannel EEG data acquired in laboratory and in field conditions.
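
A generic sketch of the Riemannian ingredient: the affine-invariant distance between two channel covariance matrices, which can replace Euclidean statistics when judging whether a data segment is artifactual; this is an illustration, not the rASR implementation:

```python
# Affine-invariant Riemannian distance between two covariance matrices.
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    # generalized eigenvalues of (A, B) give d = sqrt(sum(log(lambda_i)^2))
    eigs = eigvalsh(A, B)
    return np.sqrt(np.sum(np.log(eigs) ** 2))

rng = np.random.default_rng(12)

def random_cov(n=8):
    X = rng.normal(size=(n, 500))
    return X @ X.T / 500

clean, artifact = random_cov(), random_cov()
artifact[0, 0] *= 50   # simulate a high-variance artifact on one channel
print(riemannian_distance(clean, clean + 1e-6 * np.eye(8)))  # ~0, near-identical
print(riemannian_distance(clean, artifact))                  # clearly larger
```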

18.
Front Hum Neurosci ; 13: 69, 2019.
Article in English | MEDLINE | ID: mdl-30873015

ABSTRACT

Motor imagery neurofeedback training has been proposed as a potential add-on therapy for motor impairment after stroke, but not everyone benefits from it. Previous work has used white matter integrity to predict motor imagery neurofeedback aptitude in healthy young adults. We set out to test this approach with motor imagery neurofeedback that is closer to that used for stroke rehabilitation and in a sample whose age is closer to that of typical stroke patients. Using shrinkage linear discriminant analysis with fractional anisotropy values in 48 white matter regions as predictors, we predicted whether each participant in a sample of 21 healthy older adults (48-77 years old) was a good or a bad performer with 84.8% accuracy. However, the regions used for prediction in our sample differed from those identified previously, and previously suggested regions did not yield significant prediction in our sample. Including demographic and cognitive variables, which may correlate with motor imagery neurofeedback performance and white matter structure, as candidate predictors revealed an association with age but also led to a loss of statistical significance and somewhat poorer prediction accuracy (69.6%). Our results cast doubt on the feasibility of predicting the benefit of motor imagery neurofeedback from fractional anisotropy. At the very least, such predictions should be based on data collected using the same paradigm and with subjects whose characteristics match those of the target case as closely as possible.
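
A sketch of the prediction approach: shrinkage LDA on fractional anisotropy values per white-matter region, evaluated with leave-one-out cross-validation; the data and labels below are simulated:

```python
# Shrinkage LDA with leave-one-out cross-validation on FA features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(13)
n_subjects, n_regions = 21, 48
fa = rng.uniform(0.3, 0.7, size=(n_subjects, n_regions))   # FA per region
good_performer = rng.integers(0, 2, size=n_subjects)        # binary aptitude label

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
accuracy = cross_val_score(clf, fa, good_performer, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {accuracy:.1%}")
```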

19.
Brain Res ; 1716: 27-38, 2019 08 01.
Article in English | MEDLINE | ID: mdl-28693821

ABSTRACT

Although music performance has been widely studied in the behavioural sciences, less work has addressed the underlying neural mechanisms, perhaps due to technical difficulties in acquiring high-quality neural data during tasks requiring natural motion. The advent of wireless electroencephalography (EEG) presents a solution to this problem by allowing for neural measurement with minimal motion artefacts. In the current study, we provide the first validation of a mobile wireless EEG system for capturing the neural dynamics associated with piano performance. First, we propose a novel method for synchronously recording music performance and wireless mobile EEG. Second, we provide results of several timing tests that characterize the timing accuracy of our system. Finally, we report EEG time domain and frequency domain results from N=40 pianists demonstrating that wireless EEG data capture the unique temporal signatures of musicians' performances with fine-grained precision and accuracy. Taken together, we demonstrate that mobile wireless EEG can be used to measure the neural dynamics of piano performance with minimal motion constraints. This opens many new possibilities for investigating the brain mechanisms underlying music performance.


Subjects
Electroencephalography/instrumentation, Motor Skills/physiology, Adult, Brain/physiology, Electroencephalography/methods, Female, Humans, Male, Music/psychology, Psychomotor Performance/physiology, Wireless Technology/instrumentation
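
A timing test of this kind boils down to comparing stimulus timestamps with the corresponding markers in the EEG stream and reporting the mean lag and its jitter; the sketch below uses simulated timestamps:

```python
# Mean lag and jitter between stimulus timestamps and EEG markers.
import numpy as np

rng = np.random.default_rng(14)
stimulus_times = np.cumsum(rng.uniform(0.5, 1.5, size=200))            # seconds
eeg_marker_times = stimulus_times + 0.012 + rng.normal(0, 0.001, size=200)

lag = eeg_marker_times - stimulus_times
print(f"mean lag = {lag.mean()*1000:.1f} ms, jitter (SD) = {lag.std()*1000:.2f} ms")
```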