Show: 20 | 50 | 100
Results 1 - 20 of 146
1.
Front Hum Neurosci ; 18: 1382959, 2024.
Article in English | MEDLINE | ID: mdl-38818032

ABSTRACT

Balancing is an important skill that supports many daily-life activities. Cognitive-motor interference (CMI) dual-tasking paradigms have been established to identify the cognitive load of complex natural motor tasks, such as running and cycling. Here we used wireless, smartphone-recorded electroencephalography (EEG) and motion sensors while participants were either standing on firm ground or on a slackline, either performing an auditory oddball task (dual-task condition) or no task simultaneously (single-task condition). We expected a reduced amplitude and increased latency of the P3 event-related potential (ERP) component to target sounds for the complex balancing compared to the standing-on-ground condition, and a further decrease in the dual-task compared to the single-task balancing condition. Further, we expected greater postural sway during slacklining while performing the concurrent auditory attention task. Twenty young, experienced slackliners performed an auditory oddball task, silently counting rare target tones presented in a series of frequently occurring standard tones. Results revealed similar P3 topographies and morphologies during both movement conditions. Contrary to our predictions, we observed neither significantly reduced P3 amplitudes nor significantly increased latencies during slacklining. Unexpectedly, we found greater postural sway during slacklining with no additional task compared to dual-tasking. Further, we found a significant correlation between the participants' skill level and P3 latency, but not between skill level and P3 amplitude or postural sway. This pattern of results indicates an interference effect for less skilled individuals, whereas individuals with a high skill level may have shown a facilitation effect. Our study adds to the growing field of research demonstrating that ERPs obtained in uncontrolled, daily-life situations can provide meaningful results.
We argue that the individual CMI effects on the P3 ERP reflect how demanding the balancing task is for untrained individuals, as it draws on limited resources that are otherwise available for auditory attention processing. In future work, the analysis of concurrently recorded motion-sensor signals will help to identify the cognitive demands of motor tasks executed in natural, uncontrolled environments.
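A minimal sketch of how P3 amplitude and latency might be quantified from an averaged ERP, as done when relating them to skill level. All data here are synthetic, and the sampling rate and search window are assumptions, not values from the study:

```python
import numpy as np

def p3_peak(erp, fs, window=(0.25, 0.5)):
    """Return peak amplitude and latency (s) of the P3, taken as the
    maximum of the averaged ERP within a post-stimulus search window."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    segment = erp[start:stop]
    idx = int(np.argmax(segment))
    return segment[idx], (start + idx) / fs

# Synthetic averaged target ERP: a positive deflection ~350 ms post-stimulus
fs = 250
t = np.arange(0, 0.8, 1 / fs)
erp = 5.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # Gaussian "P3"

amp, lat = p3_peak(erp, fs)
```

Such per-participant amplitude and latency values could then be correlated with a skill measure across the group.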

2.
Audiol Neurootol ; : 1-13, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38679013

ABSTRACT

INTRODUCTION: Cochlear implant (CI) users differ greatly in their rehabilitation outcomes, including speech understanding in noise. This variability may be related to brain changes associated with intact senses recruiting cortical areas from stimulation-deprived senses. Numerous studies have demonstrated such cross-modal reorganization in individuals with untreated hearing loss. How it is affected by regular use of hearing devices remains unclear, however. To shed light on this, the current study measured cortical responses reflecting comprehension abilities in experienced CI users and normal-hearing controls. METHODS: Using multichannel electroencephalography, we tested CI users who had used their devices for at least 12 months and closely matched controls (N = 2 × 13). Cortical responses reflecting comprehension abilities - the N400 and late positive complex (LPC) components - were evoked using congruent and incongruent digit-triplet stimuli. The participants' task was to assess digit-triplet congruency by means of timed button presses. All measurements were performed in speech-shaped noise 15 dB above individually measured speech recognition thresholds. Three stimulus presentation modes were used: auditory-only, visual-only, and visual-then-auditory. RESULTS: The analyses revealed no group differences in the N400 and LPC responses. In terms of response times, the CI users were slower and differentially affected by the three stimulus presentation modes relative to the controls. CONCLUSION: Compared to normal-hearing controls, experienced CI users may need more time to comprehend speech in noise. Response times can serve as a proxy for speech comprehension by CI users.

3.
Brain Commun ; 5(6): fcad327, 2023.
Article in English | MEDLINE | ID: mdl-38130839

ABSTRACT

Adaptive control has been studied in Parkinson's disease mainly in the context of proactive control, with mixed results. We compared reactive and proactive control in 30 participants with Parkinson's disease and 30 age-matched healthy control participants. The electroencephalographic activity of the participants was recorded over 128 channels while they performed a numerical Stroop task, in which we controlled for confounding stimulus-response (S-R) learning. We assessed the effects of reactive and proactive control on reaction times, accuracy, and electroencephalographic time-frequency data. Behavioural results showed distinct impairments of proactive and reactive control in participants with Parkinson's disease when tested on their usual medication. Compared to healthy control participants, participants with Parkinson's disease were impaired in their ability to adapt cognitive control proactively and were less effective at resolving conflict using reactive control. Successful reactive and proactive control in the healthy control group was accompanied by a reduced conflict effect between congruent and incongruent items in midline-frontal theta power. Our findings provide evidence for a general impairment of proactive control in Parkinson's disease and highlight the importance of controlling for the effects of S-R learning when studying adaptive control. Evidence concerning reactive control was inconclusive, but we found that participants with Parkinson's disease were less effective than healthy control participants in resolving conflict during the reactive control task.

4.
Int J Audiol ; : 1-10, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38010629

ABSTRACT

OBJECTIVE: To explore whether experience with hearing aid (HA) amplification affects speech-evoked cortical potentials reflecting comprehension abilities. DESIGN: N400 and late positive complex (LPC) responses as well as behavioural response times to congruent and incongruent digit triplets were measured. The digits were presented against stationary speech-shaped noise 10 dB above individually measured speech recognition thresholds. Stimulus presentation was either acoustic (digits 1-3) or first visual (digits 1-2) and then acoustic (digit 3). STUDY SAMPLE: Three groups of older participants (N = 3 × 15) with (1) pure-tone average hearing thresholds <25 dB HL from 500 to 4000 Hz, (2) mild-to-moderate sensorineural hearing loss (SNHL) but no prior HA experience, and (3) mild-to-moderate SNHL and >2 years of HA experience. Groups 2 and 3 were fitted with test devices in accordance with clinical gain targets. RESULTS: No group differences were found in the electrophysiological data. N400 amplitudes were larger and LPC latencies shorter with acoustic presentation. For group 1, behavioural response times were shorter with visual-then-acoustic presentation. CONCLUSION: When speech audibility is ensured, comprehension-related electrophysiological responses appear intact in individuals with mild-to-moderate SNHL, regardless of prior experience with amplified sound. Further research into the effects of audibility versus acclimatisation-related neurophysiological changes is warranted.

5.
Front Neurosci ; 17: 895094, 2023.
Article in English | MEDLINE | ID: mdl-37829725

ABSTRACT

Introduction: As our attention is becoming a commodity that an ever-increasing number of applications are competing for, investing in modern-day tools and devices that can detect our mental states and protect them from outside interruptions holds great value. Mental fatigue and distractions impair our ability to focus and can cause workplace injuries. Electroencephalography (EEG) may reflect concentration, and if EEG equipment became wearable and inconspicuous, innovative brain-computer interfaces (BCI) could be developed to monitor mental load in daily life situations. The purpose of this study is to investigate the potential of EEG recorded inside and around the human ear to determine levels of attention and focus. Methods: In this study, mobile and wireless ear-EEG were recorded concurrently with conventional cap-EEG systems to collect data during tasks related to focus: an N-back task to assess working memory and a mental arithmetic task to assess cognitive workload. The power spectral density (PSD) of the EEG signal was analyzed to isolate consistent differences between mental load conditions and classify epochs using step-wise linear discriminant analysis (swLDA). Results and discussion: Results revealed that spectral features differed statistically between levels of cognitive load for both tasks. Classification algorithms were tested on spectral features from twelve and from two selected channels, for the cap- and the ear-EEG. A two-channel ear-EEG model specifically evaluated the performance of two dry in-ear electrodes. Single-trial classification for both tasks revealed above-chance accuracies for all subjects. For the N-back task, mean accuracies were 96% (cap-EEG) and 95% (ear-EEG) for the twelve-channel models, and 76% (cap-EEG) and 74% (in-ear-EEG) for the two-channel models; for the arithmetic task, they were 82% (cap-EEG) and 85% (ear-EEG) for the twelve-channel models, and 70% (cap-EEG) and 69% (in-ear-EEG) for the two-channel models.
These results suggest that neural oscillations recorded with ear-EEG can be used to reliably differentiate between levels of cognitive workload and working memory, in particular when multi-channel recordings are available, and could, in the near future, be integrated into wearable devices.
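The spectral-feature classification pipeline described above can be sketched roughly as follows. The data are synthetic (alpha suppression standing in for a load effect), and plain LDA from scikit-learn stands in for the step-wise variant (swLDA) used in the study, which additionally performs stepwise feature selection:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_epochs = 250, 120

def make_epoch(load):
    """Synthetic 2-s single-channel epoch: high load suppresses alpha."""
    t = np.arange(0, 2, 1 / fs)
    alpha_amp = 1.0 if load == 0 else 0.4
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

y = rng.integers(0, 2, n_epochs)                 # 0 = low load, 1 = high load
epochs = np.stack([make_epoch(label) for label in y])

# Log band-power features from Welch PSDs (theta 4-7 Hz, alpha 8-12 Hz)
f, psd = welch(epochs, fs=fs, nperseg=fs)

def log_band_power(lo, hi):
    sel = (f >= lo) & (f <= hi)
    return np.log(psd[:, sel].mean(axis=1))

X = np.column_stack([log_band_power(4, 7), log_band_power(8, 12)])

# Cross-validated single-trial classification accuracy
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

In the actual study, such features would come from twelve (or two) recorded channels rather than one synthetic one.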

6.
Sci Rep ; 13(1): 3259, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36828878

ABSTRACT

Turn-taking is a feature of many social interactions such as group music-making, where partners must alternate turns with high precision and accuracy. In two studies of musical rhythm coordination, we investigated how joint action partners learn to coordinate the timing of turn-taking. Musically inexperienced individuals learned to tap at the rate of a pacing cue individually or jointly (in turn with a partner), where each tap produced the next tone in a melodic sequence. In Study 1, partners alternated turns every tap, whereas in Study 2 partners alternated turns every two taps. Findings revealed that partners did not achieve the same level of performance accuracy or precision of inter-tap intervals (ITIs) when producing tapping sequences jointly relative to individually, despite showing learning (increased ITI accuracy and precision across the experiment) in both tasks. Strikingly, partners imposed rhythmic patterns onto jointly produced sequences that captured the temporal structure of turns. Together, these findings suggest that learning to produce novel temporal sequences in turn with a partner is more challenging than learning to produce the same sequences alone. Critically, partners may impose rhythmic structures onto turn-taking sequences as a strategy for facilitating coordination.
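ITI accuracy (deviation of the mean interval from the pacing rate) and precision (variability of the intervals) can be computed along these lines; the tap times below are invented for illustration:

```python
import numpy as np

def iti_stats(tap_times, target_iti):
    """Accuracy (absolute deviation of the mean inter-tap interval from
    the target interval) and precision (SD of the intervals)."""
    itis = np.diff(tap_times)
    accuracy = abs(itis.mean() - target_iti)
    precision = itis.std(ddof=1)
    return accuracy, precision

# Hypothetical example: pacing cue at 600 ms, slightly variable tapping
taps = np.cumsum([0.0, 0.61, 0.59, 0.62, 0.60, 0.58])
acc, prec = iti_stats(taps, 0.600)
```

Comparing these two numbers between individual and joint tapping is what the analyses above rest on.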


Subject(s)
Music , Time Perception , Humans , Periodicity , Learning
7.
Data Brief ; 46: 108847, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36687153

ABSTRACT

This article describes a dataset from one standing and two outdoor walking tasks. Each task was performed twice by the same 18 participants, using foot accelerometers and two different EEG system configurations. The first task was a brief eyes-open/eyes-closed task. The second task was a six-minute auditory oddball task performed in three conditions: standing, walking alone, and walking next to an experimenter. In the third task, the participants walked with the experimenter in three conditions: with their view of the experimenter blocked, walking naturally, and trying to synchronize their steps with the experimenter. During all walking conditions that included the experimenter, the experimenter walked to a headphone metronome to keep their steps consistent and also wore a foot accelerometer. All tasks were performed twice on two separate days, using active-electrode and passive-electrode EEG configurations (Brain Products GmbH). The data were used for Scanlon et al. (2021) and Scanlon et al. (2022), and could be used for learning about attention, walking mechanisms and social neuroscience. Scanlon, J. E., Jacobsen, N. S. J., Maack, M. C., & Debener, S. (2021). Does the electrode amplification style matter? A comparison of active and passive EEG system configurations during standing and walking. European Journal of Neuroscience, 54(12), 8381-8395. Scanlon, J. E. M., Jacobsen, N. S. J., Maack, M. C., & Debener, S. (2022). Stepping in time: Alpha-mu and beta oscillations during a walking synchronization task. NeuroImage, 253, 119099.

8.
Trends Hear ; 26: 23312165221139733, 2022.
Article in English | MEDLINE | ID: mdl-36423251

ABSTRACT

Effective communication requires good speech perception abilities. Speech perception can be assessed with behavioral and electrophysiological methods. Relating these two types of measures to each other can provide a basis for new clinical tests. In audiological practice, speech detection and discrimination are routinely assessed, whereas comprehension-related aspects are ignored. The current study compared behavioral and electrophysiological measures of speech detection, discrimination, and comprehension. Thirty young normal-hearing native Danish speakers participated. All measurements were carried out with digits and stationary speech-shaped noise as the stimuli. The behavioral measures included speech detection thresholds (SDTs), speech recognition thresholds (SRTs), and speech comprehension scores (i.e., response times). For the electrophysiological measures, multichannel electroencephalography (EEG) recordings were performed. N100 and P300 responses were evoked using an active auditory oddball paradigm. N400 and Late Positive Complex (LPC) responses were evoked using a paradigm based on congruent and incongruent digit triplets, with the digits presented either all acoustically or first visually (digits 1-2) and then acoustically (digit 3). While no correlations between the SDTs and SRTs and the N100 and P300 responses were found, the response times were correlated with the EEG responses to the congruent and incongruent triplets. Furthermore, significant differences between the response times (but not EEG responses) obtained with auditory and visual-then-auditory stimulus presentation were observed. This pattern of results could reflect a faster recall mechanism when the first two digits are presented visually rather than acoustically. The visual-then-auditory condition may facilitate the assessment of comprehension-related processes in hard-of-hearing individuals.


Subject(s)
Speech Perception , Speech , Humans , Female , Male , Comprehension , Electroencephalography , Evoked Potentials
9.
Front Sports Act Living ; 4: 945341, 2022.
Article in English | MEDLINE | ID: mdl-36275441

ABSTRACT

Walking on natural terrain while performing a dual task, such as typing on a smartphone, is a common behavior. Since dual-tasking and terrain change gait characteristics, it is of interest to understand how altered gait is reflected in changes in gait-associated neural signatures. A study was performed with 64-channel electroencephalography (EEG) of healthy volunteers, recorded while they walked over uneven and even terrain outdoors, with and without performing a concurrent task (self-paced button pressing with both thumbs). Data from n = 19 participants (M = 24 years, 13 females) were analyzed regarding gait-phase-related power modulations (GPM) and gait performance (stride time and stride-time variability). GPMs changed significantly with terrain, but not with the task, and no evidence of an interaction was observed: the beta-band power decrease following right-heel strikes was more pronounced on uneven than on even terrain. Stride times were longer on uneven compared to even terrain and during dual- compared to single-task gait, with no significant interaction. Stride-time variability increased on uneven compared to even terrain, but did not differ between single- and dual-tasking. These results indicate that as terrain difficulty increases, strides become slower and more irregular, whereas a secondary task lengthens stride time only. Mobile EEG captures GPM differences linked to terrain changes, suggesting that altered gait-control demands and the associated cortical processes can be identified. This and further studies may help to lay the foundation for protocols assessing the cognitive demand of natural gait on the motor system.
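Stride time and stride-time variability of the kind analyzed here can be derived from successive heel-strike times of one foot; the heel-strike times below are made up for illustration (slower and more irregular strides standing in for uneven terrain):

```python
import numpy as np

def stride_stats(heel_strikes):
    """Mean stride time (s) and stride-time variability (coefficient of
    variation, %) from successive heel-strike times of one foot."""
    stride_times = np.diff(heel_strikes)
    mean_st = stride_times.mean()
    cv = 100 * stride_times.std(ddof=1) / mean_st
    return mean_st, cv

# Hypothetical heel-strike times (s)
even = np.cumsum([0.0, 1.10, 1.11, 1.09, 1.10, 1.10])
uneven = np.cumsum([0.0, 1.25, 1.18, 1.30, 1.22, 1.28])
mean_even, cv_even = stride_stats(even)
mean_uneven, cv_uneven = stride_stats(uneven)
```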

10.
Front Neurosci ; 16: 904003, 2022.
Article in English | MEDLINE | ID: mdl-36117630

ABSTRACT

Recent advancements in neuroscientific research and miniaturized ear-electroencephalography (EEG) technologies have led to the idea of employing brain signals as additional input to hearing aid algorithms. The information acquired through EEG could potentially be used to control the audio signal processing of the hearing aid or to monitor communication-related physiological factors. In previous work, we implemented a research platform to develop methods that utilize EEG in combination with a hearing device. The setup combines currently available mobile EEG hardware and the so-called Portable Hearing Laboratory (PHL), which can fully replicate a complete hearing aid. Audio and EEG data are synchronized using the Lab Streaming Layer (LSL) framework. In this study, we evaluated the setup in three scenarios focusing particularly on the alignment of audio and EEG data. In Scenario I, we measured the latency between software event markers and actual audio playback of the PHL. In Scenario II, we measured the latency between an analog input signal and the sampled data stream of the EEG system. In Scenario III, we measured the latency in the whole setup as it would be used in a real EEG experiment. The results of Scenario I showed a jitter (standard deviation of trial latencies) of below 0.1 ms. The jitter in Scenarios II and III was around 3 ms in both cases. The results suggest that the increased jitter compared to Scenario I can be attributed to the EEG system. Overall, the findings show that the measurement setup can present acoustic stimuli with accurate timing while generating LSL data streams over multiple hours of playback. Further, the setup can capture the audio and EEG LSL streams with sufficient temporal accuracy to extract event-related potentials from EEG signals. We conclude that our setup is suitable for studying closed-loop EEG and audio applications for future hearing aids.
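The lag and jitter measures used across the three scenarios boil down to the mean and standard deviation of per-trial latencies between event markers and measured onsets. A minimal sketch with hypothetical marker and onset times (a constant ~20 ms lag plus sub-millisecond jitter):

```python
import numpy as np

def latency_stats(marker_times, onset_times):
    """Lag (mean) and jitter (SD) of per-trial latencies between software
    event markers and the corresponding measured audio onsets."""
    latencies = np.asarray(onset_times) - np.asarray(marker_times)
    return latencies.mean(), latencies.std(ddof=1)

# Hypothetical trials
markers = np.array([1.000, 2.500, 4.000, 5.500, 7.000])
onsets = markers + 0.020 + np.array([0.0005, -0.0003, 0.0002, -0.0004, 0.0001])
lag, jitter = latency_stats(markers, onsets)
```

A stable lag can be compensated by a constant offset; it is the jitter that limits ERP extraction.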

11.
Front Neurosci ; 16: 883966, 2022.
Article in English | MEDLINE | ID: mdl-35812225

ABSTRACT

The need for diagnostic capabilities for sleep disorders such as sleep apnea and insomnia far exceeds the capacity of inpatient sleep laboratories. Some home monitoring systems omit electroencephalography (EEG) because trained personnel may be needed to apply EEG sensors. Since EEG is essential for the detailed evaluation of sleep, better systems supporting the convenient and robust recording of sleep EEG at home are desirable. Recent advances in EEG acquisition with flex-printed sensors promise easier application of EEG sensor arrays for chronic recordings, yet these sensor arrays were not designed for sleep EEG. Here we explored the self-applicability of a new sleep EEG sensor array (trEEGrid) without prior training. We developed a prototype with pre-gelled neonatal ECG electrodes placed on a self-adhesive grid shape that guided the fast and correct positioning of a total of nine electrodes on the face and around the ear. Positioning of the sensors was based on the results of a previous ear-EEG sleep study (da Silva Souto et al., 2021), and included electrodes around the ear, one eye, and the chin. For comparison, EEG and electrooculogram channels placed according to the American Academy of Sleep Medicine criteria, as well as respiratory inductance plethysmography on thorax and abdomen, oxygen saturation, pulse and body position were included with a mobile polysomnography (PSG) system. Two studies with 32 individuals were conducted to compare the signal quality of the proposed flex-printed grid with PSG signals and to explore self-application of the new grid at home. Results indicate that the new array is self-applicable by healthy participants without on-site hands-on support. A comparison of the hypnogram annotations obtained from the data of both systems revealed an overall substantial agreement on a group level (Cohen's κ = 0.70 ± 0.01). 
These results suggest that flex-printed pre-gelled sensor arrays designed for sleep EEG acquisition can facilitate self-recording at home.
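Hypnogram agreement of the kind reported here (Cohen's κ) is computed over paired sleep-stage labels, one per scoring epoch. The two toy hypnograms below are invented for illustration and are not data from the study:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 30-s-epoch hypnograms from two recording systems
# (W = wake, 1-3 = NREM stages, R = REM)
psg_scores  = ["W", "W", "1", "2", "2", "3", "3", "2", "R", "R", "R", "2", "W"]
grid_scores = ["W", "1", "1", "2", "2", "3", "2", "2", "R", "R", "R", "2", "W"]

kappa = cohen_kappa_score(psg_scores, grid_scores)
```

Unlike raw percent agreement, κ corrects for the agreement expected by chance given each scorer's stage distribution.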

12.
Front Neurosci ; 16: 869426, 2022.
Article in English | MEDLINE | ID: mdl-35592265

ABSTRACT

Auditory attention is an important cognitive function used to separate relevant from irrelevant auditory information. However, most findings on attentional selection have been obtained in highly controlled laboratory settings using bulky recording setups and unnaturalistic stimuli. Recent advances in electroencephalography (EEG) facilitate the measurement of brain activity outside the laboratory, and around-the-ear sensors such as the cEEGrid promise unobtrusive acquisition. In parallel, methods such as speech envelope tracking, intersubject correlations and spectral entropy measures emerged which allow us to study attentional effects in the neural processing of natural, continuous auditory scenes. In the current study, we investigated whether these three attentional measures can be reliably obtained when using around-the-ear EEG. To this end, we analyzed the cEEGrid data of 36 participants who attended to one of two simultaneously presented speech streams. Speech envelope tracking results confirmed a reliable identification of the attended speaker from cEEGrid data. The accuracies in identifying the attended speaker increased when fitting the classification model to the individual. Artifact correction of the cEEGrid data with artifact subspace reconstruction did not increase the classification accuracy. Intersubject correlations were higher for those participants attending to the same speech stream than for those attending to different speech streams, replicating previously obtained results with high-density cap-EEG. We also found that spectral entropy decreased over time, possibly reflecting the decrease in the listener's level of attention. Overall, these results support the idea of using ear-EEG measurements to unobtrusively monitor auditory attention to continuous speech. This knowledge may help to develop assistive devices that support listeners separating relevant from irrelevant information in complex auditory environments.
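In its simplest correlation-based form, envelope-tracking-based identification of the attended speaker compares how well each candidate speech envelope matches an envelope estimate reconstructed from the EEG. The sketch below uses synthetic envelopes, not the cEEGrid data, and a toy reconstruction:

```python
import numpy as np

def attended_speaker(reconstructed, env_a, env_b):
    """Pick the attended stream as the speech envelope that correlates
    more strongly with the envelope reconstructed from EEG."""
    r_a = np.corrcoef(reconstructed, env_a)[0, 1]
    r_b = np.corrcoef(reconstructed, env_b)[0, 1]
    return ("A", r_a, r_b) if r_a > r_b else ("B", r_a, r_b)

rng = np.random.default_rng(1)
env_a = np.abs(rng.normal(size=2000))                  # attended envelope
env_b = np.abs(rng.normal(size=2000))                  # ignored envelope
reconstructed = 0.3 * env_a + rng.normal(0, 1, 2000)   # noisy EEG-based estimate

winner, r_a, r_b = attended_speaker(reconstructed, env_a, env_b)
```

In practice the reconstruction comes from a regression model mapping EEG to the envelope, and fitting that model per individual is what improved accuracy in the study.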

13.
Sci Rep ; 12(1): 3570, 2022 03 04.
Article in English | MEDLINE | ID: mdl-35246563

ABSTRACT

Compared to functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS) has several advantages that make it particularly interesting for neurofeedback (NFB). A prerequisite for NFB applications is that signals from the brain region of interest can be measured with fNIRS. This study focused on the supplementary motor area (SMA). Healthy older participants (N = 16) completed separate continuous-wave (CW-) fNIRS and (f)MRI sessions. Data were collected for executed and imagined hand movements (motor imagery, MI), and for MI of whole-body movements. Individual anatomical data were used (i) to define the regions of interest for fMRI analysis, (ii) to extract the fMRI BOLD response from the cortical regions corresponding to the fNIRS channels, and (iii) to select fNIRS channels. Concentration changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin were considered in the analyses. Results revealed subtle differences between the different MI tasks, indicating that for whole-body MI movements as well as for MI of hand movements, [Formula: see text] is the more specific signal. Selection of the fNIRS channel set based on individual anatomy did not improve the results. Overall, the study indicates that, in terms of spatial specificity and task sensitivity, SMA activation can be reliably measured with CW-fNIRS.


Subject(s)
Motor Cortex , Neurofeedback , Brain Mapping , Humans , Magnetic Resonance Imaging/methods , Motor Cortex/diagnostic imaging , Motor Cortex/physiology , Neurofeedback/physiology , Spectroscopy, Near-Infrared/methods
14.
Front Neurogenom ; 3: 793061, 2022.
Article in English | MEDLINE | ID: mdl-38235458

ABSTRACT

With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating a high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
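Computing ERPs from continuous EEG given sound-onset times amounts to epoching, baseline correction, and averaging. A self-contained sketch on synthetic data (the sampling rate, window, and evoked deflection are assumptions for illustration):

```python
import numpy as np

def epoch_average(eeg, onsets, fs, tmin=-0.1, tmax=0.4):
    """Cut EEG epochs around sound onsets, baseline-correct each epoch
    with its pre-stimulus interval, and average them into an ERP."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        i = int(onset * fs)
        if i - pre < 0 or i + post > eeg.size:
            continue                              # skip incomplete epochs
        seg = eeg[i - pre:i + post]
        epochs.append(seg - seg[:pre].mean())     # baseline correction
    return np.mean(epochs, axis=0)

# Synthetic data: a 50-ms evoked deflection ~100 ms after each sound onset
fs = 500
eeg = np.random.default_rng(2).normal(0, 1, fs * 60)
onsets = np.arange(1.0, 58.0, 1.5)
for onset in onsets:
    i = int((onset + 0.1) * fs)
    eeg[i:i + 25] += 4.0

erp = epoch_average(eeg, onsets, fs)
```

Averaging across epochs suppresses the ongoing background activity, which is why the onset timing delivered by the apps has to be precise.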

15.
Sensors (Basel) ; 21(23)2021 Dec 05.
Article in English | MEDLINE | ID: mdl-34884139

ABSTRACT

The streaming and recording of smartphone sensor signals is desirable for mHealth, telemedicine, environmental monitoring and other applications. Time series data gathered in these fields typically benefit from the time-synchronized integration of different sensor signals. However, solutions required for this synchronization are mostly available for stationary setups. We hope to contribute to the important emerging field of portable data acquisition by presenting open-source Android applications both for the synchronized streaming (Send-a) and recording (Record-a) of multiple sensor data streams. We validate the applications in terms of functionality, flexibility and precision in fully mobile setups and in hybrid setups combining mobile and desktop hardware. Our results show that the fully mobile solution is equivalent to well-established desktop versions. With the streaming application Send-a and the recording application Record-a, purely smartphone-based setups for mobile research and personal health settings can be realized on off-the-shelf Android devices.


Subject(s)
Mobile Applications , Telemedicine , Smartphone , Time Factors
16.
Front Hum Neurosci ; 15: 734231, 2021.
Article in English | MEDLINE | ID: mdl-34776906

ABSTRACT

When multiple sound sources are present at the same time, auditory perception is often challenged with disentangling the resulting mixture and focusing attention on the target source. It has been repeatedly demonstrated that background (distractor) sound sources are easier to ignore when their spectrotemporal signature is predictable. Prior evidence suggests that this ability to exploit predictability for foreground-background segregation degrades with age. On a theoretical level, this has been related to an impairment in elderly adults' capabilities to detect certain types of sensory deviance in unattended sound sequences. Yet the link between those two capacities, deviance detection and predictability-based sound source segregation, has not been empirically demonstrated. Here we report on a combined behavioral-EEG study investigating the ability of elderly listeners (60-75 years of age) to use predictability as a cue for sound source segregation, as well as their sensory deviance detection capacities. Listeners performed a detection task on a target stream that could only be solved when a concurrent distractor stream was successfully ignored. We contrasted two conditions whose distractor streams differed in their predictability. The ability to benefit from predictability was operationalized as the performance difference between the two conditions. Results show that elderly listeners can use predictability for sound source segregation at group level, yet with a high degree of inter-individual variation in this ability. In a further, passive-listening control condition, we measured correlates of deviance detection in the event-related brain potential (ERP) elicited by occasional deviations from the same spectrotemporal pattern as used for the predictable distractor sequence during the behavioral task. ERP results confirmed neural signatures of deviance detection in terms of mismatch negativity (MMN) at group level.
Correlation analyses at single-subject level provide no evidence for the hypothesis that deviance detection ability (measured by MMN amplitude) is related to the ability to benefit from predictability for sound source segregation. These results are discussed in the frameworks of sensory deviance detection and predictive coding.

17.
Front Digit Health ; 3: 688122, 2021.
Article in English | MEDLINE | ID: mdl-34713159

ABSTRACT

A comfortable, discreet and robust recording of the sleep EEG signal at home is a desirable goal but has been difficult to achieve. We investigate how well flex-printed electrodes are suited for sleep monitoring tasks in a smartphone-based home environment. The cEEGrid ear-EEG sensor has already been tested in the laboratory for measuring night sleep. Here, 10 participants slept at home and were equipped with a cEEGrid and a portable amplifier (mBrainTrain, Serbia). In addition, the EEG of Fpz, EOG_L and EOG_R was recorded. All signals were recorded wirelessly with a smartphone. On average, each participant provided data for M = 7.48 h. An expert sleep scorer twice created hypnograms and annotated grapho-elements according to AASM criteria based on the EEG of Fpz, EOG_L and EOG_R; this repeated scoring served as the baseline agreement for further comparisons. The expert scorer also created hypnograms using bipolar channels based on combinations of cEEGrid channels only, and bipolar cEEGrid channels complemented by EOG channels. A comparison of the hypnograms based on frontal electrodes with the ones based on cEEGrid electrodes (κ = 0.67) and the ones based on cEEGrid complemented by EOG channels (κ = 0.75) both showed a substantial agreement, with the combination including EOG channels showing a significantly better outcome than the one without (p = 0.006). Moreover, signal excerpts of the conventional channels containing grapho-elements were correlated with those of the cEEGrid in order to determine the cEEGrid channel combination that optimally represents the annotated grapho-elements. The results show that the grapho-elements were well represented by the front-facing electrode combinations. The correlation analysis of the grapho-elements resulted in an average correlation coefficient of 0.65 for the most suitable electrode configuration of the cEEGrid. The results confirm that sleep stages can be identified with electrodes placed around the ear.
This opens up opportunities for miniaturized ear-EEG systems that may be self-applied by users.
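The scorer-agreement statistic reported above, Cohen's κ, can be sketched in a few lines. This is an illustrative implementation with made-up, hypothetical hypnogram labels (one AASM stage per 30-s epoch), not the study's data or code:

```python
# Minimal sketch: Cohen's kappa between two hypnograms, i.e. two sequences
# of AASM sleep-stage labels ("W", "N1", "N2", "N3", "R") per 30-s epoch.
# The example hypnograms below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equally long label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's stage distribution.
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[s] * pb[s] for s in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

hyp_frontal = ["W", "N1", "N2", "N2", "N3", "N3", "R", "R"]
hyp_ceegrid = ["W", "N1", "N2", "N2", "N2", "N3", "R", "W"]
print(round(cohens_kappa(hyp_frontal, hyp_ceegrid), 2))
```

κ corrects raw epoch-by-epoch agreement for the agreement expected by chance given each scorer's stage distribution, which is why it is preferred over simple percent agreement when comparing hypnograms.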

18.
Front Hum Neurosci ; 15: 717810, 2021.
Article in English | MEDLINE | ID: mdl-34588966

ABSTRACT

Interpersonal synchrony refers to the temporal coordination of actions between individuals and is a common feature of social behaviors, from team sport to ensemble music performance. Interpersonal synchrony of many rhythmic (periodic) behaviors displays the dynamics of coupled biological oscillators. The current study addresses oscillatory dynamics at the levels of brain and behavior between music duet partners performing at spontaneous (uncued) rates. Wireless EEG was measured from N = 20 pairs of pianists as they performed a melody first in Solo performance (at their spontaneous rate of performance), and then in Duet performances at each partner's spontaneous rate. Influences of partners' spontaneous rates on interpersonal synchrony were assessed by correlating differences in partners' spontaneous rates of Solo performance with Duet tone onset asynchronies. Coupling between partners' neural oscillations was assessed by correlating amplitude envelope fluctuations of cortical oscillations at the Duet performance frequency between observed partners and between surrogate (re-paired) partners, who performed the same melody but at different times. Duet synchronization was influenced by partners' spontaneous rates in Solo performance. The size and direction of the difference in partners' spontaneous rates were mirrored in the size and direction of the Duet asynchronies. Moreover, observed Duet partners showed greater inter-brain correlations of oscillatory amplitude fluctuations than did surrogate partners, suggesting that performing in synchrony with a musical partner is reflected in coupled cortical dynamics at the performance frequency. The current study provides evidence that the dynamics of oscillator coupling are reflected in both behavioral and neural measures of temporal coordination during musical joint action.
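The inter-brain measure described above, correlating amplitude envelope fluctuations between partners, can be sketched as follows. The signals, sampling rate, and modulation frequencies here are invented for illustration; the study's actual preprocessing, channel selection, and band-pass filtering at the performance frequency are not reproduced:

```python
# Sketch: amplitude-envelope correlation between two (band-limited) signals.
# Two synthetic "pianist" signals share a slow amplitude modulation of a
# common 2 Hz rhythm, standing in for coupled cortical oscillations.
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Amplitude envelope via the analytic signal (Hilbert transform)."""
    return np.abs(hilbert(x))

def envelope_correlation(eeg_a, eeg_b):
    """Pearson correlation between the two amplitude envelopes."""
    return np.corrcoef(envelope(eeg_a), envelope(eeg_b))[0, 1]

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2500)                      # 10 s at 250 Hz
shared = 1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)    # common slow modulation
osc = np.sin(2 * np.pi * 2.0 * t)                 # 2 Hz "performance" rhythm
pianist_a = shared * osc + 0.1 * rng.standard_normal(t.size)
pianist_b = shared * osc + 0.1 * rng.standard_normal(t.size)
print(envelope_correlation(pianist_a, pianist_b))  # high for coupled signals
```

Re-pairing signals from recordings with different modulations (the surrogate-partner logic) would yield lower envelope correlations, which is the contrast the study exploits.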

19.
Front Neurosci ; 15: 685774, 2021.
Article in English | MEDLINE | ID: mdl-34194296

ABSTRACT

Several solutions have been proposed to study the relationship between ongoing brain activity and natural sensory stimuli, such as running speech. Computing the intersubject correlation (ISC) has been proposed as one possible approach. Previous evidence suggests that ISCs between participants' electroencephalogram (EEG) signals may be modulated by attention. The current study addressed this question in a competing-speaker paradigm, where participants (N = 41) had to attend to one of two concurrently presented speech streams. ISCs between participants' EEG were higher for participants attending to the same story compared to participants attending to different stories. Furthermore, we found that ISCs between individual and group data predicted whether an individual attended to the left or right speech stream. Interestingly, the magnitude of the shared neural response with others attending to the same story was related to the individual neural representation of the attended and ignored speech envelope. Overall, our findings indicate that ISC differences reflect the magnitude of selective attentional engagement to speech.
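A leave-one-out version of the intersubject correlation idea can be sketched as below. EEG ISC analyses commonly rely on correlated component analysis over multichannel data; this simplified, single-channel sketch with synthetic data only conveys the principle of correlating one participant's response with the rest of the group:

```python
# Sketch: leave-one-out intersubject correlation (ISC). Each participant's
# time course is correlated with the mean of the remaining participants;
# a shared stimulus-driven response yields ISC well above zero.
import numpy as np

def leave_one_out_isc(data):
    """data: array of shape (participants, samples). ISC per participant."""
    iscs = []
    for i in range(data.shape[0]):
        others = np.delete(data, i, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(iscs)

rng = np.random.default_rng(1)
stimulus_response = rng.standard_normal(1000)   # shared speech-driven signal
# Five "participants" = shared response plus individual noise.
attended = stimulus_response + 0.5 * rng.standard_normal((5, 1000))
print(leave_one_out_isc(attended).mean())  # well above zero
```

In the attention contrast above, participants attending to the same story would share more of the stimulus-driven component, and hence show higher ISC, than participants attending to different stories.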

20.
Front Neurosci ; 15: 643705, 2021.
Article in English | MEDLINE | ID: mdl-33828451

ABSTRACT

Difficulties in selectively attending to one among several speakers have mainly been associated with the distraction caused by ignored speech. In the current study, we therefore investigated the neural processing of ignored speech in a two-competing-speaker paradigm. For this, we recorded participants' brain activity using electroencephalography (EEG) to track the neural representation of the attended and ignored speech envelopes. To provoke distraction, we occasionally embedded each participant's first name in the ignored speech stream. Retrospective reports as well as the presence of a P3 component in response to the name indicate that participants noticed the occurrence of their name. As predicted, the neural representation of the ignored speech envelope increased after the name was presented therein, suggesting that the name had attracted the participant's attention. Interestingly, and in contrast to our hypothesis, the neural tracking of the attended speech envelope also increased after the name occurrence. We therefore conclude that the name may have distracted the participants only briefly, if at all, and instead alerted them to refocus on their actual task. These observations remained robust even when the sound intensity of the ignored speech stream, and thus of the name, was attenuated.
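The "neural tracking" of a speech envelope mentioned above can be approximated by a lagged correlation between the EEG and the envelope; published work typically uses stimulus reconstruction or temporal response functions, so the following synthetic-data sketch is only a conceptual illustration:

```python
# Sketch: a lagged-correlation tracking index. The EEG is correlated with
# the speech envelope over a range of lags (the neural response trails the
# stimulus), and the peak correlation serves as the tracking measure.
import numpy as np

def tracking_index(eeg, envelope, max_lag=50):
    """Peak Pearson correlation over lags 0..max_lag samples (EEG lags stimulus)."""
    best = 0.0
    for lag in range(max_lag + 1):
        r = np.corrcoef(eeg[lag:], envelope[:len(envelope) - lag])[0, 1]
        best = max(best, r)
    return best

rng = np.random.default_rng(2)
env = np.abs(rng.standard_normal(2000))             # stand-in speech envelope
lagged = np.concatenate([np.zeros(10), env[:-10]])  # response ~10 samples late
eeg = lagged + 1.0 * rng.standard_normal(2000)      # plus background activity
print(tracking_index(eeg, env))
```

Computed separately for the attended and the ignored envelope, such an index gives the two tracking measures whose changes around the name occurrence the study compares.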
