Results 1 - 9 of 9
1.
Ear Hear ; 41(6): 1635-1647, 2020.
Article in English | MEDLINE | ID: mdl-33136638

ABSTRACT

OBJECTIVE: Top-down spatial attention is effective at selecting a target sound from a mixture. However, nonspatial features often distinguish sources in addition to location. This study explores whether redundant nonspatial features are used to maintain selective auditory attention for a spatially defined target. DESIGN: We recorded electroencephalography while subjects focused attention on one of three simultaneous melodies. In one experiment, subjects (n = 17) were given an auditory cue indicating both the location and pitch of the target melody. In a second experiment (n = 17 subjects), the cue only indicated target location, and we compared two conditions: one in which the pitch separation of competing melodies was large, and one in which this separation was small. RESULTS: In both experiments, responses evoked by onsets of events in sound streams were modulated by attention, and we found no significant difference in this modulation between small and large pitch separation conditions. Therefore, the evoked response reflected that target stimuli were the focus of attention, and distractors were suppressed successfully for all experimental conditions. In all cases, parietal alpha was lateralized following the cue, but before melody onset, indicating that subjects initially focused attention in space. During the stimulus presentation, this lateralization disappeared when pitch cues were strong but remained significant when pitch cues were weak, suggesting that strong pitch cues reduced reliance on sustained spatial attention. CONCLUSIONS: These results demonstrate that once a well-defined target stream at a known location is selected, top-down spatial attention plays a weak role in filtering out a segregated competing stream.


Subject(s)
Attention , Sound Localization , Acoustic Stimulation , Auditory Perception , Cues , Electroencephalography , Humans
2.
Psychol Res ; 78(3): 349-60, 2014.
Article in English | MEDLINE | ID: mdl-24633644

ABSTRACT

Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and audition, the "unit" on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Perceptual Masking/physiology , Voice , Acoustic Stimulation , Adult , Female , Humans , Male , Young Adult
3.
Sci Rep ; 14(1): 13039, 2024 06 06.
Article in English | MEDLINE | ID: mdl-38844793

ABSTRACT

Sleep onset insomnia is a pervasive problem that contributes significantly to the poor health outcomes associated with insufficient sleep. Auditory stimuli phase-locked to slow-wave sleep oscillations have been shown to augment deep sleep, but it is unknown whether a similar approach can be used to accelerate sleep onset. The present randomized controlled crossover trial enrolled adults with objectively verified sleep onset latencies (SOLs) greater than 30 min to test the effect of auditory stimuli delivered at specific phases of participants' alpha oscillations prior to sleep onset. During the intervention week, participants wore an electroencephalogram (EEG)-enabled headband that delivered acoustic pulses timed to arrive anti-phase with alpha for 30 min (Stimulation). During the Sham week, the headband silently recorded EEG. The primary outcome was SOL determined by blinded scoring of EEG records. For the 21 subjects included in the analyses, stimulation had a significant effect on SOL according to a linear mixed effects model (p = 0.0019), and weekly average SOL decreased by 10.5 ± 15.9 min (29.3 ± 44.4%). These data suggest that phase-locked acoustic stimulation can be a viable alternative to pharmaceuticals to accelerate sleep onset in individuals with prolonged sleep onset latencies. Trial Registration: This trial was first registered on clinicaltrials.gov on 24/02/2023 under the name Sounds Locked to ElectroEncephalogram Phase For the Acceleration of Sleep Onset Time (SLEEPFAST), and assigned registry number NCT05743114.


Subject(s)
Acoustic Stimulation , Electroencephalography , Sleep Initiation and Maintenance Disorders , Humans , Male , Female , Adult , Sleep Initiation and Maintenance Disorders/therapy , Sleep Initiation and Maintenance Disorders/physiopathology , Acoustic Stimulation/methods , Middle Aged , Cross-Over Studies , Treatment Outcome , Alpha Rhythm/physiology
4.
eNeuro ; 11(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38834300

ABSTRACT

Following repetitive visual stimulation, post hoc phase analysis finds that visually evoked response magnitudes vary with the cortical alpha oscillation phase that temporally coincides with sensory stimulus. This approach has not successfully revealed an alpha phase dependence for auditory evoked or induced responses. Here, we test the feasibility of tracking alpha with scalp electroencephalogram (EEG) recordings and play sounds phase-locked to individualized alpha phases in real-time using a novel end-point corrected Hilbert transform (ecHT) algorithm implemented on a research device. Based on prior work, we hypothesize that sound-evoked and induced responses vary with the alpha phase at sound onset and the alpha phase that coincides with the early sound-evoked response potential (ERP) measured with EEG. Thus, we use each subject's individualized alpha frequency (IAF) and individual auditory ERP latency to define target trough and peak alpha phases that allow an early component of the auditory ERP to align to the estimated poststimulus peak and trough phases, respectively. With this closed-loop and individualized approach, we find opposing alpha phase-dependent effects on the auditory ERP and alpha oscillations that follow stimulus onset. Trough and peak phase-locked sounds result in distinct evoked and induced post-stimulus alpha level and frequency modulations. Though additional studies are needed to localize the sources underlying these phase-dependent effects, these results suggest a general principle for alpha phase-dependence of sensory processing that includes the auditory system. Moreover, this study demonstrates the feasibility of using individualized neurophysiological indices to deliver automated, closed-loop, phase-locked auditory stimulation.


Subject(s)
Acoustic Stimulation , Alpha Rhythm , Electroencephalography , Auditory Evoked Potentials , Humans , Acoustic Stimulation/methods , Auditory Evoked Potentials/physiology , Male , Female , Electroencephalography/methods , Alpha Rhythm/physiology , Adult , Young Adult , Brain/physiology , Auditory Perception/physiology , Algorithms , Feasibility Studies
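The phase-targeting logic described in the abstract above (choosing the alpha phase at sound onset so that, after the auditory ERP latency, the oscillation sits at a desired peak or trough) reduces to simple arithmetic. A minimal sketch; the function name and the example parameter values are illustrative assumptions, not values from the study:

```python
import math

def onset_phase_for_target(iaf_hz, erp_latency_s, target_phase_rad):
    """Alpha phase required at sound onset so that the oscillation
    reaches target_phase_rad when the early ERP component arrives.
    (Hypothetical helper; names and values are not from the study.)"""
    # Radians the oscillation traverses during the ERP latency
    advance = 2 * math.pi * iaf_hz * erp_latency_s
    return (target_phase_rad - advance) % (2 * math.pi)

# Example: 10 Hz individualized alpha frequency (IAF) and a 100 ms ERP
# latency -- alpha advances exactly one full cycle, so the required
# onset phase equals the target phase.
phase = onset_phase_for_target(10.0, 0.100, math.pi)
print(round(phase, 3))  # prints 3.142
```

The same calculation with a different IAF or latency yields a different onset phase, which is why the abstract emphasizes individualizing both parameters per subject.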
5.
J Neural Eng ; 20(5)2023 10 05.
Article in English | MEDLINE | ID: mdl-37726002

ABSTRACT

Objective. Healthy sleep plays a critical role in general well-being. Enhancement of slow-wave sleep by targeting acoustic stimuli to particular phases of delta (0.5-2 Hz) waves has shown promise as a non-invasive approach to improve sleep quality. Closed-loop stimulation during other sleep phases targeting oscillations at higher frequencies such as theta (4-7 Hz) or alpha (8-12 Hz) could be another approach to realize additional health benefits. However, systems to track and deliver stimulation relative to the instantaneous phase of electroencephalogram (EEG) signals at these higher frequencies have yet to be demonstrated outside of controlled laboratory settings. Approach. Here we examine the feasibility of using an endpoint-corrected version of the Hilbert transform (ecHT) algorithm implemented on a headband wearable device to measure alpha phase and deliver phase-locked auditory stimulation during the transition from wakefulness to sleep, during which alpha power is greatest. First, the ecHT algorithm is implemented in silico to evaluate its performance characteristics across a range of sleep-related oscillatory frequencies. Second, a pilot sleep study tests the feasibility of using the wearable device in the home setting to measure EEG activity during sleep and deliver real-time phase-locked stimulation. Main results. The ecHT is capable of computing the instantaneous phase of oscillating signals with high precision, allowing auditory stimulation to be delivered at the intended phases of neural oscillations with low phase error. The wearable system was capable of measuring sleep-related neural activity with sufficient fidelity for sleep stage scoring during the at-home study, and phase-tracking performance matched simulated results. Users were able to successfully operate the system independently using the companion smartphone app to collect data and administer stimulation, and presentation of auditory stimuli during sleep initiation did not negatively impact sleep onset. Significance. This study demonstrates the feasibility of closed-loop real-time tracking and neuromodulation of a range of sleep-related oscillations using a wearable EEG device. Preliminary results suggest that this approach could be used to deliver non-invasive neuromodulation across all phases of sleep.


Subject(s)
Electroencephalography , Slow-Wave Sleep , Electroencephalography/methods , Sleep/physiology , Slow-Wave Sleep/physiology , Sleep Stages/physiology , Acoustic Stimulation/methods
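The core operation the two studies above rely on, estimating the instantaneous phase of an oscillation from the analytic signal, can be sketched with the standard frequency-domain Hilbert construction. This is a simplified stand-in: the ecHT algorithm named in the abstracts additionally applies a causal bandpass correction for real-time endpoint accuracy, which is omitted here, and the 10 Hz test signal is synthetic:

```python
import numpy as np

def analytic_signal(x):
    # Frequency-domain construction of the analytic signal (the
    # standard Hilbert-transform approach; the ecHT additionally
    # applies a causal bandpass filter before the inverse FFT to
    # correct end-point distortion -- omitted in this sketch).
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

fs = 500.0                                # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
alpha = np.sin(2 * np.pi * 10.0 * t)      # idealized 10 Hz "alpha" signal

phase = np.angle(analytic_signal(alpha))  # instantaneous phase per sample

# For sin(w*t) the true instantaneous phase is w*t - pi/2 (wrapped);
# away from the window edges the estimate matches it closely.
i = len(t) // 2
expected = (2 * np.pi * 10.0 * t[i] - np.pi / 2 + np.pi) % (2 * np.pi) - np.pi
print(round(float(abs(phase[i] - expected)), 3))  # prints 0.0
```

In a closed-loop system the last sample's phase estimate, plus the known output latency, determines when to trigger the next acoustic pulse; the endpoint correction matters precisely because the plain Hilbert transform is least accurate at that last sample.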
6.
Front Hum Neurosci ; 14: 91, 2020.
Article in English | MEDLINE | ID: mdl-32265675

ABSTRACT

Spatial selective attention greatly affects our processing of complex visual scenes, yet the way in which the brain selects relevant objects while suppressing irrelevant objects is still unclear. Evidence of these processes has been found using non-invasive electroencephalography (EEG). However, few studies have characterized these measures during attention to dynamic stimuli, and little is known regarding how these measures change with increased scene complexity. Here, we compared attentional modulation of the EEG N1 and alpha power (oscillations between 8 and 14 Hz) across three visual selective attention tasks. The tasks differed in the number of irrelevant stimuli presented, but all required sustained attention to the orientation trajectory of a lateralized stimulus. In scenes with few irrelevant stimuli, top-down control of spatial attention is associated with strong modulation of both the N1 and alpha power across parietal-occipital channels. In scenes with many irrelevant stimuli in both hemifields, however, top-down control is no longer represented by strong modulation of alpha power, and N1 amplitudes are overall weaker. These results suggest that as a scene becomes more complex, requiring suppression in both hemifields, the neural signatures of top-down control degrade, likely reflecting some limitation in EEG to represent this suppression.

8.
Hear Res ; 349: 98-110, 2017 06.
Article in English | MEDLINE | ID: mdl-27815131

ABSTRACT

Recent anecdotal reports from VA audiology clinics as well as a few published studies have identified a sub-population of Service Members seeking treatment for problems communicating in everyday, noisy listening environments despite having normal to near-normal hearing thresholds. Because of their increased risk of exposure to dangerous levels of prolonged noise and transient explosive blast events, communication problems in these soldiers could be due to either hearing loss (traditional or "hidden") in the auditory sensory periphery or from blast-induced injury to cortical networks associated with attention. We found that out of the 14 blast-exposed Service Members recruited for this study, 12 had hearing thresholds in the normal to near-normal range. A majority of these participants reported having problems specifically related to failures with selective attention. Envelope following responses (EFRs) measuring neural coding fidelity of the auditory brainstem to suprathreshold sounds were similar between blast-exposed and non-blast controls. Blast-exposed subjects performed substantially worse than non-blast controls in an auditory selective attention task in which listeners classified the melodic contour (rising, falling, or "zig-zagging") of one of three simultaneous, competing tone sequences. Salient pitch and spatial differences made for easy segregation of the three concurrent melodies. Poor performance in the blast-exposed subjects was associated with weaker evoked response potentials (ERPs) in frontal EEG channels, as well as a failure of attention to enhance the neural responses evoked by a sequence when it was the target compared to when it was a distractor. These results suggest that communication problems in these listeners cannot be explained by compromised sensory representations in the auditory periphery, but rather point to lingering blast-induced damage to cortical networks implicated in the control of attention. 
Because all study participants also suffered from post-traumatic stress disorder (PTSD), follow-up studies are required to tease apart the contributions of PTSD and blast-induced injury to cognitive performance.


Subject(s)
Auditory Perception , Blast Injuries/psychology , Cognition , Explosions , Noise-Induced Hearing Loss/psychology , Occupational Noise/adverse effects , Occupational Exposure/adverse effects , Occupational Injuries/psychology , Persons With Hearing Impairments/psychology , Veterans/psychology , Acoustic Stimulation , Adult , Auditory Pathways/physiopathology , Auditory Threshold , Blast Injuries/diagnosis , Blast Injuries/etiology , Blast Injuries/physiopathology , Case-Control Studies , Cues , Electroencephalography , Hearing , Noise-Induced Hearing Loss/diagnosis , Noise-Induced Hearing Loss/etiology , Noise-Induced Hearing Loss/physiopathology , Humans , Male , Middle Aged , Occupational Injuries/diagnosis , Occupational Injuries/etiology , Occupational Injuries/physiopathology , Physiological Pattern Recognition , Pitch Perception , Speech Perception , Young Adult
9.
Front Hum Neurosci ; 8: 988, 2014.
Article in English | MEDLINE | ID: mdl-25538607

ABSTRACT

Music perception builds on expectancy in harmony, melody, and rhythm. Neural responses to the violations of such expectations are observed in event-related potentials (ERPs) measured using electroencephalography. Most previous ERP studies demonstrating sensitivity to musical violations used stimuli that were temporally regular and musically structured, with less-frequent deviant events that differed from a specific expectation in some feature such as pitch, harmony, or rhythm. Here, we asked whether expectancies about the Western musical scale are strong enough to elicit ERP deviance components. Specifically, we explored whether pitches inconsistent with an established scale context elicit deviant components even though equally rare pitches that fit into the established context do not, and even when their timing is unpredictable. We used Markov chains to create temporally irregular pseudo-random sequences of notes chosen from one of two diatonic scales. The Markov pitch-transition probabilities resulted in sequences that favored notes within the scale, but that lacked clear melodic, harmonic, or rhythmic structure. At the random positions, the sequence contained probe tones that were either within the established scale or out of key. Our subjects ignored the note sequences, watching a self-selected silent movie with subtitles. Compared to the in-key probes, the out-of-key probes elicited a significantly larger P2 ERP component. Results show that random note sequences establish expectations of the "first-order" statistical property of musical key, even in listeners not actively monitoring the sequences.
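The stimulus-generation idea described above, a first-order Markov chain over pitch whose transition probabilities favor in-key notes, can be sketched as follows. The pitch sets, probabilities, and sequence length are illustrative assumptions, not the study's parameters (the rows of this toy transition matrix are identical, whereas the study's probabilities need not be):

```python
import numpy as np

rng = np.random.default_rng(0)

# One chromatic octave as MIDI pitches; the C-major subset plays
# the role of the established scale. (Illustrative values only.)
chromatic = list(range(60, 72))        # C4 .. B4
in_key = {60, 62, 64, 65, 67, 69, 71}  # C D E F G A B

def transition_matrix(p_out=0.05):
    # First-order Markov chain over pitch: from any current note,
    # most probability mass lands on in-key pitches and a little on
    # out-of-key "probe" tones.
    n = len(chromatic)
    T = np.zeros((n, n))
    for i in range(n):
        for j, pitch in enumerate(chromatic):
            T[i, j] = ((1 - p_out) / len(in_key)
                       if pitch in in_key
                       else p_out / (n - len(in_key)))
    return T

def generate(n_notes, T):
    idx = [0]  # start on C
    for _ in range(n_notes - 1):
        # Next pitch drawn from the row for the current pitch
        idx.append(rng.choice(len(chromatic), p=T[idx[-1]]))
    return [chromatic[k] for k in idx]

seq = generate(200, transition_matrix())
n_in_key = sum(p in in_key for p in seq)
print(n_in_key > 150)  # prints True: the chain strongly favors in-key notes
```

Because the sequence is pseudo-random and temporally irregular, any ERP difference between in-key and out-of-key probes can only reflect the statistical key context, not melodic or rhythmic predictability.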
