Results 1 - 20 of 22
1.
Psychon Bull Rev ; 31(1): 137-147, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37430179

ABSTRACT

The auditory world is often cacophonous, with some sounds capturing attention and distracting us from our goals. Despite the universality of this experience, many questions remain about how and why sound captures attention, how rapidly behavior is disrupted, and how long this interference lasts. Here, we use a novel measure of behavioral disruption to test predictions made by models of auditory salience. Models predict that goal-directed behavior is disrupted immediately after points in time that feature a high degree of spectrotemporal change. We find that behavioral disruption is precisely time-locked to the onset of distracting sound events: Participants who tap to a metronome temporarily increase their tapping speed 750 ms after the onset of distractors. Moreover, this response is greater for more salient sounds (larger amplitude) and sound changes (greater pitch shift). We find that the time course of behavioral disruption is highly similar after acoustically disparate sound events: Both sound onsets and pitch shifts of continuous background sounds speed responses at 750 ms, with these effects dying out by 1,750 ms. These temporal distortions can be observed using only data from the first trial across participants. A potential mechanism underlying these results is that arousal increases after distracting sound events, leading to an expansion of time perception, and causing participants to misjudge when their next movement should begin.


Subject(s)
Time Perception , Humans , Acoustic Stimulation , Sound , Attention/physiology , Auditory Perception/physiology , Pitch Perception/physiology
2.
Cognition ; 244: 105696, 2024 03.
Article in English | MEDLINE | ID: mdl-38160651

ABSTRACT

From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.


Subject(s)
Music , Speech Perception , Humans , Auditory Perception , Pitch Perception , Cognition , Acoustic Stimulation
3.
Cereb Cortex ; 33(9): 5704-5716, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36520483

ABSTRACT

Quantitative magnetic resonance imaging (qMRI) allows extraction of reproducible and robust parameter maps. However, the connection to underlying biological substrates remains murky, especially in the complex, densely packed cortex. We investigated associations in human neocortex between qMRI parameters and neocortical cell types by comparing the spatial distribution of the qMRI parameters longitudinal relaxation rate (R1), effective transverse relaxation rate (R2*), and magnetization transfer saturation (MTsat) to gene expression from the Allen Human Brain Atlas, then combining this with lists of genes enriched in specific cell types found in the human brain. As qMRI parameters are magnetic field strength-dependent, the analysis was performed on MRI data at 3T and 7T. All qMRI parameters significantly covaried with genes enriched in GABA- and glutamatergic neurons, i.e. they were associated with cytoarchitecture. The qMRI parameters also significantly covaried with the distribution of genes enriched in astrocytes (R2* at 3T, R1 at 7T), endothelial cells (R1 and MTsat at 3T), microglia (R1 and MTsat at 3T, R1 at 7T), and oligodendrocytes and oligodendrocyte precursor cells (R1 at 7T). These results advance the potential use of qMRI parameters as biomarkers for specific cell types.
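The "covariation" reported here is a spatial correlation: a qMRI parameter map and a gene-set expression map are each sampled at the same set of cortical regions, then rank-correlated across regions. A minimal NumPy sketch under stated assumptions (the data are synthetic and all variable names are hypothetical; this is not the paper's analysis pipeline):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction; values assumed unique)."""
    rx = x.argsort().argsort().astype(float)  # ranks of x
    ry = y.argsort().argsort().astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Hypothetical data: one qMRI parameter (e.g. R1) sampled at 100 cortical
# regions, and mean expression of a cell-type-enriched gene set sampled at
# the same 100 regions.
rng = np.random.default_rng(1)
n_regions = 100
r1_map = rng.normal(size=n_regions)
gene_set_expr = 0.6 * r1_map + rng.normal(scale=0.8, size=n_regions)

rho = spearman(r1_map, gene_set_expr)
print(rho)  # positive: the parameter map covaries with the gene set
```

In practice such analyses also need a spatially-aware null model (random gene sets or spatially autocorrelated surrogates) to assess significance, since neighbouring regions are not independent samples.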


Subject(s)
Neocortex , Humans , Endothelial Cells , Magnetic Resonance Imaging/methods , Brain/pathology , Brain Mapping/methods
4.
Neuroimage ; 252: 119024, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35231629

ABSTRACT

To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with rate, while the passive response did not. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.


Subject(s)
Auditory Cortex , Electroencephalography , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Caffeine , Electroencephalography/methods , Humans , Sound
5.
Neuroimage ; 245: 118764, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34848301

ABSTRACT

Prior studies have shown that the left posterior superior temporal sulcus (pSTS) and left temporo-parietal junction (TPJ) both contribute to phonological short-term memory, speech perception and speech production. Here, by conducting a within-subjects multi-factorial fMRI study, we dissociate the response profiles of these regions and a third region - the anterior ascending terminal branch of the left superior temporal sulcus (atSTS), which lies dorsal to pSTS and ventral to TPJ. First, we show that each region was more activated by (i) 1-back matching on visually presented verbal stimuli (words or pseudowords) compared to 1-back matching on visually presented non-verbal stimuli (pictures of objects or non-objects), and (ii) overt speech production than 1-back matching, across 8 types of stimuli (visually presented words, pseudowords, objects and non-objects and aurally presented words, pseudowords, object sounds and meaningless hums). The response properties of the three regions dissociated within the auditory modality. In left TPJ, activation was higher for auditory stimuli that were non-verbal (sounds of objects or meaningless hums) compared to verbal (words and pseudowords), irrespective of task (speech production or 1-back matching). In left pSTS, activation was higher for non-semantic stimuli (pseudowords and hums) than semantic stimuli (words and object sounds) on the dorsal pSTS surface (dpSTS), irrespective of task. In left atSTS, activation was not sensitive to either semantic or verbal content. The contrasting response properties of left TPJ, dpSTS and atSTS were cross-validated in an independent sample of 59 participants, using region-by-condition interactions. We also show that each region participates in non-overlapping networks of frontal, parietal and cerebellar regions.
Our results challenge previous claims about functional specialisation in the left posterior superior temporal lobe and motivate future studies to determine the timing and directionality of information flow in the brain networks involved in speech perception and production.


Subject(s)
Brain Mapping , Cerebellum/physiology , Cerebral Cortex/physiology , Nerve Net/physiology , Psycholinguistics , Speech Perception/physiology , Speech/physiology , Temporal Lobe/physiology , Adult , Cerebellum/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Nerve Net/diagnostic imaging , Reading , Temporal Lobe/diagnostic imaging , Young Adult
6.
Neuroimage ; 244: 118544, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34492294

ABSTRACT

Some theories of auditory categorization suggest that auditory dimensions that are strongly diagnostic for particular categories - for instance voice onset time or fundamental frequency in the case of some spoken consonants - attract attention. However, prior cognitive neuroscience research on auditory selective attention has largely focused on attention to simple auditory objects or streams, and so little is known about the neural mechanisms that underpin dimension-selective attention, or how the relative salience of variations along these dimensions might modulate neural signatures of attention. Here we investigate whether dimensional salience and dimension-selective attention modulate the cortical tracking of acoustic dimensions. In two experiments, participants listened to tone sequences varying in pitch and spectral peak frequency; these two dimensions changed at different rates. Inter-trial phase coherence (ITPC) and amplitude of the EEG signal at the frequencies tagged to pitch and spectral changes provided a measure of cortical tracking of these dimensions. In Experiment 1, tone sequences varied in the size of the pitch intervals, while the size of spectral peak intervals remained constant. Cortical tracking of pitch changes was greater for sequences with larger compared to smaller pitch intervals, with no difference in cortical tracking of spectral peak changes. In Experiment 2, participants selectively attended to either pitch or spectral peak. Cortical tracking was stronger in response to the attended compared to unattended dimension for both pitch and spectral peak. These findings suggest that attention can enhance the cortical tracking of specific acoustic dimensions rather than simply enhancing tracking of the auditory object as a whole.
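Inter-trial phase coherence (ITPC), the measure used in this record, has a standard definition: at a given frequency, take each trial's phase, convert it to a unit-length complex phasor, average the phasors across trials, and report the magnitude of that average (1 = identical phase on every trial, near 0 = random phases). A minimal NumPy sketch (not the study's analysis code; the synthetic signals and variable names are illustrative):

```python
import numpy as np

def itpc(trials, srate, freq):
    """Inter-trial phase coherence at one frequency.

    trials: array of shape (n_trials, n_samples), one EEG channel.
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n_trials, n_samples = trials.shape
    spectrum = np.fft.rfft(trials, axis=1)               # per-trial complex spectra
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / srate)
    bin_idx = np.argmin(np.abs(freqs - freq))            # nearest FFT bin to target
    phasors = spectrum[:, bin_idx] / np.abs(spectrum[:, bin_idx])  # unit phasors
    return np.abs(phasors.mean())                        # length of the mean phasor

# Example: trials phase-locked at 4 Hz give ITPC of 1;
# trials with random phase offsets give a much smaller value.
srate, n_samples = 256, 512
t = np.arange(n_samples) / srate
locked = np.array([np.sin(2 * np.pi * 4 * t) for _ in range(50)])
rng = np.random.default_rng(0)
jittered = np.array([np.sin(2 * np.pi * 4 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)])
print(itpc(locked, srate, 4.0))    # 1.0
print(itpc(jittered, srate, 4.0))  # much smaller
```

Frequency tagging, as in these experiments, works by changing each acoustic dimension at its own fixed rate so that cortical tracking of that dimension shows up as elevated ITPC and amplitude at the corresponding FFT bin.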


Subject(s)
Acoustics , Attention/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Adult , Cognitive Neuroscience , Electroencephalography , Female , Humans , Male , Middle Aged , Pitch Perception/physiology , Voice
7.
Neuroimage ; 224: 117396, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32979522

ABSTRACT

To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Auditory Cortex/physiopathology , Auditory Perception/physiology , Sound , Acoustic Stimulation/methods , Auditory Cortex/physiology , Child , Electroencephalography/methods , Female , Humans , Male
8.
Nat Neurosci ; 23(5): 611-614, 2020 05.
Article in English | MEDLINE | ID: mdl-32313267

ABSTRACT

The human arcuate fasciculus pathway is crucial for language, interconnecting posterior temporal and inferior frontal areas. Whether a monkey homolog exists is controversial and the nature of human-specific specialization unclear. Using monkey, ape and human auditory functional fields and diffusion-weighted MRI, we identified homologous pathways originating from the auditory cortex. This discovery establishes a primate auditory prototype for the arcuate fasciculus, reveals an earlier phylogenetic origin and illuminates its remarkable transformation.


Subject(s)
Auditory Cortex , Auditory Pathways , Biological Evolution , Language , Animals , Diffusion Tensor Imaging , Humans , Macaca , Pan troglodytes
9.
Neuroimage ; 213: 116717, 2020 06.
Article in English | MEDLINE | ID: mdl-32165265

ABSTRACT

How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and enhancements of the effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Individuality , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Young Adult
10.
J Exp Psychol Learn Mem Cogn ; 46(5): 968-979, 2020 May.
Article in English | MEDLINE | ID: mdl-31580123

ABSTRACT

Speech is more difficult to understand when it is presented concurrently with a distractor speech stream. One source of this difficulty is that competing speech can act as an attentional lure, requiring listeners to exert attentional control to ensure that attention does not drift away from the target. Stronger attentional control may enable listeners to more successfully ignore distracting speech, and so individual differences in selective attention may be one factor driving the ability to perceive speech in complex environments. However, the lack of a paradigm for measuring nonverbal sustained selective attention to sound has made this hypothesis difficult to test. Here we find that individuals who are better able to attend to a stream of tones and respond to occasional repeated sequences while ignoring a distractor tone stream are also better able to perceive speech masked by a single distractor talker. We also find that participants who have undergone more musical training show better performance on both verbal and nonverbal selective attention tasks, and this musician advantage is greater in older participants. This suggests that one source of a potential musician advantage for speech perception in complex environments may be experience or skill in directing and maintaining attention to a single auditory object. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention , Learning , Music , Speech Perception , Adolescent , Adult , Age Factors , Aged , Aging/psychology , Cognition , Executive Function , Female , Humans , Male , Middle Aged , Periodicity , Pitch Perception , Professional Competence , Psychological Tests , Young Adult