Results 1 - 20 of 161
1.
Hum Brain Mapp ; 45(3): e26627, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38376166

ABSTRACT

The hippocampus and parahippocampal gyrus have been implicated as part of a tinnitus network by a number of studies. These structures are usually considered in the context of a "limbic system," a concept typically invoked to explain the emotional response to tinnitus. Despite this common framing, it is not apparent from current literature that this is necessarily the main functional role of these structures in persistent tinnitus. Here, we highlight a different role that encompasses their most commonly implicated functional position within the brain: that is, as a memory system. We consider tinnitus as an auditory object that is held in memory, which may be made persistent by associated activity from the hippocampus and parahippocampal gyrus. Evidence from animal and human studies implicating these structures in tinnitus is reviewed and used as an anchor for this hypothesis. We highlight the potential for the hippocampus/parahippocampal gyrus to facilitate maintenance of the memory of the tinnitus percept via communication with auditory cortex, rather than (or in addition to) mediating emotional responses to this percept.


Subject(s)
Auditory Cortex , Tinnitus , Animals , Humans , Tinnitus/diagnostic imaging , Hippocampus/diagnostic imaging , Parahippocampal Gyrus/diagnostic imaging , Limbic System
3.
J Assoc Res Otolaryngol ; 24(6): 607-617, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38062284

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlates with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise. The current study examined whether the auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN: Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on the detection of a figure by grouping multiple fixed frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS: No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) contributed significantly to the multiple linear regression model, indicating that the auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance that was not explained by spectral and temporal resolution. CONCLUSION: Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
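The analysis approach described in this abstract, multiple linear regression predicting a sentence-in-noise score from three independent predictors, can be illustrated with a minimal sketch on synthetic data. The variable names, effect sizes, and noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # cohort size matching the study; the data themselves are synthetic

# Hypothetical standardized predictors: spectral resolution, temporal
# resolution, and a figure-ground (grouping) score.
spectral = rng.normal(size=n)
temporal = rng.normal(size=n)
figure_ground = rng.normal(size=n)

# Synthetic outcome: independent contributions from all three predictors
# plus residual noise (weights are made-up for illustration).
sin_score = (0.4 * spectral + 0.3 * temporal + 0.5 * figure_ground
             + 0.3 * rng.normal(size=n))

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(n), spectral, temporal, figure_ground])
beta, *_ = np.linalg.lstsq(X, sin_score, rcond=None)

# Variance explained by the full model (R^2).
resid = sin_score - X @ beta
r2 = 1 - resid.var() / sin_score.var()
print(beta, r2)
```

Because the three predictors are generated independently, this sketch also reproduces the no-collinearity condition reported above; in a real analysis one would additionally inspect variance inflation factors before interpreting the coefficients.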


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Speech , Noise
4.
Nat Commun ; 14(1): 6264, 2023 10 07.
Article in English | MEDLINE | ID: mdl-37805497

ABSTRACT

The human brain extracts meaning using an extensive neural system for semantic knowledge. Whether broadly distributed systems depend on or can compensate after losing a highly interconnected hub is controversial. We report intracranial recordings from two patients during a speech prediction task, obtained minutes before and after neurosurgical treatment requiring disconnection of the left anterior temporal lobe (ATL), a candidate semantic knowledge hub. Informed by modern diaschisis and predictive coding frameworks, we tested hypotheses ranging from solely neural network disruption to complete compensation by the indirectly affected language-related and speech-processing sites. Immediately after ATL disconnection, we observed neurophysiological alterations in the recorded frontal and auditory sites, providing direct evidence for the importance of the ATL as a semantic hub. We also obtained evidence for rapid, albeit incomplete, attempts at neural network compensation, with neural impact largely in the forms stipulated specifically by the predictive coding framework and, more generally, by the modern diaschisis framework. The overall results validate these frameworks and reveal an immediate impact and capability of the human brain to adjust after losing a brain hub.


Subject(s)
Diaschisis , Semantics , Humans , Brain Mapping/methods , Magnetic Resonance Imaging , Temporal Lobe/surgery , Temporal Lobe/physiology
5.
Ear Hear ; 44(5): 1107-1120, 2023.
Article in English | MEDLINE | ID: mdl-37144890

ABSTRACT

OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test: a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors as predictors of SiN performance. RESULTS: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California consonant test (which was conducted simultaneously with electroencephalography recording) and the consonant-nucleus-consonant (conducted offline). These correlations held even after accounting for known predictors of performance including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, thereby revealing a richer profile of an individual's hearing performance than shown by psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Humans , Speech , Individuality , Noise , Speech Perception/physiology
6.
Cereb Cortex ; 33(14): 9105-9116, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37246155

ABSTRACT

Pitch is a fundamental percept mediated by the auditory system, requiring the abstraction of stimulus properties related to the spectro-temporal structure of sound. Despite its importance, there is still debate as to the precise areas responsible for its encoding, which may be due to species differences or differences in the recording measures and choices of stimuli used in previous studies. Moreover, it was unknown whether the human brain contains pitch neurons and how distributed such neurons might be. Here, we present the first study to measure multiunit neural activity in response to pitch stimuli in the auditory cortex of intracranially implanted humans. The stimulus sets were regular-interval noise, with a pitch strength related to the temporal regularity and a pitch value determined by the repetition rate, and harmonic complexes. Specifically, we demonstrate reliable responses to these different pitch-inducing paradigms that are distributed throughout Heschl's gyrus, rather than being localized to a particular region, and this finding was evident regardless of the stimulus presented. These data provide a bridge across animal and human studies and aid our understanding of the processing of a critical percept associated with acoustic stimuli.


Subject(s)
Auditory Cortex , Animals , Humans , Auditory Cortex/physiology , Pitch Perception/physiology , Acoustic Stimulation , Brain Mapping , Evoked Potentials, Auditory/physiology , Auditory Perception
7.
Cell Rep ; 42(5): 112422, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37099422

ABSTRACT

Humans use predictions to improve speech perception, especially in noisy environments. Here we use 7-T functional MRI (fMRI) to decode brain representations of written phonological predictions and degraded speech signals in healthy humans and people with selective frontal neurodegeneration (non-fluent variant primary progressive aphasia [nfvPPA]). Multivariate analyses of item-specific patterns of neural activation indicate dissimilar representations of verified and violated predictions in left inferior frontal gyrus, suggestive of processing by distinct neural populations. In contrast, precentral gyrus represents a combination of phonological information and weighted prediction error. In the presence of intact temporal cortex, frontal neurodegeneration results in inflexible predictions. This manifests neurally as a failure to suppress incorrect predictions in anterior superior temporal gyrus and reduced stability of phonological representations in precentral gyrus. We propose a tripartite speech perception network in which inferior frontal gyrus supports prediction reconciliation in echoic memory, and precentral gyrus invokes a motor model to instantiate and refine perceptual predictions for speech.


Subject(s)
Motor Cortex , Speech , Humans , Speech/physiology , Brain Mapping , Frontal Lobe/physiology , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods
8.
Front Neurosci ; 17: 1077344, 2023.
Article in English | MEDLINE | ID: mdl-36824211

ABSTRACT

Problems with speech-in-noise (SiN) perception are extremely common in hearing loss. Clinical tests have generally been based on direct measurement of SiN performance. My group has developed an approach to SiN based on the auditory cognitive mechanisms that subserve it, which might be relevant to speakers of any language. I describe how well these mechanisms predict SiN perception, the brain systems that support them, and tests of auditory cognition based on them that might be used to characterise SiN deficits in the clinic.

9.
Schizophr Bull ; 49(12 Suppl 2): S33-S40, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36840541

ABSTRACT

BACKGROUND AND HYPOTHESIS: Patients with hearing impairment (HI) may experience hearing sounds without external sources, ranging from random meaningless noises (tinnitus) to music and other auditory hallucinations (AHs) with meaningful qualities. To ensure appropriate assessment and management, clinicians need to be aware of these phenomena. However, sensory impairment studies have shown that such clinical awareness is low. STUDY DESIGN: An online survey was conducted investigating awareness of AHs among clinicians and their opinions about these hallucinations. STUDY RESULTS: In total, 125 clinicians (68.8% audiologists; 18.4% Ear-Nose-Throat [ENT] specialists) across 10 countries participated in the survey. The majority (96.8%) was at least slightly aware of AHs in HI. About 69.6% of participants reported encountering patients with AHs less than once every 6 months in their clinic. Awareness was significantly associated with clinicians' belief that patients feel anxious about their hallucinations (β = .018, t(118) = 2.47, P < .01), their belief that clinicians should be more aware of these hallucinations (β = .018, t(118) = 2.60, P < .01), and with confidence of clinicians in their skills to assess them (β = .017, t(118) = 2.63, P < .01). Clinicians felt underequipped to treat AHs (Median = 31; U = 1838; P(FDR-adjusted) < .01). CONCLUSIONS: Awareness of AHs among the surveyed clinicians was high. Yet, the low frequency of encounters with hallucinating patients and their belief in music as the most commonly perceived sound suggest unreported cases. Clinicians in this study expressed a lack of confidence regarding the assessment and treatment of AHs and welcome more information.


Subject(s)
Disabled Persons , Hearing Loss , Humans , Hallucinations , Emotions , Anxiety
10.
Neuroscientist ; : 10738584221126090, 2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36169300

ABSTRACT

Sensory loss in olfaction, vision, and hearing is a risk factor for dementia, but the reasons for this are unclear. This review presents the neurobiological evidence linking each sensory modality to specific dementias and explores the potential mechanisms underlying this. Olfactory deficits can be linked to direct neuropathologic changes in the olfactory system due to Alzheimer disease and Parkinson disease, and may be a marker of disease severity. Visual deficits potentially increase dementia risk in a vulnerable individual by reducing resilience to dementia. Hearing deficits may indicate a susceptibility to Alzheimer disease through a variety of mechanisms. More generally, sensory impairment could be related to factors associated with resilience against dementia. Further research is needed to tease out the specific and synergistic effects of sensory impairment. Studying sensory loss in relation to neurodegenerative biomarkers is necessary to clarify the mechanisms involved. This could produce new monitoring and management strategies for people at risk of dementia.

11.
Elife ; 11, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36164823

ABSTRACT

A new imaging method reveals previously undetected structural differences that may contribute to developmental language disorder.


Subject(s)
Brain Mapping , Brain , Brain Mapping/methods , Magnetic Resonance Imaging/methods
12.
J Acoust Soc Am ; 152(1): 31, 2022 07.
Article in English | MEDLINE | ID: mdl-35931555

ABSTRACT

Pitch discrimination is better for complex tones than pure tones, but how pitch discrimination differs between natural and artificial sounds is not fully understood. This study compared pitch discrimination thresholds for flat-spectrum harmonic complex tones with those for natural sounds played by musical instruments of three different timbres (violin, trumpet, and flute). To investigate whether natural familiarity with sounds of particular timbres affects pitch discrimination thresholds, this study recruited non-musicians and musicians who were trained on one of the three instruments. We found that flautists and trumpeters could discriminate smaller differences in pitch for artificial flat-spectrum tones, despite their unfamiliar timbre, than for sounds played by musical instruments, which are regularly heard in everyday life (particularly by musicians who play those instruments). Furthermore, thresholds were no better for the instrument a musician was trained to play than for other instruments, suggesting that even extensive experience listening to and producing sounds of particular timbres does not reliably improve pitch discrimination thresholds for those timbres. The results show that timbre familiarity provides minimal improvements to auditory acuity, and physical acoustics (e.g., the presence of equal-amplitude harmonics) determine pitch discrimination thresholds more than does experience with natural sounds and timbre-specific training.
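The flat-spectrum harmonic complex tones used in this study have a simple definition: equal-amplitude sinusoids at integer multiples of a fundamental frequency. A minimal synthesis sketch (sampling rate, duration, and harmonic count are illustrative choices, not the study's stimulus parameters):

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, dur, fs=44100):
    """Flat-spectrum harmonic complex: equal-amplitude harmonics of f0,
    summed in sine phase and normalized to stay within [-1, 1]."""
    t = np.arange(int(dur * fs)) / fs
    tone = sum(np.sin(2 * np.pi * f0 * k * t)
               for k in range(1, n_harmonics + 1))
    return tone / n_harmonics

# A discrimination trial pairs two such tones differing in f0, e.g. by
# one semitone (a far larger step than typical thresholds).
a = harmonic_complex(220.0, 10, 0.5)
b = harmonic_complex(220.0 * 2 ** (1 / 12), 10, 0.5)
```

In an adaptive threshold procedure, the f0 difference between the two intervals would be reduced or enlarged trial by trial depending on the listener's responses.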


Subject(s)
Music , Pitch Discrimination , Auditory Perception , Discrimination, Psychological , Pitch Perception , Recognition, Psychology
13.
Prog Neurobiol ; 218: 102326, 2022 11.
Article in English | MEDLINE | ID: mdl-35870677

ABSTRACT

The hippocampus has a well-established role in spatial and episodic memory but a broader function has been proposed including aspects of perception and relational processing. Neural bases of sound analysis have been described in the pathway to auditory cortex, but wider networks supporting auditory cognition are still being established. We review what is known about the role of the hippocampus in processing auditory information, and how the hippocampus itself is shaped by sound. In examining imaging, recording, and lesion studies in species from rodents to humans, we uncover a hierarchy of hippocampal responses to sound including during passive exposure, active listening, and the learning of associations between sounds and other stimuli. We describe how the hippocampus' connectivity and computational architecture allow it to track and manipulate auditory information - whether in the form of speech, music, or environmental, emotional, or phantom sounds. Functional and structural correlates of auditory experience are also identified. The extent of auditory-hippocampal interactions is consistent with the view that the hippocampus makes broad contributions to perception and cognition, beyond spatial and episodic memory. More deeply understanding these interactions may unlock applications including entraining hippocampal rhythms to support cognition, and intervening in links between hearing loss and dementia.


Subject(s)
Auditory Cortex , Hippocampus , Auditory Perception/physiology , Cognition , Hearing , Hippocampus/physiology , Humans , Learning/physiology
14.
Hear Res ; 422: 108524, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35691269

ABSTRACT

Speech-in-noise difficulty is commonly reported among hearing-impaired individuals. Recent work has established generic behavioural measures of sound segregation and grouping that are related to speech-in-noise processing but do not require language. In this study, we assessed potential clinical electroencephalographic (EEG) measures of central auditory grouping (stochastic figure-ground test) and speech-in-noise perception (speech-in-babble test) with and without relevant tasks. Auditory targets were presented within background noise (16 talker-babble or randomly generated pure-tones) in 50% of the trials and composed either a figure (pure-tone frequency chords repeating over time) or speech (English names), while the rest of the trials only had background noise. EEG was recorded while participants were presented with the target stimuli (figure or speech) under different attentional states (relevant task or visual-distractor task). EEG time-domain analysis demonstrated enhanced negative responses during detection of both types of auditory targets within the time window 150-350 ms but only figure detection produced significantly enhanced responses under the distracted condition. Further single-channel analysis showed that simple vertex-to-mastoid acquisition defines a very similar response to more complex arrays based on multiple channels. Evoked-potentials to the generic figure-ground task therefore represent a potential clinical measure of grouping relevant to real-world listening that can be assessed irrespective of language knowledge and expertise even without a relevant task.
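The stochastic figure-ground stimulus described above can be sketched in a few lines: each trial is a sequence of chords of randomly drawn pure tones, and on figure trials a fixed subset of frequencies repeats across chords, forming a temporally coherent target. All parameter values below (chord count, durations, frequency range) are illustrative assumptions rather than the study's settings:

```python
import numpy as np

def sfg_stimulus(n_chords=40, chord_dur=0.05, fs=16000,
                 n_bg=10, n_fig=4, figure=True, seed=0):
    """Stochastic figure-ground: random pure-tone chords; when figure=True,
    n_fig frequencies repeat across all chords, creating a coherent 'figure'
    that can only be detected by grouping across frequency and time."""
    rng = np.random.default_rng(seed)
    pool = np.geomspace(200.0, 7000.0, 60)        # candidate frequencies
    fig_freqs = rng.choice(pool, n_fig, replace=False)
    t = np.arange(int(chord_dur * fs)) / fs
    chords = []
    for _ in range(n_chords):
        f = rng.choice(pool, n_bg, replace=False)  # fresh random background
        if figure:
            f = np.concatenate([f, fig_freqs])     # repeating figure components
        chords.append(sum(np.sin(2 * np.pi * fi * t) for fi in f) / len(f))
    return np.concatenate(chords)

fig_trial = sfg_stimulus(figure=True)
noise_trial = sfg_stimulus(figure=False)
```

Because the figure differs from the background only in its cross-chord coherence, not in level or spectrum, detection cannot rely on peripheral cues alone, which is what makes the task a probe of central grouping.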


Subject(s)
Noise , Speech Perception , Auditory Perception , Electroencephalography , Hearing , Humans , Noise/adverse effects , Speech Perception/physiology
15.
Sci Rep ; 12(1): 3517, 2022 03 03.
Article in English | MEDLINE | ID: mdl-35241747

ABSTRACT

Previous studies have found conflicting results between individual measures related to music and fundamental aspects of auditory perception and cognition. The results have been difficult to compare because of different musical measures being used and lack of uniformity in the auditory perceptual and cognitive measures. In this study we used a general construct of musicianship, musical sophistication, that can be applied to populations with widely different backgrounds. We investigated the relationship between musical sophistication and measures of perception and working memory for sound by using a task suitable to measure both. We related scores from the Goldsmiths Musical Sophistication Index to performance on tests of perception and working memory for two acoustic features-frequency and amplitude modulation. The data show that musical sophistication scores are best related to working memory for frequency in an analysis that accounts for age and non-verbal intelligence. Musical sophistication was not significantly associated with working memory for amplitude modulation rate or with the perception of either acoustic feature. The work supports a specific association between musical sophistication and working memory for sound frequency.


Subject(s)
Memory, Short-Term , Music , Acoustic Stimulation , Auditory Perception , Cognition , Music/psychology
16.
Neuroimage ; 249: 118879, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34999204

ABSTRACT

We recorded neural responses in human participants to three types of pitch-evoking regular stimuli at rates below and above the lower limit of pitch using magnetoencephalography (MEG). These bandpass filtered (1-4 kHz) stimuli were harmonic complex tones (HC), click trains (CT), and regular interval noise (RIN). Trials consisted of noise-regular-noise (NRN) or regular-noise-regular (RNR) segments in which the repetition rate (or fundamental frequency F0) was either above (250 Hz) or below (20 Hz) the lower limit of pitch. Neural activation was estimated and compared at the sensor and source levels. The pitch-relevant regular stimuli (F0 = 250 Hz) were all associated with marked evoked responses at around 140 ms after noise-to-regular transitions at both sensor and source levels. In particular, greater evoked responses to pitch-relevant stimuli than pitch-irrelevant stimuli (F0 = 20 Hz) were localized along Heschl's sulcus around 140 ms. The regularity-onset responses for RIN were much weaker than for the other types of regular stimuli (HC, CT). This effect was localized over planum temporale, planum polare, and lateral Heschl's gyrus. Importantly, the effect of pitch did not interact with the stimulus type. That is, we did not find evidence to support different responses for different types of regular stimuli from the spatiotemporal cluster of the pitch effect (∼140 ms). The current data demonstrate cortical sensitivity to temporal regularity relevant to pitch that is consistently present across different pitch-relevant stimuli in Heschl's sulcus between Heschl's gyrus and planum temporale, both of which have been identified as a "pitch center" based on different modalities.
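Regular interval noise, the stimulus whose regularity onsets are contrasted here, is classically generated by repeatedly delaying a noise sample and adding it back to itself, which introduces temporal regularity (and a pitch near fs/delay) without changing the long-term spectrum much. A minimal sketch of this delay-and-add construction (iteration count and duration are illustrative, not the study's parameters):

```python
import numpy as np

def regular_interval_noise(f0, n_iter, dur, fs=44100, seed=0):
    """Delay-and-add RIN: each iteration adds a copy of the signal delayed
    by fs/f0 samples; pitch value ~ f0, pitch strength grows with n_iter."""
    rng = np.random.default_rng(seed)
    delay = int(round(fs / f0))
    x = rng.normal(size=int(dur * fs))
    for _ in range(n_iter):
        x = x + np.concatenate([np.zeros(delay), x[:-delay]])
    return x / np.max(np.abs(x))

# Pitch-relevant rate used in the study (250 Hz, above the lower limit of pitch).
rin = regular_interval_noise(250.0, 8, 0.5)

# The induced regularity shows up as a large normalized autocorrelation
# at the delay lag, which is what temporal models of pitch pick up.
d = round(44100 / 250)
r = np.dot(rin[d:], rin[:-d]) / np.dot(rin, rin)
```

Swapping f0 to 20 Hz reproduces the pitch-irrelevant condition: the same regularity exists, but at an interval too long to evoke a pitch percept.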


Subject(s)
Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Magnetoencephalography , Pitch Perception/physiology , Time Perception/physiology , Adult , Female , Humans , Male , Young Adult
17.
Cereb Cortex ; 32(16): 3568-3580, 2022 08 03.
Article in English | MEDLINE | ID: mdl-34875029

ABSTRACT

Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.


Subject(s)
Auditory Cortex , Acoustic Stimulation/methods , Animals , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Humans , Macaca mulatta
18.
Neurobiol Lang (Camb) ; 3(4): 515-537, 2022.
Article in English | MEDLINE | ID: mdl-37215340

ABSTRACT

Recent mechanistic models argue for a key role of rhythm processing in both speech production and speech perception. Patients with the non-fluent variant (NFV) of primary progressive aphasia (PPA) with apraxia of speech (AOS) represent a specific study population in which this link can be examined. Previously, we observed impaired rhythm processing in NFV with AOS. We hypothesized that a shared neurocomputational mechanism structures auditory input (sound and speech) and output (speech production) in time, a "temporal scaffolding" mechanism. Since considerable white matter damage is observed in NFV, we test here whether white matter changes are related to impaired rhythm processing. Forty-seven participants performed a psychoacoustic test battery: 12 patients with NFV and AOS, 11 patients with the semantic variant of PPA, and 24 cognitively intact age- and education-matched controls. Deformation-based morphometry was used to test whether white matter volume correlated to rhythmic abilities. In 34 participants, we also obtained tract-based metrics of the left Aslant tract, which is typically damaged in patients with NFV. Nine out of 12 patients with NFV displayed impaired rhythmic processing. Left frontal white matter atrophy adjacent to the supplementary motor area (SMA) correlated with poorer rhythmic abilities. The structural integrity of the left Aslant tract also correlated with rhythmic abilities. A colocalized and perhaps shared white matter substrate adjacent to the SMA is associated with impaired rhythmic processing and motor speech impairment. Our results support the existence of a temporal scaffolding mechanism structuring perceptual input and speech output.

19.
Neurosci Biobehav Rev ; 131: 1288-1304, 2021 12.
Article in English | MEDLINE | ID: mdl-34687699

ABSTRACT

In this paper, we introduce a new generative model for an active inference account of preparatory and selective attention, in the context of a classic 'cocktail party' paradigm. In this setup, pairs of words are presented simultaneously to the left and right ears and an instructive spatial cue directs attention to the left or right. We use this generative model to test competing hypotheses about the way that human listeners direct preparatory and selective attention. We show that assigning low precision to words at attended (relative to unattended) locations can explain why a listener reports words from a competing sentence. Under this model, temporal changes in sensory precision were not needed to account for faster reaction times with longer cue-target intervals, but were necessary to explain ramping effects on event-related potentials (ERPs), resembling the contingent negative variation (CNV), during the preparatory interval. These simulations reveal that different processes are likely to underlie the improvement in reaction times and the ramping of ERPs that are associated with spatial cueing.


Subject(s)
Speech Perception , Cues , Electroencephalography , Evoked Potentials , Humans , Reaction Time
20.
Eur J Neurosci ; 54(9): 7274-7288, 2021 11.
Article in English | MEDLINE | ID: mdl-34549472

ABSTRACT

Auditory object analysis requires the fundamental perceptual process of detecting boundaries between auditory objects. However, the dynamics underlying the identification of discontinuities at object boundaries are not well understood. Here, we employed a synthetic stimulus composed of frequency-modulated ramps known as 'acoustic textures', where boundaries were created by changing the underlying spectrotemporal statistics. We collected magnetoencephalographic (MEG) data from human volunteers and observed a slow (<1 Hz) post-boundary drift in the neuromagnetic signal. The response evoking this drift signal was source localised close to Heschl's gyrus (HG) bilaterally, which is in agreement with a previous functional magnetic resonance imaging (fMRI) study that found HG to be involved in the detection of similar auditory object boundaries. Time-frequency analysis demonstrated suppression in alpha and beta bands that occurred after the drift signal.


Subject(s)
Auditory Cortex , Acoustic Stimulation , Brain Mapping , Evoked Potentials, Auditory , Humans , Magnetic Resonance Imaging , Magnetoencephalography