Results 1 - 20 of 58
1.
Proc Natl Acad Sci U S A ; 119(13): e2117000119, 2022 03 29.
Article in English | MEDLINE | ID: mdl-35312362

ABSTRACT

Significance: Syllables are important building blocks of speech. They occur at a rate between 4 and 8 Hz, corresponding to the theta frequency range of neural activity in the cerebral cortex. When listening to speech, the theta activity becomes aligned to the syllabic rhythm, presumably aiding in parsing the speech signal into distinct syllables. However, this neural activity can be influenced not only by sound but also by somatosensory information. Here, we show that the presentation of vibrotactile signals at the syllabic rate can enhance the comprehension of speech in background noise. We further provide evidence that this multisensory enhancement of speech comprehension reflects the integration of auditory and tactile information in the auditory cortex.


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Cortex/physiology, Comprehension/physiology, Speech/physiology, Speech Perception/physiology
2.
J Neurosci ; 43(44): 7429-7440, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37793908

ABSTRACT

Selective attention to one of several competing speakers is required for comprehending a target speaker among other voices and for communicating with them successfully. It has moreover been found to involve the neural tracking of low-frequency speech rhythms in the auditory cortex. Effects of selective attention have also been found in subcortical neural activity, in particular in the frequency-following response related to the fundamental frequency of speech (speech-FFR). Recent investigations have, however, shown that the speech-FFR contains cortical contributions as well. It remains unclear whether these are also modulated by selective attention. Here we used magnetoencephalography to assess the attentional modulation of the cortical contributions to the speech-FFR. We presented both male and female participants with two competing speech signals and analyzed the cortical responses during attentional switching between the two speakers. Our findings revealed robust attentional modulation of the cortical contribution to the speech-FFR: the neural responses were higher when a speaker was attended than when they were ignored. We also found that, regardless of attention, a voice with a lower fundamental frequency elicited a larger cortical contribution to the speech-FFR than a voice with a higher fundamental frequency. Our results show that the attentional modulation of the speech-FFR does not occur only subcortically but extends to the auditory cortex as well. SIGNIFICANCE STATEMENT: Understanding speech in noise requires attention to a target speaker. One of the speech features that a listener can use to identify a target voice among others and attend to it is the fundamental frequency, together with its higher harmonics. The fundamental frequency arises from the opening and closing of the vocal folds and is tracked by high-frequency neural activity in the auditory brainstem and in the cortex. Previous investigations showed that the subcortical neural tracking is modulated by selective attention. Here we show that attention affects the cortical tracking of the fundamental frequency as well: it is stronger when a particular voice is attended than when it is ignored.
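As an illustration of the kind of analysis behind such findings — relating a high-frequency neural signal to a voice's fundamental frequency — here is a minimal Python sketch. It is not the study's pipeline: all signals, the sampling rate, the 80-300 Hz f0 band, and the 4 ms latency are invented stand-ins for real MEG data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 1000.0                        # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
f0 = 110.0                         # fundamental frequency of the toy "voice"
rng = np.random.default_rng(0)

# Toy stimulus: a periodic signal with an f0 component and one harmonic.
stimulus = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
# Toy neural signal: a delayed, noisy copy of the f0 component (the "FFR").
# np.roll wraps around at the edges, which is acceptable for this toy example.
delay = int(0.004 * fs)            # 4 ms latency, purely illustrative
neural = np.roll(np.sin(2 * np.pi * f0 * t), delay) + rng.normal(0, 1, t.size)

# Band-pass both around the f0 range and correlate at a range of lags.
stim_f0 = bandpass(stimulus, 80, 300, fs)
resp_f0 = bandpass(neural, 80, 300, fs)
lags = np.arange(0, int(0.02 * fs))
r = [np.corrcoef(stim_f0[: -lag or None], resp_f0[lag:])[0, 1] for lag in lags]
# Note: with such narrowband signals the cross-correlation is periodic,
# so the estimated lag is ambiguous up to multiples of 1/f0.
best_lag_ms = lags[int(np.argmax(r))] / fs * 1000
```

The correlation peaks near the imposed latency, which is the basic logic behind latency estimates for the speech-FFR.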


Subjects
Auditory Cortex, Speech Perception, Humans, Male, Female, Speech, Speech Perception/physiology, Auditory Cortex/physiology, Magnetoencephalography, Evoked Potentials, Auditory, Brain Stem/physiology, Acoustic Stimulation, Electroencephalography/methods
3.
J Cogn Neurosci ; 36(3): 475-491, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38165737

ABSTRACT

Most parts of speech are voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two methods differ in the aspects of neural activity that they capture: EEG is sensitive to radial, tangential, and deep sources, whereas MEG is largely restricted to tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution, at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activity at delays of 20-58 msec as well as potential subcortical activity. Our results show that the early subcortical component of the FFR to continuous speech can be measured from MEG in populations of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components with respect to further aspects of speech processing.


Subjects
Electroencephalography, Speech Perception, Humans, Electroencephalography/methods, Speech, Magnetoencephalography/methods, Cerebral Cortex/physiology, Speech Perception/physiology
4.
J Cogn Neurosci ; 35(11): 1760-1772, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37677062

ABSTRACT

Syllables are an essential building block of speech. We recently showed that tactile stimuli linked to the perceptual centers of syllables in continuous speech can improve speech comprehension. The rate of syllables lies in the theta frequency range, between 4 and 8 Hz, and the behavioral effect appears linked to multisensory integration in this frequency band. Because this neural activity may be oscillatory, we hypothesized that a behavioral effect may occur not only while but also after the activity has been evoked or entrained through vibrotactile pulses. Here, we show that audiotactile integration in the perception of single syllables, on both the neural and the behavioral level, is consistent with this hypothesis. We first stimulated participants with a series of vibrotactile pulses and then presented them with a syllable in background noise. We show that, at a delay of 200 msec after the last vibrotactile pulse, audiotactile integration still occurred in the theta band and syllable discrimination was enhanced. Moreover, the dependence of both the neural multisensory integration and the behavioral discrimination on the delay of the audio signal with respect to the last tactile pulse was consistent with a damped oscillation. In addition, the multisensory gain was correlated with the syllable discrimination score. Our results therefore demonstrate the role of the theta band in audiotactile integration and suggest that these effects involve oscillatory activity that persists after the tactile stimulation.
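The damped-oscillation dependence described above can be illustrated with a small fitting sketch. The data below are synthetic and every parameter value (amplitude, decay time, 6 Hz frequency) is hypothetical; the point is only the model form, a decaying cosine fitted by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, amp, tau, freq, phase, offset):
    """Damped cosine: amp * exp(-t/tau) * cos(2*pi*freq*t + phase) + offset."""
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * freq * t + phase) + offset

# Hypothetical discrimination scores at several audio-to-tactile delays (s).
delays = np.linspace(0.0, 0.4, 30)
rng = np.random.default_rng(1)
truth = damped_osc(delays, 0.1, 0.15, 6.0, 0.0, 0.7)   # 6 Hz lies in the theta band
scores = truth + rng.normal(0, 0.01, delays.size)

# Fit; sensible initial guesses matter for oscillatory models.
p0 = [0.1, 0.2, 5.0, 0.0, 0.7]
params, _ = curve_fit(damped_osc, delays, scores, p0=p0)
amp, tau, freq, phase, offset = params
```

A fitted frequency in the theta range, with a finite decay time, is the signature that the behavioral effect oscillates and dies out after the last tactile pulse.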


Subjects
Speech Perception, Humans, Acoustic Stimulation/methods, Speech Perception/physiology, Speech/physiology, Touch/physiology, Noise
5.
J Cogn Neurosci ; 35(8): 1301-1311, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37379482

ABSTRACT

The envelope of a speech signal is tracked by neural activity in the cerebral cortex. This cortical tracking occurs mainly in two frequency bands, theta (4-8 Hz) and delta (1-4 Hz). Tracking in the faster theta band has mostly been associated with lower-level acoustic processing, such as the parsing of syllables, whereas the slower tracking in the delta band relates to higher-level linguistic information of words and word sequences. However, much about the specific association between cortical tracking and acoustic as well as linguistic processing remains to be uncovered. Here, we recorded EEG responses to both meaningful sentences and random word lists at different signal-to-noise ratios (SNRs), which lead to different levels of speech comprehension and listening effort. We then related the neural signals to the acoustic stimuli by computing the phase-locking value (PLV) between the EEG recordings and the speech envelope. We found that the PLV in the delta band increases with increasing SNR for sentences but not for random word lists, showing that the PLV in this frequency band reflects linguistic information. When attempting to disentangle the effects of SNR, speech comprehension, and listening effort, we observed a trend for the PLV in the delta band to reflect listening effort rather than the other two variables, although the effect was not statistically significant. In summary, our study shows that the PLV in the delta band reflects linguistic information and might be related to listening effort.
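A minimal sketch of the phase-locking value computation described above, using synthetic signals in place of real EEG and speech: both signals are band-pass filtered, instantaneous phases are taken from the analytic (Hilbert) signal, and the PLV is the magnitude of the mean phase-difference vector. All signal parameters here are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, lo, hi, fs, order=4):
    """PLV between two signals within one frequency band (0 = none, 1 = perfect)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)

# Toy "speech envelope" with a 2 Hz (delta-band) rhythm, plus a toy EEG
# channel that partially tracks it, and one that does not.
envelope = 1 + np.sin(2 * np.pi * 2.0 * t)
eeg_tracking = 0.5 * np.sin(2 * np.pi * 2.0 * t - 0.6) + rng.normal(0, 1, t.size)
eeg_random = rng.normal(0, 1, t.size)

plv_tracking = phase_locking_value(eeg_tracking, envelope, 1, 4, fs)   # high
plv_random = phase_locking_value(eeg_random, envelope, 1, 4, fs)       # near chance
```

A constant phase lag (here -0.6 rad) does not reduce the PLV — the measure is sensitive to the consistency of the phase relation, not to its value.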


Subjects
Auditory Cortex, Speech Perception, Humans, Speech/physiology, Electroencephalography, Speech Perception/physiology, Auditory Cortex/physiology, Linguistics, Acoustic Stimulation
6.
J Acoust Soc Am ; 153(5): 3130, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37249407

ABSTRACT

Seeing a speaker's face can help substantially with understanding their speech, particularly in challenging listening conditions. Research into the neurobiological mechanisms behind audiovisual integration has recently begun to employ continuous natural speech. However, these efforts are impeded by a lack of high-quality audiovisual recordings of a speaker narrating a longer text. Here, we seek to close this gap by developing AVbook, an audiovisual speech corpus designed for cognitive neuroscience studies and audiovisual speech recognition. The corpus consists of 3.6 h of audiovisual recordings of two speakers, one male and one female, each reading 59 passages from a narrative English text. The recordings were acquired at a high frame rate of 119.88 frames/s. The corpus includes phone-level alignment files and a set of multiple-choice questions to test attention to the different passages. We verified the efficacy of these questions in a pilot study. A short written summary is also provided for each recording. To enable audiovisual synchronization when presenting the stimuli, four videos of an electronic clapperboard were recorded with the corpus. The corpus is publicly available to support research into the neurobiology of audiovisual speech processing as well as the development of computer algorithms for audiovisual speech recognition.


Subjects
Speech Perception, Male, Humans, Female, Speech, Pilot Projects, Visual Perception, Auditory Perception
7.
J Neurosci ; 41(23): 5093-5101, 2021 06 09.
Article in English | MEDLINE | ID: mdl-33926996

ABSTRACT

Understanding speech in background noise is a difficult task. The tracking of speech rhythms, such as the rate of syllables and words, by cortical activity has emerged as a key neural mechanism for speech-in-noise comprehension. In particular, recent investigations have used transcranial alternating current stimulation (tACS) with the envelope of a speech signal to influence the cortical speech tracking, demonstrating that this type of stimulation modulates comprehension and thereby providing evidence of a functional role of the cortical tracking in speech processing. Cortical activity has been found to track the rhythms of a background speaker as well, but the functional significance of this neural response remains unclear. Here we use a speech-comprehension task with a target speaker in the presence of a distractor voice to show that tACS with the speech envelope of the target voice, as well as tACS with the envelope of the distractor speaker, modulates the comprehension of the target speech. Because the envelope of the distractor speech does not carry information about the target speech stream, the modulation of speech comprehension through tACS with this envelope provides evidence that the cortical tracking of the background speaker affects the comprehension of the foreground speech signal. The phase dependency of the resulting modulation of speech comprehension is, however, opposite to that obtained from tACS with the envelope of the target speech signal. This suggests that the cortical tracking of the ignored speech stream and that of the attended speech stream may compete for neural resources. SIGNIFICANCE STATEMENT: Loud environments such as busy pubs or restaurants can make conversation difficult. However, they also allow us to eavesdrop on other conversations that occur in the background. In particular, we often notice when somebody else mentions our name, even if we have not been listening to that person. The neural mechanisms by which background speech is processed, however, remain poorly understood. Here we use transcranial alternating current stimulation, a technique through which neural activity in the cerebral cortex can be influenced, to show that cortical responses to rhythms in the distractor speech modulate the comprehension of the target speaker. Our results provide evidence that the cortical tracking of background speech rhythms plays a functional role in speech processing.


Subjects
Cerebral Cortex/physiology, Comprehension/physiology, Speech Perception/physiology, Adolescent, Adult, Female, Humans, Male, Noise, Transcranial Direct Current Stimulation, Young Adult
8.
J Cogn Neurosci ; 34(3): 411-424, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35015867

ABSTRACT

Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by one single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
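Linear encoding (forward) models of the kind mentioned above are commonly fitted as temporal response functions (TRFs) via ridge regression; the sketch below illustrates the idea on synthetic data. The kernel shape, the ~80 ms latency, and the regularization value are illustrative assumptions, not the study's settings.

```python
import numpy as np

def fit_trf(stimulus, response, n_lags, reg=1.0):
    """Ridge-regularised temporal response function (forward/encoding model):
    response(t) ~ sum over lags of w[lag] * stimulus(t - lag)."""
    X = np.stack([np.roll(stimulus, lag) for lag in range(n_lags)], axis=1)
    X[:n_lags, :] = 0.0                       # discard wrapped-around samples
    w = np.linalg.solve(X.T @ X + reg * np.eye(n_lags), X.T @ response)
    return w, X @ w

rng = np.random.default_rng(0)
fs = 100                                      # sampling rate (Hz), assumed
stimulus = rng.normal(0, 1, 30 * fs)          # toy stimulus feature

# Toy neural response: stimulus convolved with a short kernel peaking at
# ~80 ms, plus noise (kernel and latency are invented for illustration).
lags = np.arange(20)
kernel = np.exp(-0.5 * ((lags - 8) / 2.0) ** 2)
clean = np.convolve(stimulus, kernel)[: stimulus.size]
response = clean + rng.normal(0, 1, stimulus.size)

w, predicted = fit_trf(stimulus, response, n_lags=20)
accuracy = np.corrcoef(predicted, response)[0, 1]   # encoding accuracy
peak_lag_ms = int(np.argmax(w)) * 1000 // fs        # recovered response latency
```

The fitted weights recover the simulated kernel, and the lag of the weight peak gives the response latency — the same logic by which TRF analyses localize neural responses in time.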


Subjects
Music, Speech Perception, Acoustic Stimulation/methods, Auditory Perception/physiology, Electroencephalography/methods, Humans, Speech, Speech Perception/physiology
9.
Neuroimage ; 224: 117427, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33038540

ABSTRACT

Transcranial alternating current stimulation (tACS) can non-invasively modulate neuronal activity in the cerebral cortex, in particular at the frequency of the applied stimulation. Such modulation can matter for speech processing, since the latter involves the tracking of slow amplitude fluctuations in speech by cortical activity. tACS with a current signal that follows the envelope of a speech stimulus has indeed been found to influence the cortical tracking and to modulate the comprehension of speech in background noise. However, how exactly tACS influences the speech-related cortical activity, and how it causes the observed effects on speech comprehension, remains poorly understood. A computational model for cortical speech processing in a biophysically plausible spiking neural network has recently been proposed. Here we extended the model to investigate the effects of different types of stimulation waveforms, similar to those previously applied in experimental studies, on the processing of speech in noise. We assessed in particular how well speech could be decoded from the neural network activity when paired with the exogenous stimulation. We found that, in the absence of current stimulation, the speech-in-noise decoding accuracy was comparable to the comprehension of speech in background noise by human listeners. We further found that current stimulation could alter the speech decoding accuracy by a few percent, comparable to the effects of tACS on speech-in-noise comprehension. Our simulations further allowed us to identify the parameters of the stimulation waveforms that yielded the largest enhancement of speech-in-noise encoding. Our model thereby provides insight into the potential neural mechanisms by which weak alternating current stimulation may influence speech comprehension and makes it possible to screen a large range of stimulation waveforms for their effect on speech processing.


Subjects
Auditory Cortex/physiology, Neural Networks, Computer, Noise, Speech Perception/physiology, Transcranial Direct Current Stimulation, Delta Rhythm, Humans, Signal-To-Noise Ratio, Theta Rhythm
10.
J Neurosci ; 39(29): 5750-5759, 2019 07 17.
Article in English | MEDLINE | ID: mdl-31109963

ABSTRACT

Humans excel at understanding speech even in adverse conditions such as background noise. Speech processing may be aided by cortical activity in the delta and theta frequency bands, which have been found to track the speech envelope. However, the rhythm of non-speech sounds is tracked by cortical activity as well. It therefore remains unclear which aspects of neural speech tracking represent the processing of acoustic features, related to the clarity of speech, and which aspects reflect higher-level linguistic processing related to speech comprehension. Here we disambiguate the roles of cortical tracking for speech clarity and comprehension by recording EEG responses to a native and a foreign language at different levels of background noise, for which clarity and comprehension vary independently. We then use both a decoding and an encoding approach to relate clarity and comprehension to the neural responses. We find that cortical tracking in the theta frequency band is mainly correlated with clarity, whereas the delta band contributes most to speech comprehension. Moreover, we uncover an early neural component in the delta band that informs on comprehension and that may reflect a predictive mechanism for language processing. Our results disentangle the functional contributions of cortical speech tracking in the delta and theta bands to speech processing. They also show that both speech clarity and comprehension can be accurately decoded from relatively short segments of EEG recordings, which may have applications in future mind-controlled auditory prostheses. SIGNIFICANCE STATEMENT: Speech is a highly complex signal whose processing requires analysis from lower-level acoustic features to higher-level linguistic information. Recent work has shown that neural activity in the delta and theta frequency bands tracks the rhythm of speech, but the role of this tracking for speech processing remains unclear. Here we disentangle the roles of cortical entrainment in different frequency bands and at different temporal lags for speech clarity, reflecting the acoustics of the signal, and for speech comprehension, related to linguistic processing. We show that cortical speech tracking in the theta frequency band encodes mostly speech clarity, and thus acoustic aspects of the signal, whereas speech tracking in the delta band encodes higher-level speech comprehension.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Delta Rhythm/physiology, Noise, Speech Perception/physiology, Theta Rhythm/physiology, Adult, Electroencephalography/methods, Female, Humans, Male, Speech/physiology, Young Adult
11.
J Cogn Neurosci ; 32(1): 155-166, 2020 01.
Article in English | MEDLINE | ID: mdl-31479349

ABSTRACT

Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. These studies, however, have focused on single sentences and on particular words within them, comparing neural responses to words with low and high predictability as well as with low and high precision. In natural speech comprehension, by contrast, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results by quantifying surprisal and precision from naturalistic speech using a deep neural network and by relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band, as well as in the higher-frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way for further investigation of the neurobiology of natural speech comprehension.
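Word surprisal, as used above, is the negative log probability of a word given its context. The study derived those probabilities from a deep neural network; as a toy stand-in, the sketch below computes surprisal from add-one-smoothed bigram counts over a tiny invented corpus.

```python
import math
from collections import Counter

def bigram_surprisal(corpus, text):
    """Surprisal (-log2 p) of each word in `text` given its preceding word,
    from bigram counts with add-one smoothing. A toy stand-in for the
    deep-network probabilities used in surprisal studies."""
    words = corpus.split()
    vocab = set(words) | set(text.split())
    bigrams = Counter(zip(words, words[1:]))
    unigrams = Counter(words)
    out = []
    toks = text.split()
    for prev, word in zip(toks, toks[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))
        out.append(-math.log2(p))
    return out

corpus = "the cat sat on the mat the cat ate the fish"
s_pred = bigram_surprisal(corpus, "the cat")[0]    # frequent bigram: low surprisal
s_surp = bigram_surprisal(corpus, "the zebra")[0]  # unseen bigram: high surprisal
```

A predictable continuation yields a lower surprisal than a surprising one — the quantity whose cortical tracking the study measures.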


Subjects
Brain Waves/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Deep Learning, Electroencephalography, Psycholinguistics, Speech Perception/physiology, Adult, Female, Functional Neuroimaging, Humans, Male, Young Adult
12.
Pflugers Arch ; 472(5): 625-635, 2020 05.
Article in English | MEDLINE | ID: mdl-32318797

ABSTRACT

In mammals, audition is triggered by travelling waves that are evoked by acoustic stimuli in the cochlear partition, a structure containing sensory hair cells and a basilar membrane. When the cochlea is stimulated by a pure tone of low frequency, a static offset occurs in the vibration in the apical turn. In the high-frequency region at the cochlear base, multi-tone stimuli induce a quadratic distortion product in the vibrations that suggests the presence of an offset. However, vibrations below 100 Hz, including a static offset, have not been directly measured there. We therefore constructed an interferometer for detecting motion at low frequencies, including 0 Hz. We applied the interferometer to record vibrations from the cochlear base of guinea pigs in response to pure tones. When the animals were exposed to sound at an intensity of 70 dB or higher, we recorded a static offset of more than 1 nm of the sinusoidally vibrating cochlear partition towards the scala vestibuli. The offset's magnitude grew monotonically as the stimuli intensified. When the stimulus frequency was varied, the response peaked around the best frequency, the frequency that maximised the vibration amplitude at threshold sound pressure. These characteristics are consistent with those found in the low-frequency region and are therefore likely common across the cochlea. The offset diminished markedly when the somatic motility of the mechanosensitive outer hair cells, the force-generating machinery that amplifies the sinusoidal vibrations, was pharmacologically blocked. Therefore, the partition offset appears to be linked to the electromotile contraction of the outer hair cells.


Subjects
Hair Cells, Auditory, Outer/physiology, Hearing, Animals, Auditory Threshold, Guinea Pigs, Hair Cells, Vestibular/physiology, Interferometry/instrumentation, Interferometry/methods, Male, Sound, Vibration
13.
Neuroimage ; 210: 116557, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31968233

ABSTRACT

Auditory cortical activity entrains to speech rhythms and has been proposed as a mechanism for online speech processing. In particular, neural activity in the theta frequency band (4-8 Hz) tracks the onset of syllables, which may aid the parsing of a speech stream. Similarly, cortical activity in the delta band (1-4 Hz) entrains to the onset of words in natural speech and has been found to encode both syntactic as well as semantic information. Such neural entrainment to speech rhythms is not merely an epiphenomenon of other neural processes, but plays a functional role in speech processing: modulating the neural entrainment through transcranial alternating current stimulation influences the speech-related neural activity and modulates the comprehension of degraded speech. However, the distinct functional contributions of the delta- and of the theta-band entrainment to the modulation of speech comprehension have not yet been investigated. Here we use transcranial alternating current stimulation with waveforms derived from the speech envelope and filtered in the delta and theta frequency bands to alter cortical entrainment in both bands separately. We find that transcranial alternating current stimulation in the theta band but not in the delta band impacts speech comprehension. Moreover, we find that transcranial alternating current stimulation with the theta-band portion of the speech envelope can improve speech-in-noise comprehension beyond sham stimulation. Our results show a distinct contribution of the theta- but not of the delta-band stimulation to the modulation of speech comprehension. In addition, our findings open up a potential avenue of enhancing the comprehension of speech in noise.


Subjects
Cerebral Cortex/physiology, Comprehension/physiology, Delta Rhythm/physiology, Speech Perception/physiology, Theta Rhythm/physiology, Transcranial Direct Current Stimulation, Adult, Female, Humans, Male, Noise, Young Adult
14.
Neuroimage ; 200: 1-11, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31212098

ABSTRACT

Humans are highly skilled at analysing complex acoustic scenes. The segregation of different acoustic streams and the formation of corresponding neural representations is mostly attributed to the auditory cortex. Decoding of selective attention from neuroimaging has therefore focussed on cortical responses to sound. However, the auditory brainstem response to speech is modulated by selective attention as well, as recently shown through measuring the brainstem's response to running speech. Although the response of the auditory brainstem has a smaller magnitude than that of the auditory cortex, it occurs at much higher frequencies and therefore has a higher information rate. Here we develop statistical models for extracting the brainstem response from multi-channel scalp recordings and for analysing the attentional modulation according to the focus of attention. We demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener from short measurements of 10 s or less in duration. The decoding remains accurate when obtained from only three EEG channels. We further show that out-of-the-box decoding that employs subject-independent models, as well as decoding that is independent of the specific attended speaker, achieves similar accuracy. These results open up new avenues for investigating the neural mechanisms of selective attention in the brainstem and for developing efficient auditory brain-computer interfaces.
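A minimal illustration of correlation-based attention decoding from a short segment: compare how strongly the neural signal correlates with each speaker's stimulus feature and pick the larger. The study's actual statistical models are more elaborate; everything below — the features, weights, and segment length — is synthetic and purely illustrative.

```python
import numpy as np

def decode_attention(neural, feat_a, feat_b):
    """Decode the attentional focus of a segment by comparing the correlation
    of the neural signal with each speaker's stimulus feature."""
    ra = np.corrcoef(neural, feat_a)[0, 1]
    rb = np.corrcoef(neural, feat_b)[0, 1]
    return "A" if ra > rb else "B"

rng = np.random.default_rng(3)
fs = 1000
n = 10 * fs                                   # a 10 s segment, as in the study

feat_a = rng.normal(0, 1, n)                  # toy feature of speaker A
feat_b = rng.normal(0, 1, n)                  # toy feature of speaker B

# Toy neural segment: the attended speaker (A) is weighted more strongly
# than the ignored one (B), reflecting attentional modulation.
neural = 0.3 * feat_a + 0.1 * feat_b + rng.normal(0, 1, n)

decoded = decode_attention(neural, feat_a, feat_b)
```

Because the attended feature contributes more variance to the neural segment, its correlation wins, and the decoder reports speaker A.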


Subjects
Attention/physiology, Cerebral Cortex/physiology, Electroencephalography/methods, Evoked Potentials, Auditory, Brain Stem/physiology, Speech Perception/physiology, Adult, Female, Humans, Male, Young Adult
15.
PLoS Comput Biol ; 14(1): e1005936, 2018 01.
Article in English | MEDLINE | ID: mdl-29351276

ABSTRACT

The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are the sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ's motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of the inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend to auditory signals in noisy surroundings.


Subjects
Cochlea/physiology, Hair Cells, Auditory, Outer/physiology, Hearing/physiology, Organ of Corti/physiology, Animals, Biomechanical Phenomena, Computational Biology, Elasticity, Female, Guinea Pigs, Hair Cells, Auditory, Inner/physiology, Interferometry, Male, Microscopy, Confocal, Models, Biological, Motion, Music, Neurons/physiology, Signal Processing, Computer-Assisted
16.
Proc Natl Acad Sci U S A ; 113(30): E4304-10, 2016 07 26.
Article in English | MEDLINE | ID: mdl-27407145

ABSTRACT

Low-frequency hearing is critically important for speech and music perception, but no mechanical measurements have previously been available from inner ears with intact low-frequency parts. These regions of the cochlea may function in ways different from the extensively studied high-frequency regions, where the sensory outer hair cells produce force that greatly increases the sound-evoked vibrations of the basilar membrane. We used laser interferometry in vitro and optical coherence tomography in vivo to study the low-frequency part of the guinea pig cochlea, and found that sound stimulation caused motion of a minimal portion of the basilar membrane. Outside the region of peak movement, an exponential decline in motion amplitude occurred across the basilar membrane. The moving region had a different dependence on stimulus frequency than the vibrations measured near the mechanosensitive stereocilia. This behavior differs substantially from that found in the extensively studied high-frequency regions of the cochlea.


Subjects
Basilar Membrane/physiology , Outer Auditory Hair Cells/physiology , Hearing/physiology , Organ of Corti/physiology , Acoustic Stimulation , Animals , Guinea Pigs , Interferometry , Motion (Physics) , Organ of Corti/cytology , Sound , Optical Coherence Tomography
17.
Proc Natl Acad Sci U S A ; 109(51): 21076-80, 2012 Dec 18.
Article in English | MEDLINE | ID: mdl-23213236

ABSTRACT

The cochlea's high sensitivity stems from the active process of outer hair cells, which possess two force-generating mechanisms: active hair-bundle motility elicited by Ca(2+) influx and somatic motility mediated by the voltage-sensitive protein prestin. Although interference with prestin has demonstrated a role for somatic motility in the active process, it remains unclear whether hair-bundle motility contributes in vivo. We selectively perturbed the two mechanisms by infusing substances into the endolymph or perilymph of the chinchilla's cochlea and then used scanning laser interferometry to measure vibrations of the basilar membrane. Blocking somatic motility, damaging the tip links of hair bundles, or depolarizing hair cells eliminated amplification. While reducing amplification to a lesser degree, pharmacological perturbation of active hair-bundle motility diminished or eliminated the nonlinear compression underlying the broad dynamic range associated with normal hearing. The results suggest that active hair-bundle motility plays a significant role in the amplification and compressive nonlinearity of the cochlea.


Subjects
Cochlea/physiology , Auditory Hair Cells/cytology , Hearing , Animals , Basilar Membrane/metabolism , Biomechanical Phenomena , Calcium/metabolism , Chinchilla , Cochlea/metabolism , Outer Auditory Hair Cells/metabolism , Hypoxia , Interferometry/methods , Lasers , Male , Cellular Mechanotransduction , Statistical Models
18.
Rep Prog Phys ; 77(7): 076601, 2014 Jul.
Article in English | MEDLINE | ID: mdl-25006839

ABSTRACT

Most sounds of interest consist of complex, time-dependent admixtures of tones of diverse frequencies and variable amplitudes. To detect and process these signals, the ear employs a highly nonlinear, adaptive, real-time spectral analyzer: the cochlea. Sound excites vibration of the eardrum and the three minuscule bones of the middle ear, the last of which acts as a piston to initiate oscillatory pressure changes within the liquid-filled chambers of the cochlea. The basilar membrane, an elastic band spiraling along the cochlea between two of these chambers, responds to these pressures by conducting a largely independent traveling wave for each frequency component of the input. Because the basilar membrane is graded in mass and stiffness along its length, however, each traveling wave grows in magnitude and decreases in wavelength until it peaks at a specific, frequency-dependent position: low frequencies propagate to the cochlear apex, whereas high frequencies culminate at the base. The oscillations of the basilar membrane deflect hair bundles, the mechanically sensitive organelles of the ear's sensory receptors, the hair cells. As mechanically sensitive ion channels open and close, each hair cell responds with an electrical signal that is chemically transmitted to an afferent nerve fiber and thence into the brain. In addition to transducing mechanical inputs, hair cells amplify them by two means. Channel gating endows a hair bundle with negative stiffness, an instability that interacts with the motor protein myosin-1c to produce a mechanical amplifier and oscillator. Acting through the piezoelectric membrane protein prestin, electrical responses also cause outer hair cells to elongate and shorten, thus pumping energy into the basilar membrane's movements. The two forms of motility constitute an active process that amplifies mechanical inputs, sharpens frequency discrimination, and confers a compressive nonlinearity on responsiveness.
These features arise because the active process operates near a Hopf bifurcation, the generic properties of which explain several key features of hearing. Moreover, when the gain of the active process rises sufficiently in ultraquiet circumstances, the system traverses the bifurcation and even a normal ear actually emits sound. The remarkable properties of hearing thus stem from the propagation of traveling waves on a nonlinear and excitable medium.
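The compressive nonlinearity at a Hopf bifurcation can be seen directly in the normal form: at the bifurcation point the steady-state response amplitude grows as the cube root of the forcing, so a 1000-fold increase in stimulus produces only a 10-fold increase in response. The sketch below integrates a simplified rotating-frame normal form with all coefficients set to 1, an illustrative simplification rather than a model of any particular cochlea.

```python
def hopf_response(forcing, steps=500_000, dt=1e-3):
    """Steady-state amplitude of a forced Hopf oscillator exactly at
    the bifurcation (control parameter mu = 0), in the frame rotating
    at the characteristic frequency: da/dt = forcing - a**3.
    Coefficients are set to 1 for illustration; integration is by the
    forward Euler method from a resting initial condition."""
    a = 0.0
    for _ in range(steps):
        a += dt * (forcing - a**3)
    return a

weak = hopf_response(1e-3)   # very soft stimulus
strong = hopf_response(1.0)  # stimulus 1000 times larger
# Cube-root compression: a 1000-fold change in forcing gives
# roughly a 10-fold change in response amplitude.
compression = strong / weak
```

This cube-root scaling is the generic signature mentioned in the abstract: it compresses an enormous range of stimulus intensities into a modest range of mechanical responses.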


Subjects
Basilar Membrane/physiology , Auditory Hair Cells/physiology , Hearing/physiology , Labyrinthine Fluids/physiology , Cellular Mechanotransduction/physiology , Biological Models , Animals , Inner Ear/physiology , Humans , Rheology/methods , Mechanical Stress , Viscosity
19.
eNeuro ; 11(9)2024 Sep.
Article in English | MEDLINE | ID: mdl-39160069

ABSTRACT

Musicians can be better than non-musicians at understanding speech in adverse conditions such as background noise. However, the neural mechanisms behind this enhanced behavioral performance remain largely unclear. Studies have found that the subcortical frequency-following response to the fundamental frequency of speech and its higher harmonics (speech-FFR) may be involved, since it is larger in people with musical training than in those without. Recent research has shown that the speech-FFR contains a cortical contribution in addition to the subcortical sources. Both the subcortical and the cortical contributions are modulated by selective attention to one of two competing speakers. However, it is unknown whether the strength of the cortical contribution to the speech-FFR, or its attentional modulation, is influenced by musical training. Here we investigate these issues through magnetoencephalographic (MEG) recordings of 52 subjects (18 musicians, 25 non-musicians, and 9 neutral participants) listening to two competing male speakers while selectively attending to one of them. The speech-in-noise comprehension abilities of the participants were not assessed. We find that musicians and non-musicians display comparable cortical speech-FFRs and exhibit similar subject-to-subject variability in the response. Furthermore, we do not observe a difference between musicians and non-musicians in the modulation of the neural response by selective attention. Moreover, when assessing whether the cortical speech-FFRs are influenced by particular aspects of musical training, no significant effects emerged. Taken together, we found no effect of musical training on the cortical speech-FFR.


Subjects
Acoustic Stimulation , Attention , Magnetoencephalography , Music , Speech Perception , Humans , Male , Attention/physiology , Speech Perception/physiology , Female , Young Adult , Adult , Cerebral Cortex/physiology
20.
Audiol Res ; 14(2): 264-279, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38525685

ABSTRACT

BACKGROUND: The Chear open-set performance test (COPT), which uses a carrier phrase followed by a monosyllabic test word, is intended for clinical assessment of speech recognition, evaluation of hearing-device performance, and the fine-tuning of hearing devices for speakers of British English. This paper assesses practice effects, test-retest reliability, and the variability across lists of the COPT. METHOD: In experiment 1, 16 normal-hearing participants were tested using an initial version of the COPT, at three speech-to-noise ratios (SNRs). Experiment 2 used revised COPT lists, with items swapped between lists to reduce differences in difficulty across lists. In experiment 3, test-retest repeatability was assessed for stimuli presented in quiet, using 15 participants with sensorineural hearing loss. RESULTS: After administration of a single practice list, no practice effects were evident. The critical difference between scores for two lists was about 2 words (out of 15) or 5 phonemes (out of 50). The mean estimated SNR required for 74% words correct was -0.56 dB, with a standard deviation across lists of 0.16 dB. For the participants with hearing loss tested in quiet, the critical difference between scores for two lists was about 3 words (out of 15) or 6 phonemes (out of 50).
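Critical differences between two list scores are conventionally related to binomial score variability (in the spirit of Thornton and Raffin's classic analysis). The sketch below uses that simple binomial approximation with illustrative inputs; it is not the COPT authors' method, and it will not reproduce the paper's empirically derived values, which reflect the actual between-list variability of the test material.

```python
import math

def critical_difference(n_items, p_correct, z=1.96):
    """95% critical difference between two list scores, assuming each
    score is binomially distributed over n_items with probability
    p_correct, and that the two lists are scored independently.
    Returned in raw items (words or phonemes)."""
    sd_one_list = math.sqrt(n_items * p_correct * (1.0 - p_correct))
    # Difference of two independent scores has variance 2 * sd^2.
    return z * math.sqrt(2.0) * sd_one_list

# Illustrative inputs: 15-word lists scored at ~74% words correct,
# and 50-phoneme lists at an assumed higher per-phoneme score.
cd_words = critical_difference(15, 0.74)
cd_phonemes = critical_difference(50, 0.85)
```

That the COPT's measured critical differences are smaller than this independence-based binomial bound is consistent with the list equalization described in experiment 2, which reduced differences in difficulty across lists.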
