Results 1 - 20 of 446
1.
Trends Hear ; 27: 23312165231205719, 2023.
Article in English | MEDLINE | ID: mdl-37807857

ABSTRACT

While each place on the cochlea is most sensitive to a specific frequency, it will generally respond to a sufficiently high-level stimulus over a wide range of frequencies. This spread of excitation can introduce errors in clinical threshold estimation during a diagnostic auditory brainstem response (ABR) exam. Off-frequency cochlear excitation can be mitigated through the addition of masking noise to the test stimuli, but introducing a masker increases the already long test times of the typical ABR exam. Our lab has recently developed the parallel ABR (pABR) paradigm to speed up test times by utilizing randomized stimulus timing to estimate the thresholds for multiple frequencies simultaneously. There is reason to believe parallel presentation of multiple frequencies provides masking effects and improves place specificity while decreasing test times. Here, we use two computational models of the auditory periphery to characterize the predicted effect of parallel presentation on place specificity in the auditory nerve. We additionally examine the effect of stimulus rate and level. Both models show the pABR is at least as place specific as standard methods, with an improvement in place specificity for parallel presentation (vs. serial) at high levels, especially at high stimulus rates. When simulating hearing impairment in one of the models, place specificity was also improved near threshold. Rather than a tradeoff, this improved place specificity would represent a secondary benefit to the pABR's faster test times.
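The pABR's parallel estimation can be illustrated with a toy deconvolution: when each frequency band's stimuli have independent randomized timing, epoch-averaging the EEG against one band's pulse times recovers that band's response while the other bands average out. A minimal sketch with synthetic values (not the authors' implementation; the kernel and rates are hypothetical):

```python
import numpy as np

def pabr_response(eeg, pulse_times, fs, window_ms=30.0):
    """Estimate the evoked response for one stimulus band by averaging
    EEG epochs locked to that band's randomized pulse times."""
    n_win = int(fs * window_ms / 1000)
    resp = np.zeros(n_win)
    count = 0
    for t in pulse_times:
        i = int(round(t * fs))
        if i + n_win <= len(eeg):
            resp += eeg[i:i + n_win]
            count += 1
    return resp / max(count, 1)

# toy check: embed a known 5-ms kernel at Poisson-like random times
rng = np.random.default_rng(0)
fs = 10_000
kernel = np.hanning(50)
times = np.cumsum(rng.exponential(0.025, size=400))  # ~40 stimuli/s
eeg = np.zeros(int((times[-1] + 0.1) * fs))
for t in times:
    i = int(round(t * fs))
    eeg[i:i + 50] += kernel
est = pabr_response(eeg, times, fs, window_ms=5.0)
```

Because the timing is random, overlapping responses from neighboring stimuli contribute only a flat bias, so `est` converges to the embedded kernel.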


Subjects
Evoked Potentials, Auditory, Brain Stem; Perceptual Masking; Humans; Auditory Threshold/physiology; Perceptual Masking/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Noise; Brain Stem/physiology; Acoustic Stimulation
2.
J Acoust Soc Am ; 153(4): 2482, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37092950

ABSTRACT

Physiological and psychoacoustic studies of the medial olivocochlear reflex (MOCR) in humans have often relied on long duration elicitors (>100 ms). This is largely due to previous research using otoacoustic emissions (OAEs) that found multiple MOCR time constants, including time constants in the 100s of milliseconds, when elicited by broadband noise. However, the effect of the duration of a broadband noise elicitor on similar psychoacoustic tasks is currently unknown. The current study measured the effects of ipsilateral broadband noise elicitor duration on psychoacoustic gain reduction estimated from a forward-masking paradigm. Analysis showed that both masker type and elicitor duration were significant main effects, but no interaction was found. Gain reduction time constants were ∼46 ms for the masker present condition and ∼78 ms for the masker absent condition (ranging from ∼29 to 172 ms), both similar to the fast time constants reported in the OAE literature (70-100 ms). Maximum gain reduction was seen for elicitor durations of ∼200 ms. This is longer than the 50-ms duration which was found to produce maximum gain reduction with a tonal on-frequency elicitor. Future studies of gain reduction may use 150-200 ms broadband elicitors to maximally or near-maximally stimulate the MOCR.


Subjects
Cochlea; Otoacoustic Emissions, Spontaneous; Humans; Psychoacoustics; Cochlea/physiology; Otoacoustic Emissions, Spontaneous/physiology; Reflex/physiology; Time Factors; Acoustic Stimulation; Perceptual Masking/physiology
3.
Sci Rep ; 12(1): 1452, 2022 01 27.
Article in English | MEDLINE | ID: mdl-35087148

ABSTRACT

Tinnitus therapies are often combined with various types of sound or noise. When masking external sounds, the spatial location of the masker is important; however, the effects of the masker's spatial location on tinnitus are less well understood. We aimed to test whether the location of a masking sound would affect the perception level of simulated tinnitus. A 4-kHz simulated tinnitus was induced in the right ear of healthy volunteers through an open-type earphone. White noise was presented to the right ear using a single-sided headphone or a speaker positioned on the right side at a distance of 1.8 m to mask the simulated tinnitus. In other sessions, monaurally recorded noise localized within the head (inside-head noise) or binaurally recorded noise localized outside the head (outside-head noise) was separately presented from a dual-sided headphone. The noise presented from a distant speaker and the outside-head noise masked the simulated tinnitus in 71.1% and 77.1% of measurements at a lower intensity compared to the noise beside the ear and the inside-head noise, respectively. In conclusion, spatial information regarding the masking noise may play a role in reducing the perception level of simulated tinnitus. Binaurally recorded sounds may be beneficial for acoustic therapy of tinnitus.


Assuntos
Estimulação Acústica/métodos , Ruído , Mascaramento Perceptivo/fisiologia , Localização de Som/fisiologia , Zumbido/terapia , Adulto , Feminino , Voluntários Saudáveis , Humanos , Masculino , Zumbido/diagnóstico , Zumbido/fisiopatologia , Adulto Jovem
4.
Sci Rep ; 11(1): 15117, 2021 07 23.
Article in English | MEDLINE | ID: mdl-34302032

ABSTRACT

Our acoustic environment contains a plethora of complex sounds that are often in motion. To gauge approaching danger and communicate effectively, listeners need to localize and identify sounds, which includes determining sound motion. This study addresses which acoustic cues impact listeners' ability to determine sound motion. Signal envelope (ENV) cues are implicated in both sound motion tracking and stimulus intelligibility, suggesting that these processes could be competing for sound processing resources. We created auditory chimaeras from speech and noise stimuli and varied the number of frequency bands, effectively manipulating speech intelligibility. Normal-hearing adults were presented with stationary or moving chimaeras and reported perceived sound motion and content. Results show that sensitivity to sound motion is not affected by speech intelligibility, but shows a clear difference for original noise and speech stimuli. Further, acoustic chimaeras with speech-like ENVs that had intelligible content induced a strong bias in listeners to report sounds as stationary. Increasing stimulus intelligibility systematically increased that bias, and removing intelligible content reduced it, suggesting that sound content may be prioritized over sound motion. These findings suggest that sound motion processing in the auditory system can be biased by acoustic parameters related to speech intelligibility.
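Auditory chimaeras of the kind used here are classically built by splitting two sounds into matched frequency bands and recombining the envelope (ENV) of one with the temporal fine structure of the other (after Smith, Delgutte, & Oxenham, 2002). A rough sketch of that construction; the band count, filter order, and noise stand-ins below are illustrative, not the authors' exact processing:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def chimaera(sig_a, sig_b, fs, n_bands=8, lo=80.0, hi=8000.0):
    """Envelope of sig_a carried on the fine structure of sig_b, band by band."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    out = np.zeros_like(sig_a, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        a = sosfiltfilt(sos, sig_a)
        b = sosfiltfilt(sos, sig_b)
        env_a = np.abs(hilbert(a))             # ENV from signal A
        tfs_b = np.cos(np.angle(hilbert(b)))   # fine structure from B
        out += env_a * tfs_b
    return out

# demo with two noise tokens standing in for speech and noise
rng = np.random.default_rng(0)
fs = 32_000
a = rng.standard_normal(fs)
b = rng.standard_normal(fs)
out = chimaera(a, b, fs)
```

Increasing `n_bands` makes the ENV representation finer, which is how the study manipulated intelligibility.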


Subjects
Auditory Perception/physiology; Motion Perception/physiology; Speech Intelligibility/physiology; Acoustic Stimulation/methods; Adult; Auditory Threshold/physiology; Cues; Female; Hearing/physiology; Hearing Tests/methods; Humans; Male; Motion; Noise; Perceptual Masking/physiology; Sound; Speech Acoustics; Speech Perception/physiology; Young Adult
5.
Brain Res ; 1759: 147385, 2021 05 15.
Article in English | MEDLINE | ID: mdl-33631210

ABSTRACT

Speech perception requires the grouping of acoustic information into meaningful phonetic units via the process of categorical perception (CP). Environmental masking influences speech perception and CP. However, it remains unclear at which stage of processing (encoding, decision, or both) masking affects listeners' categorization of speech signals. The purpose of this study was to determine whether linguistic interference influences the early acoustic-phonetic conversion process inherent to CP. To this end, we measured source-level event-related brain potentials (ERPs) from auditory cortex (AC) and inferior frontal gyrus (IFG) as listeners rapidly categorized speech sounds along a /da/ to /ga/ continuum presented in three listening conditions: quiet, and in the presence of forward (informational masker) and time-reversed (energetic masker) 2-talker babble noise. Maskers were matched in overall SNR and spectral content and thus varied only in their degree of linguistic interference (i.e., informational masking). We hypothesized a differential effect of informational versus energetic masking on behavioral and neural categorization responses, where we predicted increased activation of frontal regions when disambiguating speech from noise, especially during lexical-informational maskers. We found (1) informational masking weakens behavioral speech phoneme identification above and beyond energetic masking; (2) low-level AC activity not only codes speech categories but is susceptible to higher-order lexical interference; (3) identifying speech amidst noise recruits a cross-hemispheric circuit (left AC → right IFG) whose engagement varies according to task difficulty. These findings provide corroborating evidence for top-down influences on the early acoustic-phonetic analysis of speech through a coordinated interplay between frontotemporal brain areas.


Assuntos
Estimulação Acústica/métodos , Córtex Auditivo/fisiologia , Mascaramento Perceptivo/fisiologia , Tempo de Reação/fisiologia , Percepção da Fala/fisiologia , Adulto , Córtex Auditivo/diagnóstico por imagem , Percepção Auditiva/fisiologia , Eletroencefalografia/métodos , Feminino , Humanos , Imageamento por Ressonância Magnética/métodos , Masculino , Adulto Jovem
6.
Neuroimage ; 223: 117319, 2020 12.
Article in English | MEDLINE | ID: mdl-32882376

ABSTRACT

There is increasing evidence that the hippocampus is involved in language production and verbal communication, although little is known about its possible role. According to one view, the hippocampus contributes semantic memory to spoken language. Alternatively, the hippocampus is involved in processing the (mis)match between the expected sensory consequences of speaking and the perceived speech feedback. In the current study, we re-analysed functional magnetic resonance imaging (fMRI) data from two overt picture-naming studies to test whether the hippocampus is involved in speech production and, if so, whether the results can distinguish between a "pure memory" versus a "prediction" account of hippocampal involvement. In both studies, participants overtly named pictures during scanning while hearing their own speech feedback unimpededly or impaired by a superimposed noise mask. Results showed decreased hippocampal activity when speech feedback was impaired, compared to when feedback was unimpeded. Further, we found increased functional coupling between auditory cortex and hippocampus during unimpeded speech feedback, compared to impaired feedback. Finally, we found significant functional coupling between a hippocampal/supplementary motor area (SMA) interaction term and auditory cortex, anterior cingulate cortex and cerebellum during overt picture naming, but not during listening to one's own pre-recorded voice. These findings indicate that the hippocampus plays a role in speech production that is in accordance with a "prediction" view of hippocampal functioning.


Subjects
Hippocampus/physiology; Speech Perception/physiology; Acoustic Stimulation; Brain Mapping; Humans; Magnetic Resonance Imaging; Perceptual Masking/physiology; Speech
7.
J Neurophysiol ; 123(2): 695-706, 2020 02 01.
Article in English | MEDLINE | ID: mdl-31891521

ABSTRACT

The central mechanisms underlying binaural unmasking for spectrally overlapping concurrent sounds, which are unresolved in the peripheral auditory system, remain largely unknown. In this study, frequency-following responses (FFRs) to two binaurally presented independent narrowband noises (NBNs) with overlapping spectra were recorded simultaneously in the inferior colliculus (IC) and auditory cortex (AC) in anesthetized rats. The results showed that for both IC FFRs and AC FFRs, introducing an interaural time difference (ITD) disparity between the two concurrent NBNs enhanced the representation fidelity, reflected by the increased coherence between the responses evoked by double-NBN stimulation and the responses evoked by single NBNs. The ITD disparity effect varied across frequency bands, being more marked for higher frequency bands in the IC and lower frequency bands in the AC. Moreover, the coherence between IC responses and AC responses was also enhanced by the ITD disparity, and the enhancement was most prominent for low-frequency bands and the IC and the AC on the same side. These results suggest a critical role of the ITD cue in the neural segregation of spectrotemporally overlapping sounds.

NEW & NOTEWORTHY: When two spectrally overlapping narrowband noises are presented at the same time with the same sound-pressure level, they mask each other. Introducing a disparity in interaural time difference between these two narrowband noises improves the accuracy of the neural representation of individual sounds in both the inferior colliculus and the auditory cortex. The lower-frequency signal transformation from the inferior colliculus to the auditory cortex on the same side is also enhanced, showing the effect of binaural unmasking.
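Representation fidelity measured as coherence between responses can be computed with a standard magnitude-squared coherence estimate. A toy example on simulated signals (not the study's data), where two noisy recordings share a 60-Hz component:

```python
import numpy as np
from scipy.signal import coherence

fs = 2000
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 60 * t)          # common "FFR" component
ic = shared + 0.5 * rng.standard_normal(t.size)   # noisy recording 1
ac = shared + 0.5 * rng.standard_normal(t.size)   # noisy recording 2
f, coh = coherence(ic, ac, fs=fs, nperseg=1024)
band = coh[(f > 50) & (f < 70)].max()        # near 1 where activity is shared
```

Coherence is high only in bands where the two responses carry the same signal, which is how an ITD-disparity enhancement would show up.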


Assuntos
Estimulação Acústica , Córtex Auditivo/fisiologia , Fenômenos Eletrofisiológicos/fisiologia , Potenciais Evocados Auditivos/fisiologia , Colículos Inferiores/fisiologia , Mascaramento Perceptivo/fisiologia , Animais , Comportamento Animal/fisiologia , Eletrocorticografia , Masculino , Ratos , Ratos Sprague-Dawley , Fatores de Tempo
8.
PLoS One ; 14(11): e0223075, 2019.
Article in English | MEDLINE | ID: mdl-31689327

ABSTRACT

Previous research has consistently shown that for sounds varying in intensity over time, the beginning of the sound is of higher importance for the perception of loudness than later parts (primacy effect). However, in all previous studies, the target sounds were presented in quiet, and at a fixed average sound level. In the present study, temporal loudness weights for a time-varying narrowband noise were investigated in the presence of a continuous bandpass-filtered background noise and the average sound levels of the target stimuli were varied across a range of 60 dB. Pronounced primacy effects were observed in all conditions and there were no significant differences between the temporal weights observed in the conditions in quiet and in background noise. Within the conditions in background noise, there was a significant effect of the sound level on the pattern of weights, which was mainly caused by a slight trend for increased weights at the end of the sounds ("recency effect") in the condition with lower average level. No such effect was observed for the in-quiet conditions. Taken together, the observed primacy effect is largely independent of masking as well as of sound level. Compatible with this conclusion, the observed primacy effects in quiet and in background noise can be well described by an exponential decay function using parameters based on previous studies. Simulations using a model for the partial loudness of time-varying sounds in background noise showed that the model does not predict the observed temporal loudness weights.
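The exponential decay description of the primacy effect can be sketched by fitting w(t) = a*exp(-t/tau) + c to per-segment temporal weights. The values below are synthetic, chosen only to illustrate the fit, not the study's parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    """Exponential decay of temporal weights toward a floor c."""
    return a * np.exp(-t / tau) + c

# hypothetical weights for a 1-s sound split into ten 100-ms segments
t = np.arange(10) * 0.1
rng = np.random.default_rng(2)
w = decay(t, a=2.0, tau=0.25, c=1.0) + 0.05 * rng.standard_normal(t.size)
(a, tau, c), _ = curve_fit(decay, t, w, p0=(1.0, 0.2, 1.0))
```

A recency effect, as reported at lower levels, would appear as an additional upturn in the last segments that this pure-decay model cannot capture.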


Assuntos
Percepção Sonora/fisiologia , Estimulação Acústica , Adolescente , Adulto , Limiar Auditivo/fisiologia , Feminino , Humanos , Masculino , Modelos Psicológicos , Ruído , Mascaramento Perceptivo/fisiologia , Psicoacústica , Som , Fatores de Tempo , Adulto Jovem
9.
J Speech Lang Hear Res ; 62(10): 3741-3751, 2019 10 25.
Article in English | MEDLINE | ID: mdl-31619115

ABSTRACT

Purpose Working memory capacity and language ability modulate speech reception; however, the respective roles of peripheral and cognitive processing are unclear. The contribution of individual differences in these abilities to utilization of spatial cues when separating speech from informational and energetic masking backgrounds in children has not yet been determined. Therefore, this study explored whether speech reception in children is modulated by environmental factors, such as the type of background noise and spatial configuration of target and noise sources, and individual differences in the cognitive and linguistic abilities of listeners. Method Speech reception thresholds were assessed in 39 children aged 5-7 years in simulated school listening environments. Speech reception thresholds for target sentences consisting of number and color combinations, spoken by an adult male, were measured using an adaptive procedure, with speech-shaped white noise and single-talker backgrounds that were either collocated (target and background at 0°) or spatially separated (target at 0°, background noise at 90° to the right). Spatial release from masking was assessed alongside memory span and expressive language. Results and Conclusion Significant main effect results showed that speech reception thresholds were highest for informational maskers and collocated conditions. Significant interactions indicated that individual differences in memory span and language ability were related to spatial release from masking advantages. Specifically, individual differences in memory span and language were related to the utilization of spatial cues in separated conditions. Language differences were related to auditory stream segregation abilities in collocated conditions that lack helpful spatial cues, pointing to the utilization of language processes to make up for losses in spatial information.
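Spatial release from masking as assessed here is simply the threshold difference between the collocated and separated configurations. A minimal worked example with hypothetical SRT values (dB SNR):

```python
import numpy as np

def spatial_release(srt_collocated_db, srt_separated_db):
    """Spatial release from masking (dB): improvement in speech reception
    threshold when the masker is moved away from the target."""
    return np.asarray(srt_collocated_db) - np.asarray(srt_separated_db)

# hypothetical SRTs for three listeners: collocated vs. 90°-separated masker
srm = spatial_release([-2.0, 0.5, 1.0], [-8.0, -4.5, -3.0])
```

Positive values mean the listener benefited from the spatial separation; the study relates the size of this benefit to memory span and language ability.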


Assuntos
Individualidade , Memória de Curto Prazo/fisiologia , Mascaramento Perceptivo/fisiologia , Processamento Espacial/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica , Limiar Auditivo , Criança , Linguagem Infantil , Pré-Escolar , Sinais (Psicologia) , Feminino , Humanos , Linguística , Masculino , Ruído , África do Sul , Teste do Limiar de Recepção da Fala
10.
J Neurophysiol ; 122(2): 601-615, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31141449

ABSTRACT

When we grow older, understanding speech in noise becomes more challenging. Research has demonstrated the role of auditory temporal and cognitive deficits in these age-related speech-in-noise difficulties. To better understand the underlying neural mechanisms, we recruited young, middle-aged, and older normal-hearing adults and investigated the interplay between speech understanding, cognition, and neural tracking of the speech envelope using electroencephalography. The stimuli consisted of natural speech masked by speech-weighted noise or a competing talker and were presented at several subject-specific speech understanding levels. In addition to running speech, we recorded auditory steady-state responses at low modulation frequencies to assess the effect of age on nonspeech sounds. The results show that healthy aging resulted in a supralinear increase in the speech reception threshold, i.e., worse speech understanding, most pronounced for the competing talker. Similarly, advancing age was associated with a supralinear increase in envelope tracking, with a pronounced enhancement for older adults. Additionally, envelope tracking was found to increase with speech understanding, most apparent for older adults. Because we found that worse cognitive scores were associated with enhanced envelope tracking, our results support the hypothesis that enhanced envelope tracking in older adults is the result of a higher activation of brain regions for processing speech, compared with younger adults. From a cognitive perspective, this could reflect the inefficient use of cognitive resources, often observed in behavioral studies. Interestingly, the opposite effect of age was found for auditory steady-state responses, suggesting a complex interplay of different neural mechanisms with advancing age.

NEW & NOTEWORTHY: We measured neural tracking of the speech envelope across the adult lifespan and found a supralinear increase in envelope tracking with age.
Using a more ecologically valid approach than auditory steady-state responses, we found that young and older, as well as middle-aged, normal-hearing adults showed an increase in envelope tracking with increasing speech understanding and that this association is stronger for older adults.
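Envelope tracking of the sort measured here is commonly quantified by reconstructing the speech envelope from the EEG with a lagged linear decoder and correlating the reconstruction with the true envelope. A self-contained toy version with a simulated single-channel "EEG" and illustrative lag settings (not the study's pipeline):

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Columns are x advanced by 0..max_lag samples (circular for simplicity)."""
    return np.column_stack([np.roll(x, -k) for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
fs = 64                       # envelope-band sampling rate (Hz)
n = fs * 60                   # one minute of data
env = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
eeg = np.roll(env, 5) + 0.2 * rng.standard_normal(n)  # EEG tracks env at a 5-sample lag
X = lag_matrix(eeg, 10)
w, *_ = np.linalg.lstsq(X, env, rcond=None)           # backward decoder
r = np.corrcoef(X @ w, env)[0, 1]                     # tracking strength
```

In practice the decoder is multichannel and cross-validated; stronger `r` for older adults is the "enhanced envelope tracking" the abstract describes.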


Assuntos
Envelhecimento/fisiologia , Córtex Cerebral/fisiologia , Compreensão/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica , Adolescente , Adulto , Idoso , Idoso de 80 Anos ou mais , Eletroencefalografia , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Mascaramento Perceptivo/fisiologia , Psicolinguística , Adulto Jovem
11.
Sci Rep ; 9(1): 7605, 2019 05 20.
Article in English | MEDLINE | ID: mdl-31110202

ABSTRACT

The nature of interactions between the senses is a topic of intense interest in neuroscience, but an unresolved question is how sensory information from hearing and vision are combined when the two senses interact. A problem for testing auditory-visual interactions is devising stimuli and tasks that are equivalent in both modalities. Here we report a novel paradigm in which we first equated the discriminability of the stimuli in each modality, then tested how a distractor in the other modality affected performance. Participants discriminated pairs of amplitude-modulated tones or size-modulated visual objects in the form of a cuboid shape, alone or when a similarly modulated distractor stimulus of the other modality occurred with one of the pair. Discrimination of sound modulation depth was affected by a modulated cuboid only when their modulation rates were the same. In contrast, discrimination of cuboid modulation depth was little affected by an equivalently modulated sound. Our results suggest that what observers perceive when auditory and visual signals interact is not simply determined by the discriminability of the individual sensory inputs, but also by factors that increase the perceptual binding of these inputs, such as temporal synchrony.
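The amplitude-modulated tones used in such paradigms have a standard construction: a carrier multiplied by a sinusoidal envelope whose depth is the discrimination variable. A small sketch with illustrative parameter values:

```python
import numpy as np

def am_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated tone; modulation depth in [0, 1]."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# 500-Hz carrier, 4-Hz modulation, 50% depth
x = am_tone(fc=500, fm=4, depth=0.5, dur=1.0, fs=16_000)
```

The envelope swings between 1 - depth and 1 + depth, so depth discrimination amounts to detecting changes in that swing; a size-modulated cuboid is the visual analog.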


Assuntos
Audição/fisiologia , Visão Ocular/fisiologia , Estimulação Acústica/métodos , Adulto , Percepção Auditiva/fisiologia , Feminino , Testes Auditivos/métodos , Humanos , Masculino , Mascaramento Perceptivo/fisiologia , Estimulação Luminosa/métodos , Som , Percepção Visual/fisiologia , Adulto Jovem
12.
J Speech Lang Hear Res ; 62(4): 1068-1081, 2019 04 15.
Article in English | MEDLINE | ID: mdl-30986135

ABSTRACT

Purpose Understanding speech in complex realistic acoustic environments requires effort. In everyday listening situations, speech quality is often degraded due to adverse acoustics, such as excessive background noise level (BNL) and reverberation time (RT), or talker characteristics such as foreign accent (Mattys, Davis, Bradlow, & Scott, 2012). In addition to factors affecting the quality of the input acoustic signals, listeners' individual characteristics such as language abilities can also make it more difficult and effortful to understand speech. Based on the Framework for Understanding Effortful Listening (Pichora-Fuller et al., 2016), factors such as adverse acoustics, talker accent, and listener language abilities can all contribute to increasing listening effort. In this study, using both a dual-task paradigm and a self-report questionnaire, we seek to understand listening effort in a wide range of realistic classroom acoustic conditions as well as varying talker accent and listener English proficiency. Method One hundred fifteen native and nonnative adult listeners with normal hearing were tested in a dual task of speech comprehension and adaptive pursuit rotor (APR) under 15 acoustic conditions from combinations of BNLs and RTs. Listeners provided responses on the NASA Task Load Index (TLX) questionnaire immediately after completing the dual task under each acoustic condition. The NASA TLX surveyed 6 dimensions of perceived listening effort: mental demand, physical demand, temporal demand, effort, frustration, and perceived performance. Fifty-six listeners were tested with speech produced by native American English talkers; the other 59 listeners, with speech from native Mandarin Chinese talkers. Based on their 1st language learned during childhood, 3 groups of listeners were recruited: listeners who were native English speakers, native Mandarin Chinese speakers, and native speakers of other languages (e.g., Hindi, Korean, and Portuguese). 
Results Listening effort was measured objectively through the APR task performance and subjectively using the NASA TLX questionnaire. Performance on the APR task did not vary with changing acoustic conditions, but it did suggest increased listening effort for native listeners of other languages compared to the 2 other listener groups. From the NASA TLX, listeners reported feeling more frustrated and less successful in understanding Chinese-accented speech. Nonnative listeners reported more listening effort (i.e., physical demand, temporal demand, and effort) than native listeners in speech comprehension under adverse acoustics. When listeners' English proficiency was controlled, higher BNL was strongly related to a decrease in perceived performance, whereas such relationship with RT was much weaker. Nonnative listeners who shared the foreign talkers' accent reported no change in listening effort, whereas other listeners reported more difficulty in understanding the accented speech. Conclusions Adverse acoustics required more effortful listening as measured subjectively with the self-report NASA TLX. This subjective scale was more sensitive than a dual task involving speech comprehension beyond sentence recall. It was better at capturing the negative impacts on listening effort from acoustic factors (i.e., both BNL and RT), talker accent, and listener language abilities.


Assuntos
Compreensão , Fonética , Esforço Físico/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica/métodos , Adulto , Feminino , Humanos , Idioma , Masculino , Multilinguismo , Ruído , Mascaramento Perceptivo/fisiologia , Adulto Jovem
13.
J Speech Lang Hear Res ; 62(4): 853-867, 2019 04 15.
Article in English | MEDLINE | ID: mdl-30986136

ABSTRACT

Purpose Child phonologists have long been interested in how tightly speech input constrains the speech production capacities of young children, and the question acquires clinical significance when children with hearing loss are considered. Children with sensorineural hearing loss often show differences in the spectral and temporal structures of their speech production, compared to children with normal hearing. The current study was designed to investigate the extent to which this problem can be explained by signal degradation. Method Ten 5-year-olds with normal hearing were recorded imitating 120 three-syllable nonwords presented in unprocessed form and as noise-vocoded signals. Target segments consisted of fricatives, stops, and vowels. Several measures were made: 2 duration measures (voice onset time and fricative length) and 4 spectral measures involving 2 segments (1st and 3rd moments of fricatives and 1st and 2nd formant frequencies for the point vowels). Results All spectral measures were affected by signal degradation, with vowel production showing the largest effects. Although a change in voice onset time was observed with vocoded signals for /d/, voicing category was not affected. Fricative duration remained constant. Conclusions Results support the hypothesis that quality of the input signal constrains the speech production capacities of young children. Consequently, it can be concluded that the production problems of children with hearing loss, including those with cochlear implants, can be explained to some extent by the degradation in the signal they hear. However, experience with both speech perception and production likely plays a role as well.
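The spectral moment measures referenced here treat the normalized power spectrum as a distribution over frequency: the 1st moment is the spectral centroid and the 3rd (standardized) moment is skewness. A sketch on a white-noise "fricative" frame; the frame length and sampling rate are illustrative, not the authors' analysis settings:

```python
import numpy as np

def spectral_moments(frame, fs):
    """First (centroid, Hz) and third (skewness, unitless) spectral moments."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    f = np.fft.rfftfreq(frame.size, 1 / fs)
    p = p / p.sum()                               # normalize to a distribution
    m1 = (f * p).sum()                            # centroid
    mu2 = ((f - m1) ** 2 * p).sum()               # variance about centroid
    m3 = ((f - m1) ** 3 * p).sum() / mu2 ** 1.5   # standardized skewness
    return m1, m3

rng = np.random.default_rng(5)
fs = 16_000
m1, m3 = spectral_moments(rng.standard_normal(4096), fs)
```

For white noise the spectrum is flat, so the centroid sits near fs/4 and skewness near zero; a real /s/ would show a high centroid, and vocoding shifts these moments.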


Assuntos
Perda Auditiva Neurossensorial/fisiopatologia , Fonética , Percepção da Fala/fisiologia , Fala/fisiologia , Estimulação Acústica , Pré-Escolar , Feminino , Humanos , Masculino , Ruído , Mascaramento Perceptivo/fisiologia , Tempo de Reação , Medida da Produção da Fala
14.
Exp Psychol ; 66(1): 1-11, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30777512

ABSTRACT

The current research employed a classic irrelevant sound effect paradigm and investigated the talker-specific content of the irrelevant speech. Specifically, we aimed to determine if the participants' familiarity with the irrelevant speech's talker affected the magnitude of the irrelevant sound effect. Experiment 1 was an exploration of talker familiarity established in a natural listening environment (i.e., a university classroom) in which we manipulated the participants' relationships with the talker. In Experiment 2, we manipulated the participants' familiarity with the talker via 4 days of controlled exposure to the target talker's audio recordings. For both Experiments 1 and 2, a robust effect of irrelevant speech was found; however, regardless of the talker manipulation, talker familiarity did not influence the size of the effect. We interpreted the results within the processing view of the auditory distraction effect and highlighted the notion that talker familiarity may be more vulnerable than once thought.


Assuntos
Reconhecimento Psicológico/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica , Feminino , Humanos , Masculino , Mascaramento Perceptivo/fisiologia , Adulto Jovem
15.
Neuropsychologia ; 124: 108-116, 2019 02 18.
Article in English | MEDLINE | ID: mdl-30659864

ABSTRACT

The perceptual separation between a signal speech and a competing speech (masker), induced by the precedence effect, plays an important role in releasing the signal speech from the masker, especially in a reverberant environment. The perceptual-separation-induced unmasking effect has been suggested to involve multiple cognitive processes, such as selective attention. However, whether listeners' spatial attention modulates the perceptual-separation-induced unmasking effect is not clear. The present study investigated how perceptual separation and auditory spatial attention interact with each other to facilitate speech perception under a simulated noisy and reverberant environment by analyzing the cortical auditory evoked potentials to the signal speech. The results showed that the N1 wave was significantly enhanced by perceptual separation between the signal and masker regardless of whether the participants' spatial attention was directed to the signal or not. However, the P2 wave was significantly enhanced by perceptual separation only when the participants attended to the signal speech. The results indicate that the perceptual-separation-induced facilitation of P2 needs more attentional resource than that of N1. The results also showed that the signal speech caused an enhanced N1 in the contralateral hemisphere regardless of whether participants' attention was directed to the signal or not. In contrast, the signal speech caused an enhanced P2 in the contralateral hemisphere only when the participant attended to the signal. The results indicate that the hemispheric distribution of N1 is mainly affected by the perceptual features of the acoustic stimuli, while that of P2 is affected by the listeners' attentional status.


Assuntos
Atenção/fisiologia , Mascaramento Perceptivo/fisiologia , Processamento Espacial/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica , Adolescente , Adulto , Eletroencefalografia , Potenciais Evocados Auditivos , Feminino , Humanos , Masculino , Ruído , Adulto Jovem
16.
Ear Hear ; 40(4): 1009-1015, 2019.
Article in English | MEDLINE | ID: mdl-30557224

ABSTRACT

OBJECTIVES: The purpose of this study was to obtain an electrophysiological analog of masking release using speech-evoked cortical potentials in steady and modulated maskers and to relate this masking release to behavioral measures for the same stimuli. The hypothesis was that the evoked potentials can be tracked to a lower stimulus level in a modulated masker than in a steady masker and that the magnitude of this electrophysiological masking release is of the same order as that of the behavioral masking release for the same stimuli. DESIGN: Cortical potentials evoked by an 80-ms /ba/ stimulus were measured in two steady maskers (30 and 65 dB SPL), and in a masker that modulated between these two levels at a rate of 25 Hz. In each masker, a level series was undertaken to determine electrophysiological threshold. Behavioral detection thresholds were determined in the same maskers using an adaptive tracking procedure. Masking release was defined as the difference between signal thresholds measured in the steady 65-dB SPL masker and the modulated masker. A total of 23 normal-hearing adults participated. RESULTS: Electrophysiological thresholds were uniformly elevated relative to behavioral thresholds by about 6.5 dB. However, the magnitude of masking release was about 13.5 dB for both measurement domains. CONCLUSIONS: Electrophysiological measures of masking release using speech-evoked cortical auditory evoked potentials correspond closely to behavioral estimates for the same stimuli. This suggests that objective measures based on electrophysiological techniques can be used to reliably gauge aspects of temporal processing ability.
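The adaptive tracking used for the behavioral thresholds here is typically a transformed up-down staircase; masking release is then just the threshold difference between the steady and modulated maskers. A compact 2-down/1-up sketch with an idealized deterministic listener (illustrative only; not the study's exact procedure):

```python
import numpy as np

def two_down_one_up(respond, start_db, step_db=2.0, n_reversals=8):
    """2-down/1-up adaptive track; converges near the 70.7%-correct point
    of the psychometric function (Levitt, 1971)."""
    level, correct_run, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:          # two correct in a row: go down
                correct_run = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                             # one incorrect: go up
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db
    return np.mean(reversals[2:])         # discard early reversals

# idealized listener: always correct at or above 50 dB, never below
threshold = two_down_one_up(lambda level: level >= 50, start_db=60)
```

Running one track in each masker and subtracting the two threshold estimates gives the behavioral masking release the study compares against the electrophysiological one.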


Subjects
Evoked Potentials, Auditory/physiology, Perceptual Masking/physiology, Signal Detection, Psychological, Speech, Acoustic Stimulation, Female, Healthy Volunteers, Humans, Male, Psychomotor Performance, Young Adult
17.
J Speech Lang Hear Res ; 61(12): 3113-3126, 2018 12 10.
Article in English | MEDLINE | ID: mdl-30515519

ABSTRACT

Purpose: This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes. Method: A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested under conditions of unaided and aided, auditory-only and auditory-visual, and 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise. Results: A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding. Conclusions: The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.
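The two-factor result above can be illustrated with a minimal principal component analysis. This sketch uses simulated data: the two-factor structure, participant count, condition names, and noise level are assumptions chosen to mimic the reported pattern, not the study's dataset.

```python
import numpy as np

# Simulated modulation-depth detection thresholds across six ripple
# conditions, with a shared "broadband/low-pass" factor and a
# separate "high-pass" factor, echoing the reported structure.
rng = np.random.default_rng(0)
n = 67  # participants (matches the study's sample size)

f1 = rng.normal(size=n)  # broadband + low-pass sensitivity factor
f2 = rng.normal(size=n)  # high-pass sensitivity factor
conditions = {
    "broadband_static": f1, "broadband_moving": f1,
    "lowpass_static":   f1, "lowpass_moving":   f1,
    "highpass_static":  f2, "highpass_moving":  f2,
}
X = np.column_stack([v + 0.3 * rng.normal(size=n)
                     for v in conditions.values()])

# PCA via eigendecomposition of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)        # ascending order
order = np.argsort(eigvals)[::-1]           # sort descending
loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

# Broadband/low-pass conditions load heavily on the first component;
# high-pass conditions load mostly on the second.
for name, load in zip(conditions, loadings[:, 0]):
    print(f"{name:18s} PC1 loading: {load:+.2f}")
```

With real threshold data the same loading pattern is what would motivate collapsing the broadband and low-pass conditions into a single predictor for the mixed model.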


Subjects
Acoustic Stimulation/psychology, Hearing Aids/psychology, Perceptual Masking/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Audiometry, Speech, Auditory Threshold, Female, Humans, Male, Middle Aged, Noise, Non-Randomized Controlled Trials as Topic, Young Adult
18.
J Vis ; 18(11): 18, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30372728

ABSTRACT

Is perception of translucence based on estimations of scattering and absorption of light or on statistical pseudocues associated with familiar materials? We compared perceptual performance with real and computer-generated stimuli. Real stimuli were glasses of milky tea. Milk predominantly scatters light and tea absorbs it, but since the tea absorbs less as the milk concentration increases, the effects of milkiness and strength on scattering and absorption are not independent. Conversely, computer-generated stimuli were glasses of "milky tea" in which absorption and scattering were independently manipulated. Observers judged tea concentrations regardless of milk concentrations, or vice versa. Maximum-likelihood conjoint measurement was used to estimate the contributions of each physical component-concentrations of milk and tea, or amounts of scattering and absorption-to perceived milkiness or tea strength. Separability of the two physical dimensions was better for real than for computer-generated teas, suggesting that interactions between scattering and absorption were correctly accounted for in perceptual unmixing, but unmixing was always imperfect. Since the real and rendered stimuli represent different physical processes and therefore differ in their image statistics, perceptual judgments with these stimuli allowed us to identify particular pseudocues (presumably learned with real stimuli) that explain judgments with both stimulus sets.


Subjects
Absorption, Radiation/physiology, Milk/chemistry, Perceptual Masking/physiology, Scattering, Radiation, Tea/chemistry, Animals, Humans, Light, Physical Phenomena
19.
J Acoust Soc Am ; 144(1): 267, 2018 07.
Article in English | MEDLINE | ID: mdl-30075693

ABSTRACT

In realistic listening environments, speech perception requires grouping together audible fragments of speech, filling in missing information, and segregating the glimpsed target from the background. The purpose of this study was to determine the extent to which age-related difficulties with these tasks can be explained by declines in glimpsing, phonemic restoration, and/or speech segregation. Younger and older adults with normal hearing listened to sentences interrupted with silence or envelope-modulated noise, presented either in quiet or with a competing talker. Older adults were poorer than younger adults at recognizing keywords based on short glimpses but benefited more when envelope-modulated noise filled silent intervals. Recognition declined with a competing talker but this effect did not interact with age. Results of cognitive tasks indicated that faster processing speed and better visual-linguistic closure were predictive of better speech understanding. Taken together, these results suggest that age-related declines in speech recognition may be partially explained by difficulty grouping short glimpses of speech into a coherent message.


Subjects
Age Factors, Hearing/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Aged, Aged, 80 and over, Auditory Perception/physiology, Cognition/physiology, Comprehension/physiology, Female, Hearing Tests, Humans, Male, Middle Aged, Noise, Perceptual Masking/physiology
20.
Sci Rep ; 8(1): 11910, 2018 08 09.
Article in English | MEDLINE | ID: mdl-30093692

ABSTRACT

Previous studies in speech production and acquisition have mainly focused on how feedback vs. goals and feedback vs. prediction regulate learning and speech control. The present study investigated the less studied mechanism, prediction vs. goals, in the context of adult Mandarin speakers' acquisition of non-native sounds, using an auditory feedback masking paradigm. Participants were asked to learn two types of non-native vowels, /ø/ and /ɵ/, the former being less similar than the latter to Mandarin vowels, in either feedback-available or feedback-masked conditions. The results show that there was no significant improvement in learning the two targets when auditory feedback was masked. This suggests that motor-based prediction could not be compared directly against sensory goals for adult second language acquisition. Furthermore, auditory feedback can support learning only if the competition between prediction and goals is minimal, i.e., when target sounds are distinct from existing sounds in one's native speech. The results suggest that motor-based prediction and sensory goals may share a similar neural representational format, which could produce competition for neural resources in speech learning; feedback can conditionally overcome such interference between prediction and goals. Hence, the present study further probed the functional relations among the key components (prediction, goals, and feedback) of sensorimotor integration in speech learning.


Subjects
Feedback, Sensory/physiology, Language, Learning/physiology, Phonetics, Speech Acoustics, Speech Perception/physiology, Acoustic Stimulation, Adult, China, Female, Humans, Male, Perceptual Masking/physiology, Speech/physiology