Results 1 - 16 of 16
1.
Ear Hear; 2024 May 20.
Article in English | MEDLINE | ID: mdl-38768048

ABSTRACT

OBJECTIVE: Children with hearing loss experience greater difficulty understanding speech in the presence of noise and reverberation relative to their normal hearing peers despite provision of appropriate amplification. The fidelity of encoding of the fundamental frequency of voice (f0), a salient temporal cue for understanding speech in noise, could play a significant role in explaining the variance in abilities among children. However, the nature of deficits in f0 encoding and its relationship with speech understanding are poorly understood. To this end, we evaluated the influence of frequency-specific f0 encoding on speech perception abilities of children with and without hearing loss in the presence of noise and/or reverberation. METHODS: In 14 school-aged children with sensorineural hearing loss fitted with hearing aids and 29 normal hearing peers, envelope following responses (EFRs) were elicited by the vowel /i/, modified to estimate f0 encoding in low (<1.1 kHz) and higher frequencies simultaneously. EFRs to /i/ were elicited in quiet, in the presence of speech-shaped noise at +5 dB signal-to-noise ratio, with a simulated reverberation time of 0.62 sec, as well as with both noise and reverberation. EFRs were recorded using single-channel electroencephalogram between the vertex and the nape while children watched a silent movie with captions. Speech discrimination accuracy was measured using the University of Western Ontario Distinctive Features Differences test in each of the four acoustic conditions. Stimuli for EFR recordings and speech discrimination were presented monaurally. RESULTS: Both groups of children demonstrated a frequency-dependent dichotomy in the disruption of f0 encoding, as reflected in EFR amplitude and phase coherence. Greater disruption (i.e., lower EFR amplitudes and phase coherence) was evident in EFRs elicited by low frequencies due to noise, and greater disruption was evident in EFRs elicited by higher frequencies due to reverberation. Relative to normal hearing peers, children with hearing loss demonstrated (a) greater disruption of f0 encoding at low frequencies, particularly in the presence of reverberation, and (b) a positive relationship between f0 encoding at low frequencies and speech discrimination in the hardest listening condition (i.e., when both noise and reverberation were present). CONCLUSIONS: Together, these results provide new evidence for the persistence of suprathreshold temporal processing deficits related to f0 encoding in children despite the provision of appropriate amplification to compensate for hearing loss. These objectively measurable deficits may underlie the greater difficulty experienced by children with hearing loss.
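The noise condition described above (the vowel presented in speech-shaped noise at a +5 dB signal-to-noise ratio) reduces to scaling a noise signal against the target before mixing. The sketch below illustrates only that SNR-mixing step; the function name, the synthetic tone-complex stand-in for the vowel, and the white-noise stand-in for speech-shaped noise are assumptions for illustration, not the authors' stimuli or code. The reverberant conditions would additionally require convolving the stimulus with a simulated room impulse response.

    import numpy as np

    def mix_at_snr(target, noise, snr_db):
        """Scale `noise` so the target-to-noise power ratio equals `snr_db`,
        then return the mixture (illustrative sketch, not the study's pipeline)."""
        p_target = np.mean(target ** 2)   # mean power of the target
        p_noise = np.mean(noise ** 2)     # mean power of the noise
        # SNR(dB) = 10*log10(p_target / (gain**2 * p_noise))  =>  solve for gain
        gain = np.sqrt(p_target / (p_noise * 10 ** (snr_db / 10)))
        return target + gain * noise

    fs = 44100
    t = np.arange(fs) / fs                              # 1 s of signal
    vowel = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 11))  # 100 Hz harmonic complex
    noise = np.random.randn(len(t))                     # white-noise stand-in for speech-shaped noise
    mixture = mix_at_snr(vowel, noise, snr_db=5.0)      # +5 dB SNR, as in the study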

2.
Clin Linguist Phon; 1-21, 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38679889

ABSTRACT

Children with cochlear implants (CIs) communicate in noisy environments, such as classrooms, where multiple talkers and reverberation are present. Speakers compensate for noise via the 'Lombard effect'. The present study examined the Lombard effect on the intensity and duration of stressed vowels in the speech of children with CIs as compared to children with normal hearing (NH), focusing on the effects of speech-shaped noise (SSN) and speech-shaped noise with reverberation (SSN+Reverberation). The sample consisted of 7 children with CIs and 7 children with NH, aged 7-12 years. Regarding intensity, (a) children with CIs produced stressed vowels with an overall greater intensity across acoustic conditions as compared to NH peers, (b) both groups increased their stressed vowel intensity for all vowels from Quiet to both noise conditions, and (c) children with NH further increased their intensity when reverberation was added to SSN, especially for the vowel /u/. Regarding duration, longer stressed vowels were produced by children with CIs as compared to children with NH in the Quiet and SSN conditions, but the effect was retained only for the vowels /i/, /o/, and /u/ when reverberation was added to noise. The SSN+Reverberation condition induced systematic lengthening of stressed vowels for children with NH. Furthermore, although greater intensity and duration ratios of stressed/unstressed syllables were observed for children with NH as compared to children with CIs in the Quiet condition, these ratios diminished with noise. The differences observed across groups have implications for speaking in classroom noise.

3.
J Acoust Soc Am; 155(2): 1559-1569, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38393738

ABSTRACT

This study examined the role of visual speech in providing release from perceptual masking in children by comparing visual speech benefit across conditions with and without a spatial separation cue. Auditory-only and audiovisual speech recognition thresholds in a two-talker speech masker were obtained from 21 children with typical hearing (7-9 years of age) using a color-number identification task. The target was presented from a loudspeaker at 0° azimuth. Masker source location varied across conditions. In the spatially collocated condition, the masker was also presented from the loudspeaker at 0° azimuth. In the spatially separated condition, the masker was presented from the loudspeaker at 0° azimuth and a loudspeaker at -90° azimuth, with the signal from the -90° loudspeaker leading the signal from the 0° loudspeaker by 4 ms. The visual stimulus (static image or video of the target talker) was presented at 0° azimuth. Children achieved better thresholds when the spatial cue was provided and when the visual cue was provided. Visual and spatial cue benefit did not differ significantly depending on the presence of the other cue. Additional studies are needed to characterize how children's preferential use of visual and spatial cues varies depending on the strength of each cue.


Subjects
Perceptual Masking, Speech Perception, Child, Humans, Cues, Noise, Hearing
4.
J Acoust Soc Am; 155(2): 1071-1085, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38341737

ABSTRACT

Children's speech understanding is vulnerable to indoor noise and reverberation, such as those found in classrooms. It is unknown how they develop the ability to use temporal acoustic cues, specifically amplitude modulation (AM) and voice onset time (VOT), which are important for perceiving distorted speech. Through three experiments, we investigated the typical development of AM depth detection in vowels (experiment I), categorical perception of VOT (experiment II), and consonant identification (experiment III) in quiet and in speech-shaped noise (SSN) and mild reverberation in 6- to 14-year-old children. Our findings suggested that AM depth detection using a naturally produced vowel at the rate of the fundamental frequency was particularly difficult for children, and more so under acoustic distortions. While the salience of the VOT cue was monotonically attenuated with decreasing signal-to-noise ratio in SSN, its utility for consonant discrimination was completely removed even under mild reverberation. The role of reverberant energy decay in distorting critical temporal cues provided further evidence that may explain the error patterns observed in consonant identification. By 11-14 years of age, children approached adult-like performance in consonant discrimination and identification under adverse acoustics, emphasizing the need for good acoustics for younger children as they develop the auditory skills to process distorted speech in everyday listening environments.


Subjects
Speech Perception, Voice, Adult, Child, Humans, Adolescent, Noise/adverse effects, Acoustics, Speech
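Amplitude modulation depth, the cue probed in experiment I above, can be illustrated by imposing sinusoidal modulation on a carrier at an f0-like rate. The sketch below is a simplified illustration of that stimulus dimension using a synthetic carrier; the function name and parameter values are assumptions and do not reproduce the naturally produced vowel stimuli used in the study.

    import numpy as np

    def apply_am(carrier, fs, rate_hz, depth):
        """Impose sinusoidal amplitude modulation at `rate_hz` with modulation
        depth `depth` (0 = unmodulated, 1 = fully modulated). Illustrative only."""
        t = np.arange(len(carrier)) / fs
        modulator = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)
        return carrier * modulator / (1.0 + depth)   # normalize so peak gain does not exceed 1

    fs = 44100
    t = np.arange(fs) / fs
    carrier = np.sin(2 * np.pi * 1000 * t)           # synthetic 1 kHz carrier
    modulated = apply_am(carrier, fs, rate_hz=100.0, depth=0.5)  # 100 Hz (f0-like) rate, 50% depth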
5.
J Acoust Soc Am; 154(2): 751-762, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37556566

ABSTRACT

Web-based testing is an appealing option for expanding psychoacoustics research outside laboratory environments due to its simple logistics. For example, research participants complete listening tasks using their own computer and audio hardware, in a comfortable environment of their choice and at their own pace. However, it is unknown how deviations from conventional in-lab testing affect data quality, particularly in binaural hearing tasks that traditionally require highly precise audio presentation. Here, we used an online platform to replicate two published in-lab experiments: lateralization to interaural time and level differences (ITD and ILD, experiment I) and dichotic and contralateral unmasking of speech (experiment II) in normal-hearing (NH) young adults. Lateralization data collected online were strikingly similar to in-lab results. Likewise, the amount of unmasking measured online and in-lab differed by less than 1 dB, although online participants demonstrated overall speech reception thresholds up to ∼7 dB higher than those tested in-lab. Results from online participants who completed a hearing screening versus those who self-reported NH did not differ significantly. We conclude that web-based psychoacoustic testing is a viable option for assessing binaural hearing abilities among young NH adults and discuss important considerations for online study design.


Subjects
Speech Perception, Young Adult, Humans, Psychoacoustics, Hearing, Auditory Perception, Internet
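The lateralization task in experiment I above relies on stimuli carrying controlled interaural time and level differences. A headphone ITD/ILD stimulus can be sketched by delaying and attenuating one channel relative to the other, as below; the function name, sign conventions, and whole-sample delay (rather than fractional-sample interpolation) are simplifying assumptions for illustration, not the platform or code used in the study.

    import numpy as np

    def apply_itd_ild(mono, fs, itd_us, ild_db):
        """Return (left, right) channels carrying the requested interaural time
        difference (microseconds) and level difference (dB); positive values
        favor the left ear. Illustrative sketch using a whole-sample delay."""
        shift = int(round(abs(itd_us) * 1e-6 * fs))
        delayed = np.concatenate([np.zeros(shift), mono])[:len(mono)]
        left, right = (mono, delayed) if itd_us >= 0 else (delayed, mono)
        gain = 10 ** (abs(ild_db) / 20)
        if ild_db >= 0:
            left = left * gain
        else:
            right = right * gain
        return left, right

    fs = 48000
    tone = np.sin(2 * np.pi * 500 * np.arange(int(0.3 * fs)) / fs)   # 300 ms, 500 Hz tone
    left, right = apply_itd_ild(tone, fs, itd_us=500.0, ild_db=0.0)  # 500 microsecond ITD, no ILD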
7.
Eur J Neurosci; 58(2): 2547-2562, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37203275

ABSTRACT

Environmental noise and reverberation challenge speech understanding more significantly in children than in adults. However, the neural/sensory basis for the difference is poorly understood. We evaluated the impact of noise and reverberation on the neural processing of the fundamental frequency of voice (f0), an important cue to tag or recognize a speaker. In a group of 39 children aged 6 to 15 years and 26 adults with normal hearing, envelope following responses (EFRs) were elicited by a male-spoken /i/ in quiet, noise, reverberation, and both noise and reverberation. Due to increased resolvability of harmonics at lower compared with higher vowel formants that may affect susceptibility to noise and/or reverberation, the /i/ was modified to elicit two EFRs: one initiated by the low-frequency first formant (F1) and the other initiated by mid to high frequency second and higher formants (F2+) with predominantly resolved and unresolved harmonics, respectively. F1 EFRs were more susceptible to noise whereas F2+ EFRs were more susceptible to reverberation. Reverberation resulted in greater attenuation of F1 EFRs in adults than children, and greater attenuation of F2+ EFRs in older than younger children. Reduced modulation depth caused by reverberation and noise explained changes in F2+ EFRs but was not the primary determinant for F1 EFRs. Experimental data paralleled modelled EFRs, especially for F1. Together, data suggest that noise or reverberation influences the robustness of f0 encoding depending on the resolvability of vowel harmonics and that maturation of processing temporal/envelope information of voice is delayed in reverberation, particularly for low-frequency stimuli.


Subjects
Speech Perception, Humans, Adult, Male, Child, Aged, Adolescent, Speech Perception/physiology, Noise, Speech
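Both this study and the one in entry 1 quantify f0 encoding through EFR amplitude and phase coherence. The sketch below shows one standard way phase coherence is computed across recorded sweeps (the length of the mean unit phasor at the f0 frequency bin); the toy data, function name, and analysis parameters are assumptions for illustration, and the studies' actual analysis details may differ.

    import numpy as np

    def phase_coherence(sweeps, fs, f0):
        """Phase coherence of an evoked response at frequency `f0` across sweeps:
        the length of the mean unit phasor at the f0 FFT bin (0 = random phase,
        1 = perfectly phase-locked). One standard definition, shown for illustration."""
        sweeps = np.asarray(sweeps)                    # shape: (n_sweeps, n_samples)
        spectra = np.fft.rfft(sweeps, axis=1)
        freqs = np.fft.rfftfreq(sweeps.shape[1], d=1 / fs)
        k = np.argmin(np.abs(freqs - f0))              # FFT bin closest to f0
        phasors = spectra[:, k] / np.abs(spectra[:, k])
        return np.abs(np.mean(phasors))

    # Toy example: 100 sweeps of a noisy 100 Hz response sampled at 8 kHz
    fs, f0, n = 8000, 100.0, 1600
    t = np.arange(n) / fs
    sweeps = [np.sin(2 * np.pi * f0 * t) + 2.0 * np.random.randn(n) for _ in range(100)]
    print(phase_coherence(sweeps, fs, f0))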
8.
Otol Neurotol; 44(1): 21-25, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36509434

ABSTRACT

OBJECTIVE: To investigate hearing preservation and spatial hearing outcomes in children with TMPRSS3 mutations who received bilateral cochlear implantation. STUDY DESIGN AND METHODS: Longitudinal case series report. Two siblings (ages 7 and 4 years) with TMPRSS3 mutations and down-sloping audiograms received sequential bilateral cochlear implantation with hearing preservation, combining low-frequency acoustic amplification with high-frequency electrical stimulation. Spatial hearing, including speech perception and localization, was assessed at three time points: preoperatively and postoperatively after the first and the second cochlear implant (CI). RESULTS: Both children showed low-frequency hearing preservation in unaided, acoustic-only audiograms. Both children demonstrated improvements in speech perception in both quiet and noise after CI activation. The emergence of spatial hearing was observed. Each child's overall speech perception and spatial hearing when listening with bilateral CIs were within the range of, or better than, published group data from children with bilateral CIs of other etiologies. CONCLUSION: Bilateral cochlear implantation with hearing preservation is a viable option for managing hearing loss in pediatric patients with TMPRSS3 mutations.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Humans, Child, Speech Perception/physiology, Hearing/genetics, Deafness/rehabilitation, Membrane Proteins, Neoplasm Proteins, Serine Endopeptidases/genetics
9.
J Acoust Soc Am; 151(5): 3116, 2022 May.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments at the cost of reduced control of environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors from relatively smaller sample sizes and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online in the form of a set of Wiki pages and summarized in this report, which outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.


Subjects
Acoustics, Auditory Perception, Attention/physiology, Humans, Prospective Studies, Sound
10.
Ear Hear; 43(1): 101-114, 2022.
Article in English | MEDLINE | ID: mdl-34133400

ABSTRACT

OBJECTIVES: To investigate the role of auditory cues for spatial release from masking (SRM) in children with bilateral cochlear implants (BiCIs) and compare their performance with that of children with normal hearing (NH). To quantify the contributions to speech intelligibility benefits from individual auditory cues (head shadow, binaural redundancy, and interaural differences) as well as from multiple cues (SRM and binaural squelch). To assess SRM using a novel approach of adaptive target-masker angular separation, which provides a more functionally relevant assessment in realistic complex auditory environments. DESIGN: Children fitted with BiCIs (N = 11) and with NH (N = 18) were tested in virtual acoustic space that was simulated using head-related transfer functions measured from individual children with BiCIs behind the ear and from a standard head and torso simulator for all NH children. In experiment I, by comparing speech reception thresholds across 4 test conditions that varied in target-masker spatial separation (colocated versus separated by 180°) and listening conditions (monaural versus binaural/bilateral listening), intelligibility benefits were derived for individual auditory cues for SRM. In experiment II, SRM was quantified using a novel measure to find the minimum angular separation (MAS) between the target and masker to achieve a fixed 20% intelligibility improvement. Target speech was fixed at either +90 or -90° azimuth on the side closer to the better ear (+90° for all NH children), and masker locations were adaptively varied. RESULTS: In experiment I, children with BiCIs as a group had smaller intelligibility benefits from head shadow than NH children. No group difference was observed in benefits from binaural redundancy or interaural difference cues. In both groups of children, individuals who gained a larger benefit from interaural differences relied less on monaural head shadow, and vice versa. In experiment II, all children with BiCIs demonstrated measurable MAS thresholds <180°, which were on average larger than those of NH children. Eight of 11 children with BiCIs and all NH children had a MAS threshold <90°, requiring only interaural differences to gain the target intelligibility benefit, whereas the other 3 children with BiCIs had a MAS between 120° and 137°, requiring monaural head shadow for SRM. CONCLUSIONS: When target and maskers were separated by 180° in opposing hemifields, children with BiCIs demonstrated greater intelligibility benefits from head shadow and interaural differences than previous literature showed with a smaller separation. Children with BiCIs demonstrated individual differences in using auditory cues for SRM. From the MAS thresholds, more than half of the children with BiCIs demonstrated robust access to interaural differences without needing additional monaural head shadow for SRM. Both experiments led to the conclusion that individualized fitting strategies in the bilateral devices may be warranted to maximize spatial hearing for children with BiCIs in complex auditory environments.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Child, Hearing, Humans, Perceptual Masking, Speech Intelligibility
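The intelligibility benefits derived in experiment I above come from differences between speech reception thresholds (SRTs) measured in contrasting conditions. The sketch below shows one common way such contrasts are computed from SRTs in dB; the condition labels, the hypothetical SRT values, and the particular pairing used for each cue are illustrative assumptions and are not necessarily the exact contrasts used in this study.

    # Hypothetical SRTs (dB); lower is better. Benefits are expressed as
    # SRT(reference condition) - SRT(test condition), so positive = improvement.
    srt = {
        "colocated_bilateral": -2.0,
        "separated_bilateral": -8.0,
        "colocated_monaural": 0.5,
        "separated_monaural": -4.0,   # monaural ear acoustically shadowed from the masker
    }

    srm = srt["colocated_bilateral"] - srt["separated_bilateral"]         # spatial release from masking
    redundancy = srt["colocated_monaural"] - srt["colocated_bilateral"]   # binaural redundancy
    squelch = srt["separated_monaural"] - srt["separated_bilateral"]      # binaural squelch
    head_shadow = srt["colocated_monaural"] - srt["separated_monaural"]   # monaural head shadow

    print(f"SRM = {srm:.1f} dB, redundancy = {redundancy:.1f} dB, "
          f"squelch = {squelch:.1f} dB, head shadow = {head_shadow:.1f} dB")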
11.
J Acoust Soc Am; 150(5): 3263, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34852617

ABSTRACT

Understanding speech in noisy environments, such as classrooms, is a challenge for children. When a spatial separation is introduced between the target and masker, as compared to when both are co-located, children demonstrate intelligibility improvement of the target speech. Such intelligibility improvement is known as spatial release from masking (SRM). In most reverberant environments, binaural cues associated with the spatial separation are distorted; the extent to which such distortion will affect children's SRM is unknown. Two virtual acoustic environments with reverberation times between 0.4 s and 1.1 s were compared. SRM was measured using a spatial separation with symmetrically displaced maskers to maximize access to binaural cues. The role of informational masking in modulating SRM was investigated through voice similarity between the target and masker. Results showed that, contrary to previous developmental findings on free-field SRM, children's SRM in reverberation has not yet reached maturity in the 7-12 years age range. When reverberation was reduced, an SRM improvement was seen in adults but not in children. Our findings suggest that, even though school-age children have access to binaural cues that are distorted in reverberation, they demonstrate immature use of such cues for speech-in-noise perception, even in mild reverberation.


Subjects
Perceptual Masking, Speech Perception, Acoustics, Adult, Child, Humans, Noise/adverse effects, Schools, Speech Intelligibility
12.
Children (Basel); 7(11), 2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171753

ABSTRACT

The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues to investigate behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanism of SIN perception in simulated real-life acoustic scenarios. Here, we present first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and target-distractors' spatial separation, particularly when the pitch of the target and the distractors was similar. On the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool to investigate novel research questions in simulated real-life listening. Future modified VAE-fNIRS applications are warranted to replicate the current findings and to validate its application in research and clinical settings.

13.
PLoS One; 15(8): e0238125, 2020.
Article in English | MEDLINE | ID: mdl-32822439

ABSTRACT

The majority of psychoacoustic research investigating sound localization has utilized stationary sources, yet most naturally occurring sounds are in motion, either because the sound source itself moves, or the listener does. In normal hearing (NH) listeners, previous research showed the extent to which sound duration and velocity impact the ability of listeners to detect sound movement. By contrast, little is known about how listeners with hearing impairments perceive moving sounds; the only study to date comparing the performance of NH and bilateral cochlear implant (BiCI) listeners has demonstrated significantly poorer performance on motion detection tasks in BiCI listeners. Cochlear implants, auditory prostheses offered to profoundly deaf individuals for access to spoken language, retain the signal envelope (ENV), while discarding the temporal fine structure (TFS) of the original acoustic input. As a result, BiCI users do not have access to low-frequency TFS cues, which have previously been shown to be crucial for sound localization in NH listeners. Instead, BiCI listeners seem to rely on ENV cues for sound localization, especially level cues. Given that NH and BiCI listeners differentially utilize ENV and TFS information, the present study aimed to investigate the usefulness of these cues for auditory motion perception. We created acoustic chimaera stimuli, which allowed us to test the relative contributions of ENV and TFS to auditory motion perception. Stimuli were either moving or stationary, presented to NH listeners in free field. The task was to track the perceived sound location. We found that removing low-frequency TFS reduced sensitivity to sound motion and that fluctuating speech envelopes strongly biased the judgment of sounds to be stationary. Our findings yield a possible explanation as to why BiCI users struggle to identify sound motion, and provide a first account of cues important to the functional aspect of auditory motion perception.


Subjects
Auditory Perception/physiology, Motion Perception/physiology, Sound Localization/physiology, Acoustic Stimulation/methods, Adult, Auditory Threshold/physiology, Cochlear Implantation/rehabilitation, Cochlear Implants, Cues, Female, Hearing, Hearing Loss/physiopathology, Hearing Tests, Humans, Male, Motion, Persons With Hearing Impairments/rehabilitation, Psychoacoustics, Sound, Speech Perception/physiology
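The chimaera manipulation described above pairs the envelope (ENV) of one signal with the temporal fine structure (TFS) of another, typically band by band. The sketch below shows the single-band Hilbert decomposition underlying that manipulation on synthetic signals; the signals, the single-band treatment, and the function name are illustrative assumptions rather than the study's actual stimulus-generation code.

    import numpy as np
    from scipy.signal import hilbert

    def envelope_and_tfs(x):
        """Split a narrowband signal into its Hilbert envelope (ENV) and
        temporal fine structure (TFS). Chimaera synthesis applies this per
        frequency band and then recombines bands; shown here for one band."""
        analytic = hilbert(x)
        env = np.abs(analytic)             # slow amplitude fluctuation (ENV)
        tfs = np.cos(np.angle(analytic))   # rapid carrier fluctuation (TFS)
        return env, tfs

    fs = 16000
    t = np.arange(int(0.5 * fs)) / fs
    a = np.sin(2 * np.pi * 300 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))  # modulated tone
    b = np.sin(2 * np.pi * 500 * t)                                          # unmodulated tone
    env_a, _ = envelope_and_tfs(a)
    _, tfs_b = envelope_and_tfs(b)
    chimera_band = env_a * tfs_b            # ENV of `a` carried on the TFS of `b`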
14.
Front Syst Neurosci; 14: 39, 2020.
Article in English | MEDLINE | ID: mdl-32733212

ABSTRACT

Children localize sounds using binaural cues when navigating everyday auditory environments. While sensitivity to binaural cues reaches maturity by 8-10 years of age, large individual variability has been observed in the just-noticeable-difference (JND) thresholds for interaural time difference (ITD) among children in this age range. To understand the development of binaural sensitivity beyond JND thresholds, the "looking-while-listening" paradigm was adapted in this study to reveal the real-time decision-making behavior during ITD processing. Children ages 8-14 years with normal hearing (NH) and a group of young NH adults were tested. This novel paradigm combined eye gaze tracking with behavioral psychoacoustics to estimate ITD JNDs in a two-alternative forced-choice discrimination task. Results from simultaneous eye gaze recordings during ITD processing suggested that children had adult-like ITD JNDs, but they demonstrated immature decision-making strategies. While the time course of arriving at the initial fixation and final decision in providing a judgment of the ITD direction was similar, children exhibited more uncertainty than adults during decision-making. Specifically, children made more fixation changes, particularly when tested using small ITD magnitudes, between the target and non-target response options prior to finalizing a judgment. These findings suggest that, while children may exhibit adult-like sensitivity to ITDs, their eye gaze behavior reveals that the processing of this binaural cue is still developing through late childhood.

15.
J Speech Lang Hear Res; 62(4): 1068-1081, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30986135

ABSTRACT

Purpose: Understanding speech in complex realistic acoustic environments requires effort. In everyday listening situations, speech quality is often degraded due to adverse acoustics, such as excessive background noise level (BNL) and reverberation time (RT), or talker characteristics such as foreign accent (Mattys, Davis, Bradlow, & Scott, 2012). In addition to factors affecting the quality of the input acoustic signals, listeners' individual characteristics such as language abilities can also make it more difficult and effortful to understand speech. Based on the Framework for Understanding Effortful Listening (Pichora-Fuller et al., 2016), factors such as adverse acoustics, talker accent, and listener language abilities can all contribute to increasing listening effort. In this study, using both a dual-task paradigm and a self-report questionnaire, we seek to understand listening effort in a wide range of realistic classroom acoustic conditions as well as varying talker accent and listener English proficiency. Method: One hundred fifteen native and nonnative adult listeners with normal hearing were tested in a dual task of speech comprehension and adaptive pursuit rotor (APR) under 15 acoustic conditions from combinations of BNLs and RTs. Listeners provided responses on the NASA Task Load Index (TLX) questionnaire immediately after completing the dual task under each acoustic condition. The NASA TLX surveyed 6 dimensions of perceived listening effort: mental demand, physical demand, temporal demand, effort, frustration, and perceived performance. Fifty-six listeners were tested with speech produced by native American English talkers; the other 59 listeners, with speech from native Mandarin Chinese talkers. Based on their 1st language learned during childhood, 3 groups of listeners were recruited: listeners who were native English speakers, native Mandarin Chinese speakers, and native speakers of other languages (e.g., Hindi, Korean, and Portuguese). Results: Listening effort was measured objectively through the APR task performance and subjectively using the NASA TLX questionnaire. Performance on the APR task did not vary with changing acoustic conditions, but it did suggest increased listening effort for native listeners of other languages compared to the 2 other listener groups. From the NASA TLX, listeners reported feeling more frustrated and less successful in understanding Chinese-accented speech. Nonnative listeners reported more listening effort (i.e., physical demand, temporal demand, and effort) than native listeners in speech comprehension under adverse acoustics. When listeners' English proficiency was controlled, higher BNL was strongly related to a decrease in perceived performance, whereas the relationship with RT was much weaker. Nonnative listeners who shared the foreign talkers' accent reported no change in listening effort, whereas other listeners reported more difficulty in understanding the accented speech. Conclusions: Adverse acoustics required more effortful listening as measured subjectively with a self-report NASA TLX. This subjective scale was more sensitive than a dual task that involved speech comprehension, which was beyond sentence recall. It was better at capturing the negative impacts on listening effort from acoustic factors (i.e., both BNL and RT), talker accent, and listener language abilities.


Subjects
Comprehension, Phonetics, Physical Exertion/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Language, Male, Multilingualism, Noise, Perceptual Masking/physiology, Young Adult
16.
J Acoust Soc Am; 139(5): 2772, 2016 May.
Article in English | MEDLINE | ID: mdl-27250170

ABSTRACT

A large number of non-native English speakers may be found in American classrooms, both as listeners and talkers. Little is known about how this population comprehends speech in realistic adverse acoustical conditions. A study was conducted to investigate the effects of background noise level (BNL), reverberation time (RT), and talker foreign accent on native and non-native listeners' speech comprehension, while controlling for English language abilities. A total of 115 adult listeners completed comprehension tasks under 15 acoustic conditions: three BNLs (RC-30, RC-40, and RC-50) and five RTs (from 0.4 to 1.2 s). Fifty-six listeners were tested with speech from native English-speaking talkers and 59 with native Mandarin-Chinese-speaking talkers. Results show that, while higher BNLs were generally more detrimental to listeners with lower English proficiency, all listeners experienced significant comprehension deficits above RC-40 with native English talkers. This limit was lower (i.e., above RC-30), however, with Chinese talkers. For reverberation, non-native listeners as a group performed best with RT up to 0.6 s, while native listeners performed equally well up to 1.2 s. A matched foreign accent benefit has also been identified, where the negative impact of higher reverberation does not exist for non-native listeners who share the talker's native language.


Subjects
Comprehension, Multilingualism, Noise/adverse effects, Perceptual Masking, Phonetics, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Speech Audiometry, Female, Humans, Male, Vibration, Young Adult