Results 1 - 13 of 13
1.
Ear Hear ; 45(1): 130-141, 2024.
Article in English | MEDLINE | ID: mdl-37599415

ABSTRACT

OBJECTIVES: Estimated prevalence of functional hearing and communication deficits (FHCDs), characterized by abnormally low speech recognition and binaural tone detection in noise or an abnormally high degree of self-perceived hearing difficulties, dramatically increases in active-duty service members (SMs) who have hearing thresholds slightly above the normal range and self-report having been close to an explosive blast. Knowing the exact nature of the underlying auditory-processing deficits that contribute to FHCD would not only provide a better characterization of the effects of blast exposure on the human auditory system, but also allow clinicians to prescribe appropriate therapies to treat or manage patient complaints. DESIGN: Two groups of SMs were initially recruited: (1) a control group (N = 78) with auditory thresholds ≤20 dB HL between 250 and 8000 Hz, no history of blast exposure, and who passed a short FHCD screener, and (2) a group of blast-exposed SMs (N = 26) with normal to near-normal auditory thresholds between 250 and 4000 Hz, and who failed the FHCD screener (cutoffs based on the study by Grant et al.). The two groups were then compared on a variety of audiometric, behavioral, cognitive, and electrophysiological measures, selected to characterize various aspects of auditory-system processing from the cochlea to the cortex. A third, smaller group of blast-exposed SMs who performed within normal limits on the FHCD screener was also recruited (N = 11). This third subject group was unplanned at the onset of the study and was added to evaluate the effects of blast exposure on hearing and communication regardless of performance on the FHCD screener. RESULTS: SMs in the blast-exposed group with FHCD performed significantly worse than control participants on several metrics that measured peripheral and mostly subcortical auditory processing.
Cognitive processing was mostly unaffected by blast exposure, with the exception of tests of language-processing speed and working memory. Blast-exposed SMs without FHCD performed similarly to the control group on tests of peripheral and brainstem processing, but similarly to blast-exposed SMs with FHCD on measures of cognitive processing. Measures derived from EEG recordings of the frequency-following response revealed that blast-exposed SMs who exhibited FHCD demonstrated increased spontaneous neural activity, a reduced amplitude of the envelope-following response, a poor internal signal-to-noise ratio, reduced response stability, and an absent or delayed onset response, compared with the other two participant groups. CONCLUSIONS: Degradation in the neural encoding of acoustic stimuli is likely a major contributing factor leading to FHCD in blast-exposed SMs with normal to near-normal audiometric thresholds. Blast-exposed SMs, regardless of their performance on the FHCD screener, exhibited a deficit in language-processing speed and working memory, which could lead to difficulties in decoding rapid speech and in understanding speech in challenging communication settings. Further tests are needed to align these findings with clinical treatment protocols used for patients with suspected auditory-processing disorders.
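The internal signal-to-noise ratio of an envelope-following response mentioned above is commonly quantified by comparing spectral power at the stimulus modulation frequency against neighboring EEG noise bins. The sketch below illustrates that general idea only; it is not the study's analysis pipeline, and the function name and bin parameters are invented for illustration.

```python
import numpy as np

def efr_internal_snr(response, fs, f0, noise_halfwidth=10, guard=2):
    """Estimate the internal SNR (dB) of an envelope-following response.

    Power in the FFT bin nearest the modulation frequency f0 is compared
    against the mean power of flanking bins (excluding a small guard band),
    quantifying how far the response stands above the EEG noise floor.
    """
    spec = np.abs(np.fft.rfft(response)) ** 2
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    k0 = int(np.argmin(np.abs(freqs - f0)))           # bin closest to f0
    lo, hi = max(0, k0 - noise_halfwidth), k0 + noise_halfwidth + 1
    side = np.r_[spec[lo:k0 - guard], spec[k0 + guard + 1:hi]]  # noise bins
    return float(10.0 * np.log10(spec[k0] / side.mean()))

# Synthetic check: a 100-Hz response buried in noise yields a clearly positive SNR.
fs, dur, f0 = 1000.0, 2.0, 100.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)
snr_db = efr_internal_snr(eeg, fs, f0)
```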


Subjects
Hearing Loss , Speech Perception , Humans , Hearing , Auditory Perception/physiology , Hearing Tests , Auditory Threshold
2.
Ear Hear ; 43(1): 206-219, 2022.
Article in English | MEDLINE | ID: mdl-34320529

ABSTRACT

OBJECTIVES: For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments. This study examined SSD-CI sound localization in a complex scenario where a target sound was added to or removed from a mixture of other environmental sounds, while tracking head movements to assess behavioral strategy. DESIGN: Eleven CI users with normal hearing or moderate hearing loss in the contralateral ear completed a sound-localization task in monaural (CI-OFF) and bilateral (CI-ON) configurations. Ten of the listeners were also tested before CI activation to examine longitudinal effects. Two-second environmental sound samples, looped to create 4- or 10-sec trials, were presented in a spherical array of 26 loudspeakers encompassing ±144° azimuth and ±30° elevation at a 1-m radius. The target sound was presented alone (localize task) or concurrently with one or three additional sources presented to different loudspeakers, with the target cued by being added to (Add) or removed from (Rem) the mixture after 6 sec. A head-mounted tracker recorded movements in six dimensions (three for location, three for orientation). Mixed-model regression was used to examine target sound-identification accuracy, localization accuracy, and head movement. Angular and translational head movements were analyzed both before and after the target was switched on or off. 
RESULTS: Listeners showed improved localization accuracy in the CI-ON configuration, but there was no interaction with test condition and no effect of the CI on sound-identification performance. Although high-frequency hearing loss in the unimplanted ear reduced localization accuracy and sound-identification performance, the magnitude of the CI localization benefit was independent of hearing loss. The CI reduced the magnitude of gross head movements used during the task in the azimuthal rotation and translational dimensions, both while the target sound was present (in all conditions) and during the anticipatory period before the target was switched on (in the Add condition). There was no change in pre- versus post-activation CI-OFF performance. CONCLUSIONS: These results extend previous findings, demonstrating a CI localization benefit in a complex listening scenario that includes environmental and behavioral elements encountered in everyday listening conditions. The CI also reduced the magnitude of gross head movements used to perform the task. This was the case even before the target sound was added to the mixture. This suggests that a CI can reduce the need for physical movement both in anticipation of an upcoming sound event and while actively localizing the target sound. Overall, these results show that for SSD listeners, a CI can improve localization in a complex sound environment and reduce the amount of physical movement used.
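Localization accuracy in a loudspeaker array spanning both azimuth and elevation is naturally scored as the great-circle angle between the response and target directions. This is a hedged sketch of that scoring idea under a standard spherical-to-Cartesian convention; it is not taken from the paper's analysis code.

```python
import numpy as np

def unit_vector(az_deg, el_deg):
    """Convert azimuth/elevation (degrees) to a 3-D unit vector."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def angular_error(resp_az, resp_el, targ_az, targ_el):
    """Great-circle angle (degrees) between response and target directions."""
    cos_angle = np.dot(unit_vector(resp_az, resp_el),
                       unit_vector(targ_az, targ_el))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# A response 30 degrees off in azimuth at zero elevation is a 30-degree error.
err = angular_error(30.0, 0.0, 0.0, 0.0)
```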


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Hearing Loss , Sound Localization , Speech Perception , Deafness/rehabilitation , Hearing Loss/rehabilitation , Humans
3.
Ear Hear ; 42(6): 1615-1626, 2021.
Article in English | MEDLINE | ID: mdl-34108398

ABSTRACT

OBJECTIVES: Over the past decade, U.S. Department of Defense and Veterans Affairs audiologists have reported large numbers of relatively young adult patients who have normal to near-normal audiometric thresholds but who report difficulty understanding speech in noisy environments. Many of these service members also reported having experienced exposure to explosive blasts as part of their military service. Recent studies suggest that some blast-exposed patients with normal to near-normal-hearing thresholds not only have an awareness of increased hearing difficulties, but also perform poorly on various auditory tasks (sound-source localization, speech recognition in noise, binaural integration, gap detection in noise, etc.). The purpose of this study was to determine the prevalence of functional hearing and communication deficits (FHCD) among healthy Active-Duty service men and women with normal to near-normal audiometric thresholds. DESIGN: To estimate the prevalence of such FHCD in the overall military population, performance of roughly 3400 Active-Duty service members with hearing thresholds mostly within the normal range was measured on 4 hearing tests and a brief 6-question survey designed to assess FHCD. Subjects were subdivided into 6 groups depending on the severity of blast exposure (3 levels: none, far away, or close enough to feel heat or pressure) and hearing thresholds (2 levels: audiometric thresholds of 20 dB HL or better, or a slight elevation in 1 or more thresholds between 500 and 4000 Hz in either ear). RESULTS: While the probability of having hearing difficulty was low (≈4.2%) for the overall population tested, that probability increased by 2 to 3 times if the service member was blast-exposed from a close distance or had slightly elevated hearing thresholds (>20 dB HL).
Service members having both blast exposure and mildly elevated hearing thresholds exhibited up to 4 times higher risk of performing abnormally on auditory tasks and more than 5 times higher risk of reporting abnormally low ratings on the subjective questionnaire, compared with service members with no history of blast exposure and audiometric thresholds ≤20 dB HL. Blast-exposed listeners were roughly 2.5 times more likely to experience subjective or objective hearing deficits than those with no blast history. CONCLUSIONS: These elevated rates of abnormal performance suggest that roughly 33.6% of Active-Duty service members (approximately 423,000) with normal to near-normal-hearing thresholds (i.e., H1 profile) are at some risk for FHCD, and about 5.7% (approximately 72,000) are at high risk, but go untested and undetected under current fitness-for-duty standards. Service members identified as "at risk" for FHCD according to the metrics used in the present study, in spite of their excellent hearing thresholds, require further testing to determine whether they have sustained damage to peripheral and early-stage auditory processing (bottom-up processing), damage to cognitive processes for speech (top-down processing), or both. Understanding the extent of damage due to noise and blast exposures, and the balance between bottom-up and top-down processing deficits, will likely lead to better therapeutic strategies.
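The risk multipliers reported above (2 to 3 times, up to 4 times, more than 5 times) are relative risks: the failure rate in an exposed subgroup divided by the rate in the reference group. A minimal illustration with hypothetical counts; the actual group sizes from the study are not reproduced here.

```python
def relative_risk(exposed_cases, exposed_total, control_cases, control_total):
    """Risk ratio: incidence among the exposed divided by incidence among controls."""
    risk_exposed = exposed_cases / exposed_total
    risk_control = control_cases / control_total
    return risk_exposed / risk_control

# Hypothetical counts: 12% of blast-exposed vs. 4% of non-exposed fail a screener,
# giving a risk ratio of 3, the order of magnitude reported in the abstract.
rr = relative_risk(60, 500, 40, 1000)
```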


Subjects
Hearing Loss , Speech Perception , Auditory Threshold , Female , Hearing , Hearing Tests , Humans , Male , Prevalence , Young Adult
4.
Ear Hear ; 40(2): 426-436, 2019.
Article in English | MEDLINE | ID: mdl-30134353

ABSTRACT

OBJECTIVE: The clinical evaluation of hearing loss, using a pure-tone audiogram, is not adequate to assess the functional hearing capabilities (or handicap) of a patient, especially speech-in-noise communication difficulties. The primary objective of this study was to measure the effect of elevated hearing thresholds on recognition performance in various functional speech-in-noise tests covering acoustic scenes of different complexities, and to identify the subset of tests that (a) are sensitive to individual differences in hearing thresholds and (b) provide information complementary to the audiogram. A secondary goal was to compare performance on this test battery with the self-assessed level of functional hearing ability. DESIGN: Speech-in-noise performance of normal-hearing listeners and listeners with hearing loss (audiometric configurations ranging from near-normal hearing to moderate-severe hearing loss) was measured on a battery of 12 tests designed to evaluate speech recognition across a variety of speech materials, masker conditions, and listening tasks. The listening conditions were designed to measure the ability to localize and monitor multiple speakers, or to take advantage of masker modulation, spatial separation between the target and the masker, and a restricted vocabulary. RESULTS: Listeners with hearing loss performed significantly worse than the normal-hearing control group when speech was presented in the presence of multitalker babble or a single competing talker. In particular, the ability to take advantage of modulation benefit and spatial release from masking was significantly affected even with a mild audiometric loss. Elevated thresholds did not have a significant effect on performance in the spatial awareness task. A composite score of all 12 tests was used as a global metric of overall speech-in-noise performance.
Perceived hearing difficulties of subjects were better correlated with the composite score than with the performance on a standardized clinical speech-in-noise test. Regression analysis showed that scores from a subset of these tests, which could potentially take less than 10 min to administer, when combined with the better-ear pure-tone average and the subject's age, accounted for as much as 93.2% of the variance in the composite score. CONCLUSIONS: A test that measures speech recognition in the presence of a spatially separated competing talker would be useful in measuring suprathreshold speech-in-noise deficits that cannot be readily predicted from standard audiometric evaluation. Including such a test can likely reduce the gap between patient complaints and their clinical evaluation.
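The 93.2% variance-accounted figure above is an R² from regressing the composite score on subtest scores, better-ear pure-tone average, and age. A sketch of that computation on synthetic data; the coefficients, sample size, and variable names are invented for illustration.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(1.0 - resid.var() / y.var())

# Synthetic listeners: composite score driven by two subtest scores, PTA, and age.
rng = np.random.default_rng(1)
n = 200
subtests = rng.standard_normal((n, 2))
pta = rng.uniform(0, 40, n)            # better-ear pure-tone average, dB HL
age = rng.uniform(18, 65, n)
composite = (2.0 * subtests[:, 0] + 1.0 * subtests[:, 1]
             - 0.05 * pta - 0.02 * age + 0.3 * rng.standard_normal(n))
X = np.column_stack([subtests, pta, age])
r2 = r_squared(X, composite)
```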


Subjects
Attention , Hearing Loss/physiopathology , Noise , Spatial Behavior , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Female , Hearing Tests , Humans , Male , Middle Aged , Signal-to-Noise Ratio , Speech Reception Threshold Test , Young Adult
5.
J Acoust Soc Am ; 146(4): EL381, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31671963

ABSTRACT

Effects of temporal distortions on consonant perception were measured using locally time-reversed nonsense syllables. Consonant recognition was measured in both audio and audio-visual modalities to assess whether the addition of visual speech cues can recover consonant errors caused by time reversal. The degradation in consonant recognition depended highly on the manner of articulation, with sibilant fricatives, affricates, and nasals showing the least degradation. Because consonant errors induced by time reversal were primarily in voicing and place of articulation (mostly limited to stop-plosives and non-sibilant fricatives), undistorted visual speech cues could resolve only about half of the errors (i.e., only the place-of-articulation errors).
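Locally time-reversed speech is produced by cutting the waveform into short fixed-length segments and playing each segment backwards while preserving the segments' order. A minimal sketch with the segment length as a free parameter; this is not the study's stimulus-generation code.

```python
import numpy as np

def locally_time_reverse(signal, fs, segment_ms):
    """Reverse a signal within consecutive fixed-length segments.

    The waveform is cut into segments of `segment_ms` milliseconds and each
    segment is reversed in place, leaving the long-term order intact. Applying
    the operation twice with the same segment length restores the original.
    """
    seg = max(1, int(round(fs * segment_ms / 1000.0)))
    out = np.empty_like(signal)
    for start in range(0, len(signal), seg):
        out[start:start + seg] = signal[start:start + seg][::-1]
    return out

x = np.arange(10, dtype=float)
y = locally_time_reverse(x, fs=1000, segment_ms=5)  # 5-ms segments = 5 samples
```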


Subjects
Speech Acoustics , Speech Perception , Visual Perception , Adult , Speech Audiometry , Cues , Female , Humans , Male , Middle Aged , Phonetics , Recognition (Psychology) , Signal Processing, Computer-Assisted , Sound Spectrography
6.
Ear Hear ; 39(3): 449-456, 2018.
Article in English | MEDLINE | ID: mdl-29570117

ABSTRACT

OBJECTIVE: To evaluate the speech-in-noise performance of listeners with different levels of hearing loss in a variety of complex listening environments. DESIGN: The quick speech-in-noise (QuickSIN)-based test battery was used to measure the speech recognition performance of listeners with different levels of hearing loss. Subjective estimates of the speech reception thresholds (SRTs) corresponding to 100% and 0% speech intelligibility were obtained using a method of adjustment before objective measurement of the actual SRT corresponding to 50% speech intelligibility in every listening condition. RESULTS: Of the seven alternative listening conditions, two conditions, one involving time-compressed, reverberant speech (TC+Rev) and the other (N0Sπ) combining an in-phase noise masker (N0) with an out-of-phase target (Sπ), were found to be substantially more sensitive to the effect of hearing loss than the standard QuickSIN test. Performance in these two conditions also correlated with self-reported difficulties in attention/concentration during speech communication and in localizing sound sources, respectively. Hearing thresholds could account for about 50% or less of the variance in SRTs in any listening condition. Subjectively estimated SRTs (corresponding to 0% and 100% speech intelligibility) were highly correlated with the objective SRT measurements (corresponding to 50% speech intelligibility). CONCLUSIONS: A test battery that includes the TC+Rev and N0Sπ conditions would be useful in identifying individuals with hearing loss who have speech-in-noise deficits in everyday communication.
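An SRT corresponding to 50% intelligibility can be read off a psychometric function relating proportion correct to SNR. A toy sketch with an assumed logistic shape and interpolation; the actual QuickSIN scoring procedure differs, so this only illustrates the SRT concept.

```python
import numpy as np

def psychometric(snr_db, srt_db, slope):
    """Logistic psychometric function: proportion correct vs. SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_db)))

def srt_from_scores(snrs, scores):
    """Estimate the SNR giving 50% correct by linear interpolation.

    Assumes `scores` increase monotonically with `snrs`, as np.interp requires.
    """
    return float(np.interp(0.5, scores, snrs))

snrs = np.arange(-12, 7, 3, dtype=float)            # -12 to +6 dB in 3-dB steps
scores = psychometric(snrs, srt_db=-3.0, slope=0.5)  # simulated listener
srt = srt_from_scores(snrs, scores)                  # recovers -3 dB
```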


Subjects
Hearing Loss, Sensorineural/diagnosis , Hearing Tests/methods , Speech Perception , Acoustic Stimulation , Adult , Audiometry , Auditory Threshold , Evaluation Studies as Topic , Humans , Male , Middle Aged , Noise , Perceptual Masking , Young Adult
7.
J Acoust Soc Am ; 136(2): 859-66, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096119

ABSTRACT

This study compared modulation benefit for phoneme recognition obtained by normal-hearing (NH) and aided hearing-impaired (HI) listeners. Consonant and vowel recognition scores were measured using nonsense syllables in the presence of a steady-state noise and four vocoded speech maskers. Vocoded maskers were generated by modulating the steady-state noise, in either one or six frequency channels, with the speech envelope extracted from the speech of either a single talker or a four-talker babble. Aided HI listeners obtained lower consonant recognition scores than NH listeners in all masker conditions. Vowel recognition scores for aided HI listeners were comparable to NH scores, except in the six-channel vocoded masker conditions where they were relatively lower. Analysis using the extended speech intelligibility index developed by Rhebergen, Versfeld, and Dreschler [(2006). J. Acoust. Soc. Am. 120(6), 3988-3997] suggested that the signal-to-noise ratio deficit observed in aided HI listeners was largely due to uncompensated audibility loss. There was no significant difference between modulation masking release obtained by NH and aided HI listeners for both consonant and vowel recognition.


Subjects
Hearing Aids , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Phonetics , Recognition (Psychology) , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Aged , Speech Audiometry , Auditory Threshold , Case-Control Studies , Comprehension , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Sound Spectrography , Speech Intelligibility
8.
Trends Hear ; 27: 23312165231156673, 2023.
Article in English | MEDLINE | ID: mdl-36794551

ABSTRACT

Closed-set consonant identification, measured using nonsense syllables, has been commonly used to investigate the encoding of speech cues in the human auditory system. Such tasks also evaluate the robustness of speech cues to masking from background noise and their impact on auditory-visual speech integration. However, extending the results of these studies to everyday speech communication has been a major challenge due to acoustic, phonological, lexical, contextual, and visual speech cue differences between consonants in isolated syllables and in conversational speech. In an attempt to isolate and address some of these differences, recognition of consonants spoken in multisyllabic nonsense phrases (e.g., aBaSHaGa spoken as /ɑbɑʃɑɡɑ/) produced at an approximately conversational syllabic rate was measured and compared with consonant recognition using Vowel-Consonant-Vowel bisyllables spoken in isolation. After accounting for differences in stimulus audibility using the Speech Intelligibility Index, consonants spoken in sequence at a conversational syllabic rate were found to be more difficult to recognize than those produced in isolated bisyllables. Specifically, place- and manner-of-articulation information was transmitted better in isolated nonsense syllables than for multisyllabic phrases. The contribution of visual speech cues to place-of-articulation information was also lower for consonants spoken in sequence at a conversational syllabic rate. These data imply that auditory-visual benefit based on models of feature complementarity from isolated syllable productions may over-estimate real-world benefit of integrating auditory and visual speech cues.


Subjects
Speech Perception , Humans , Noise , Speech Intelligibility , Acoustics , Cues , Phonetics
9.
J Acoust Soc Am ; 132(3): 1646-54, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978893

ABSTRACT

This study measured the influence of masker fluctuations on phoneme recognition. The first part of the study compared the benefit of masker modulations for consonant and vowel recognition in normal-hearing (NH) listeners. Recognition scores were measured in steady-state and sinusoidally amplitude-modulated noise maskers (100% modulation depth) at several modulation rates and signal-to-noise ratios. Masker modulation rates were 4, 8, 16, and 32 Hz for the consonant recognition task and 2, 4, 12, and 32 Hz for the vowel recognition task. Vowel recognition scores showed more modulation benefit and a more pronounced effect of masker modulation rate than consonant scores. The modulation benefit for word recognition from other studies was found to be more similar to the benefit for vowel recognition than that for consonant recognition. The second part of the study measured the effect of modulation rate on the benefit of masker modulations for vowel recognition in aided hearing-impaired (HI) listeners. HI listeners achieved as much modulation benefit as NH listeners for slower masker modulation rates (2, 4, and 12 Hz), but showed a reduced benefit for the fast masker modulation rate of 32 Hz.
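A sinusoidally amplitude-modulated noise masker at 100% modulation depth, as used in this study, can be generated by multiplying a Gaussian noise carrier by a raised sinusoid. A minimal sketch; the sampling rate, duration, and seed are arbitrary choices, not values from the paper.

```python
import numpy as np

def sam_noise(duration_s, fs, rate_hz, depth=1.0, seed=0):
    """Sinusoidally amplitude-modulated Gaussian noise masker.

    A steady-state noise carrier is multiplied by (1 + depth*sin(2*pi*rate*t));
    depth=1.0 gives the 100% modulation depth used in the study, so the
    envelope periodically dips to zero, creating "glimpsing" opportunities.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration_s)) / fs
    carrier = rng.standard_normal(t.size)
    return (1.0 + depth * np.sin(2 * np.pi * rate_hz * t)) * carrier

masker = sam_noise(1.0, 16000, rate_hz=8.0)   # 8-Hz masker, one of the rates used
```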


Subjects
Correction of Hearing Impairment , Hearing Aids , Hearing Loss, Bilateral/rehabilitation , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Phonetics , Recognition (Psychology) , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Speech Audiometry , Auditory Threshold , Case-Control Studies , Correction of Hearing Impairment/psychology , Female , Hearing Loss, Bilateral/psychology , Humans , Male , Middle Aged , Noise/adverse effects , Persons With Hearing Impairments/psychology , Sound Spectrography , Time Factors
10.
Otol Neurotol ; 43(6): 666-675, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35761459

ABSTRACT

HYPOTHESIS: Bilateral cochlear-implant (BI-CI) users will have a range of interaural insertion-depth mismatch because of differences in array placement or array characteristics. Mismatch will be larger for electrodes located near the apex or outside the scala tympani, or for arrays that are a mix of precurved and straight types. BACKGROUND: Brainstem superior olivary-complex neurons are exquisitely sensitive to interaural-difference cues for sound localization. Because these neurons rely on interaurally place-of-stimulation-matched inputs, interaural insertion-depth or scalar-location differences for BI-CI users could cause interaural place-of-stimulation mismatch that impairs binaural abilities. METHODS: Insertion depths and scalar locations were calculated from temporal-bone computed-tomography scans for 107 BI-CI users (27 Advanced Bionics, 62 Cochlear, 18 MED-EL). RESULTS: Median interaural insertion-depth mismatch was 23.4 degrees or 1.3 mm. Mismatch in the estimated clinically relevant range expected to impair binaural processing (>75 degrees or 3 mm) occurred for 13 to 19% of electrode pairs overall, and for at least three electrode pairs for 23 to 37% of subjects. There was a significant three-way interaction between insertion depth, scalar location, and array type. Interaural insertion-depth mismatch was largest for apical electrodes, for electrode pairs located in different scalae, and for pairs in which both arrays were precurved. CONCLUSION: Average BI-CI interaural insertion-depth mismatch was small; however, large interaural insertion-depth mismatch, with the potential to degrade spatial hearing, occurred frequently enough to warrant attention. For new BI-CI users, improved surgical techniques to avoid interaural insertion-depth and scalar mismatch are recommended. For existing BI-CI users with interaural insertion-depth mismatch, interaural alignment of clinical frequency tables might reduce the negative spatial-hearing consequences.
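Computing interaural insertion-depth mismatch reduces to pairing electrodes across the two ears and taking the absolute difference of their insertion angles, then flagging pairs beyond a criterion such as the 75 degrees cited above. A toy sketch with hypothetical angles, not the CT-derived values from the study.

```python
def interaural_mismatch(left_angles, right_angles):
    """Per-electrode-pair interaural insertion-angle mismatch (degrees).

    Electrodes are paired by index across ears; each pair's mismatch is the
    absolute difference of insertion angles measured from the round window.
    """
    return [abs(l - r) for l, r in zip(left_angles, right_angles)]

left = [45, 120, 210, 330, 420]     # hypothetical insertion angles, degrees
right = [60, 115, 300, 345, 410]
mismatch = interaural_mismatch(left, right)
flagged = [m for m in mismatch if m > 75]   # pairs likely to impair binaural cues
```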


Subjects
Cochlear Implantation , Cochlear Implants , Sound Localization , Cochlear Implantation/methods , Humans , Scala Tympani , Sound Localization/physiology , Tomography
11.
J Acoust Soc Am ; 126(5): 2683-94, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19894845

ABSTRACT

This paper presents a compact graphical method for comparing the performance of individual hearing-impaired (HI) listeners with that of an average normal-hearing (NH) listener on a consonant-by-consonant basis. This representation, named the consonant loss profile (CLP), characterizes the effect of a listener's hearing loss on each consonant over a range of performance. The CLP shows that the consonant loss, which is the signal-to-noise ratio (SNR) difference at equal NH and HI scores, is consonant-dependent and varies with the score. This variation in the consonant loss reveals that hearing loss renders some consonants unintelligible, while it reduces the noise-robustness of others. The conventional SNR-loss metric ΔSNR50, defined as the SNR difference at a 50% recognition score, is insufficient to capture this variation. The ΔSNR50 value is on average 12 dB lower when measured with sentences using standard clinical procedures than when measured with nonsense syllables. A listener with symmetric hearing loss may not have identical CLPs for both ears. Some consonant confusions by HI listeners are influenced by high-frequency hearing loss even at a presentation level as high as 85 dB sound pressure level.


Subjects
Auditory Threshold/physiology , Hearing Loss, Sensorineural/physiopathology , Phonetics , Speech Intelligibility/physiology , Speech Reception Threshold Test , Humans , Noise
12.
J Acoust Soc Am ; 124(2): 1220-33, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18681609

ABSTRACT

The classic [MN55] confusion matrix experiment (16 consonants, white noise masker) was repeated using computerized procedures similar to those of Phatak and Allen (2007) ["Consonant and vowel confusions in speech-weighted noise," J. Acoust. Soc. Am. 121, 2312-2316]. The consonant scores in white noise fall into three sets: a low-error set [/m/, /n/], an average-error set [/p/, /t/, /k/, /s/, /ʃ/, /d/, /g/, /z/, /ʒ/], and a high-error set [/f/, /θ/, /b/, /v/, /ð/]. The consonant confusions match those from MN55, except for the highly asymmetric voicing confusions of fricatives, which are biased in favor of voiced consonants. Masking noise can not only reduce the recognition of a consonant, but also perceptually morph it into another consonant. There is significant and systematic variability in the scores and confusion patterns of different utterances of the same consonant, which can be characterized as (a) confusion heterogeneity, where the competitors in the confusion groups of a consonant vary, and (b) threshold variability, where the confusion threshold [i.e., the signal-to-noise ratio (SNR) and score at which the confusion group is formed] varies. The average consonant error, and the errors for most individual consonants and consonant sets, can be approximated as exponential functions of the articulation index (AI). An AI based on the peak-to-rms ratios of speech can explain the SNR differences across experiments.


Subjects
Perceptual Masking , Phonetics , Speech Acoustics , Speech Intelligibility , Speech Perception , Female , Humans , Male , Models, Biological
13.
J Acoust Soc Am ; 121(4): 2312-26, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17471744

ABSTRACT

This paper presents the results of a closed-set recognition task for 64 consonant-vowel sounds (16 consonants × 4 vowels, spoken by 18 talkers) in speech-weighted noise (−22, −20, −16, −10, and −2 dB) and in quiet. The confusion matrices were generated from the responses of a homogeneous set of ten listeners, and the confusions were analyzed using a graphical method. In speech-weighted noise the consonants separate into three sets: a low-scoring set C1 (/f/, /θ/, /v/, /ð/, /b/, /m/), a high-scoring set C2 (/t/, /s/, /z/, /ʃ/, /ʒ/), and a set C3 (/n/, /p/, /g/, /k/, /d/) with intermediate scores. The perceptual consonant groups are C1: {/f/-/θ/, /b/-/v/-/ð/, /θ/-/ð/}, C2: {/s/-/z/, /ʃ/-/ʒ/}, and C3: /m/-/n/, while the perceptual vowel groups are /ɑ/-/æ/ and /ɛ/-/ɪ/. The exponential articulation index (AI) model for consonant score works for 12 of the 16 consonants, using a refined expression of the AI. Finally, a comparison with past work shows that white noise masks the consonants more uniformly than speech-weighted noise, and that the AI, because it can account for differences in noise spectra, is a better measure than the wideband signal-to-noise ratio for modeling and comparing scores with different noise maskers.
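The articulation-index framing used in these two studies can be made concrete with a textbook single-band approximation of the AI and an exponential error model in which error decays from chance at AI = 0 toward a small minimum at AI = 1. Both formulas below are standard simplifications for illustration, not the refined AI expression developed in the paper.

```python
import numpy as np

def articulation_index(snr_db, floor_db=-12.0, range_db=30.0):
    """Single-band AI approximation: clip (SNR - floor)/range to [0, 1].

    Assumes speech peaks sit roughly 12 dB above the rms level and a 30-dB
    speech dynamic range, a common textbook simplification of articulation
    theory (the wideband, single-channel case).
    """
    return float(np.clip((snr_db - floor_db) / range_db, 0.0, 1.0))

def consonant_error(ai, e_min=0.02):
    """Exponential AI model: error falls from ~1 at AI = 0 to e_min at AI = 1."""
    return e_min ** ai

ai = articulation_index(0.0)      # 0 dB SNR maps to AI = 0.4
err = consonant_error(ai)         # predicted error at that AI
```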


Subjects
Models, Biological , Noise , Phonetics , Speech Perception , Adult , Humans