Results 1-20 of 33
1.
Ear Hear; 39(1): 101-109, 2018.
Article in English | MEDLINE | ID: mdl-28700448

ABSTRACT

OBJECTIVES: The increasing number of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on the effectiveness of word recognition. DESIGN: Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated from the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. RESULTS: For both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to the word's likelihood of occurrence within the sentence context, with older adults gaining a differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, an effect driven primarily by the number of competing responses that might also fit the sentence context. CONCLUSIONS: Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
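
The entropy computation is not spelled out in the abstract; the following is a minimal sketch, assuming standard Shannon entropy over the normed completion distribution (the function name and counts are hypothetical):

```python
import math

def response_entropy(response_counts):
    """Shannon entropy (in bits) of the distribution of different
    sentence completions offered in published norms: it grows with
    both the number of competing responses and how evenly their
    probabilities are spread."""
    total = sum(response_counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in response_counts.values())

# Hypothetical norming data for one sentence frame:
counts = {"horse": 74, "bike": 18, "motorcycle": 6, "camel": 2}
print(f"{len(counts)} responses, {response_entropy(counts):.2f} bits")
```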


Subject(s)
Cochlear Implants; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Age Factors; Aged; Auditory Threshold; Deafness/physiopathology; Deafness/rehabilitation; Female; Humans; Male; Middle Aged; Semantics; Young Adult
2.
J Acoust Soc Am; 141(1): 373, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28147573

ABSTRACT

English listeners use suprasegmental cues to lexical stress during spoken-word recognition. Prosodic cues are, however, less salient in spectrally degraded speech, such as that delivered by cochlear implants. The present study examined how spectral degradation with and without low-frequency fine-structure information affects normal-hearing listeners' ability to benefit from suprasegmental cues to lexical stress in online spoken-word recognition. To simulate electric hearing, an eight-channel vocoder spectrally degraded the stimuli while preserving temporal envelope information. Additional lowpass-filtered speech was presented to the opposite ear to simulate bimodal hearing. Using a visual world paradigm, listeners' eye fixations to four printed words (target, competitor, and two distractors) were tracked while they heard a word. The target and competitor overlapped segmentally in their first two syllables but mismatched suprasegmentally in their first syllables, as the initial syllable received primary stress in one word and secondary stress in the other (e.g., "ˈadmiral," "ˌadmiˈration"). In the vocoder-only condition, listeners were unable to use lexical stress to recognize targets before segmental information disambiguated them from competitors. With additional lowpass-filtered speech, however, listeners efficiently processed prosodic information to speed up online word recognition. Low-frequency fine-structure cues in simulated bimodal hearing thus allowed listeners to benefit from suprasegmental cues to lexical stress during word recognition.
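
A minimal sketch of the simulation chain described above, assuming a conventional noise-band vocoder design (filter order, band spacing, and the lowpass cutoff are illustrative, not values from the paper; assumes fs > 16 kHz):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_ch=8, lo=100.0, hi=8000.0):
    """Noise-band vocoder: split speech into n_ch log-spaced bands,
    keep each band's temporal envelope, and reimpose it on band-matched
    noise, discarding spectral fine structure."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_ch + 1)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))           # band envelope
        out += env * sosfilt(sos, np.random.randn(len(x)))
    return out

def lowpass_speech(x, fs, fc=500.0):
    """Lowpass-filtered speech for the opposite ear (cutoff assumed)."""
    return sosfilt(butter(4, fc, btype="lowpass", fs=fs, output="sos"), x)
```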


Subject(s)
Cues; Recognition, Psychology; Speech Acoustics; Speech Intelligibility; Speech Perception; Voice Quality; Acoustic Stimulation; Female; Humans; Male; Photic Stimulation; Time Factors; Visual Perception; Young Adult
3.
Ear Hear; 37(5): 582-92, 2016.
Article in English | MEDLINE | ID: mdl-27007220

ABSTRACT

OBJECTIVES: Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher-context (CUNY) sentences than for the lower-context (IEEE) sentences. DESIGN: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise-band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. RESULTS: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. CONCLUSIONS: Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
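
The two gain measures are not defined in the abstract; a minimal sketch, assuming the common definitions (percentage-point gain as a raw score difference, normalized gain as the share of available headroom recovered):

```python
def point_gain(baseline_pct, bimodal_pct):
    """Percentage-point gain: raw difference between the bimodal and
    vocoder-alone percent-correct scores."""
    return bimodal_pct - baseline_pct

def normalized_gain(baseline_pct, bimodal_pct):
    """Normalized gain: improvement expressed as a percentage of the
    headroom left above the vocoder-alone baseline, so listeners near
    ceiling are not penalized for having little room to improve."""
    return 100.0 * (bimodal_pct - baseline_pct) / (100.0 - baseline_pct)

# E.g., 40% correct vocoder-alone vs. 55% with added LP speech:
print(point_gain(40.0, 55.0))              # 15.0 points
print(round(normalized_gain(40.0, 55.0)))  # 25 (normalized)
```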


Subject(s)
Cochlear Implants; Cues; Deafness/rehabilitation; Speech Perception; Adolescent; Adult; Cochlear Implantation; Computer Simulation; Female; Healthy Volunteers; Humans; Male; Young Adult
4.
J Acoust Soc Am; 139(4): 1747, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27106322

ABSTRACT

Low-frequency acoustic cues have been shown to enhance speech perception by cochlear-implant users, particularly when target speech occurs in a competing background. The present study examined the extent to which a continuous representation of low-frequency harmonicity cues contributes to bimodal benefit in simulated bimodal listeners. Experiment 1 examined the benefit of restoring a continuous temporal envelope to the low-frequency ear while the vocoder ear received a temporally interrupted stimulus. Experiment 2 examined the effect of providing continuous harmonicity cues in the low-frequency ear as compared to restoring a continuous temporal envelope in the vocoder ear. Findings indicate that bimodal benefit for temporally interrupted speech increases when continuity is restored to either or both ears. The primary benefit appears to stem from the continuous temporal envelope in the low-frequency region providing additional phonetic cues related to manner and F1 frequency; a secondary contribution is provided by low-frequency harmonicity cues when a continuous representation of the temporal envelope is present in the low-frequency ear, or in both ears. The continuous temporal envelope and harmonicity cues of low-frequency speech are thought to support bimodal benefit by facilitating identification of word and syllable boundaries, and by restoring partial phonetic cues that occur during gaps in the temporally interrupted stimulus.


Subject(s)
Cochlear Implantation; Cues; Periodicity; Persons With Hearing Impairments/rehabilitation; Speech Acoustics; Speech Perception; Acoustic Stimulation; Acoustics; Adolescent; Adult; Speech Audiometry; Cochlear Implantation/instrumentation; Cochlear Implants; Electric Stimulation; Humans; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/psychology; Phonetics; Sound Spectrography; Speech Intelligibility; Time Factors; Young Adult
5.
J Acoust Soc Am; 140(5): 3971, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908030

ABSTRACT

In simulations of electric-acoustic stimulation (EAS), vocoded speech intelligibility is aided by the preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and the effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSE_CI), but how listeners utilize these informational changes to understand EAS speech is unclear. Here, normal-hearing participants heard noise-vocoded sentences with three to six spectral channels in two conditions: vocoder-only (80-8000 Hz) and simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSE_CI or low-CSE_CI acoustic changes were replaced with speech-shaped noise. As expected, performance improved with the preservation of low-frequency fine-structure cues (EAS). This improvement decreased for continuous EAS sentences as more spectral channels were added, but increased as more channels were added to noise-interrupted EAS sentences. Performance was impaired more when high-CSE_CI intervals were replaced by noise than when low-CSE_CI intervals were replaced, but this pattern did not differ across listening modes. The use of information-bearing acoustic changes to understand speech is predicted to generalize to cochlear implant users who receive EAS inputs.
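
A rough sketch of the spirit of the CSE measure, assuming it is computed as Euclidean distances between successive short-term spectral slices; the published measure uses cochlea-scaled filter banks, for which the plain FFT magnitudes below are only a stand-in:

```python
import numpy as np
from scipy.signal import stft

def cse_profile(x, fs, slice_ms=16):
    """Frame-to-frame spectral change: Euclidean distance between the
    magnitude spectra of adjacent slices. Summing these distances over
    80-ms windows would flag the high-change (high-CSE) intervals that
    were replaced with speech-shaped noise in the experiment."""
    nper = int(fs * slice_ms / 1000)
    _, _, Z = stft(x, fs=fs, nperseg=nper, noverlap=0)
    mag = np.abs(Z)
    return np.linalg.norm(np.diff(mag, axis=1), axis=0)
```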


Subject(s)
Noise; Acoustic Stimulation; Cochlear Implants; Perceptual Masking; Speech Intelligibility; Speech Perception
6.
J Acoust Soc Am; 137(5): 2846-57, 2015 May.
Article in English | MEDLINE | ID: mdl-25994712

ABSTRACT

Low-frequency acoustic cues have been shown to improve speech perception in cochlear-implant listeners. However, the mechanisms underlying this benefit are still not well understood. This study investigated the extent to which low-frequency cues can facilitate listeners' use of linguistic knowledge in simulated electric-acoustic stimulation (EAS). Experiment 1 examined differences in the magnitude of EAS benefit at the phoneme, word, and sentence levels. Speech materials were processed via noise-channel vocoding and lowpass (LP) filtering. The amount of spectral degradation in the vocoder speech was varied by applying different numbers of vocoder channels. Normal-hearing listeners were tested in vocoder-alone, LP-alone, and vocoder + LP conditions. Experiment 2 further examined the factors that underlie the context effect on EAS benefit at the sentence level by limiting the low-frequency cues to the temporal envelope and periodicity (AM + FM). Results showed that EAS benefit was greater for higher-context than for lower-context speech materials, even when the LP ear received only low-frequency AM + FM cues. Possible explanations for the greater EAS benefit observed with higher-context materials may lie in the interplay between perceptual and expectation-driven processes in EAS speech recognition, and/or in the band-importance functions for different types of speech materials.


Subject(s)
Acoustic Stimulation/methods; Acoustics; Cues; Recognition, Psychology; Speech Perception; Adolescent; Adult; Speech Audiometry; Computer Simulation; Humans; Periodicity; Phonetics; Sound Spectrography; Speech Acoustics; Time Factors; Voice Quality; Young Adult
7.
Speech Commun; 67: 102-112, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26150679

ABSTRACT

Periodicity is an important property of speech signals. It is the basis of the signal's fundamental frequency and the pitch of the voice, which is crucial to speech communication. This paper presents a novel framework for periodicity enhancement of noisy speech. The enhancement is applied to the linear prediction residual of the speech. The residual signal goes through a constant-pitch time-warping process and two sequential lapped-frequency transforms, by which the periodic component is concentrated in certain transform coefficients. By emphasizing these transform coefficients, periodicity enhancement of the noisy residual signal is achieved. The enhanced residual signal and the estimated linear prediction filter parameters are used to synthesize the output speech. An adaptive algorithm is proposed for adjusting the weights of the periodic and aperiodic components. The effectiveness of the proposed approach is demonstrated via experimental evaluation. It is observed that the harmonic structure of the original speech can be properly restored, improving the perceptual quality of the enhanced speech.
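
A minimal sketch of the analysis/synthesis shell around the enhancement stage; a real implementation would estimate the linear prediction filter frame by frame, and the file name, sampling rate, and LP order here are illustrative:

```python
import numpy as np
import librosa
from scipy.signal import lfilter

# Load noisy speech (placeholder file; 8 kHz keeps the LP order small).
x, fs = librosa.load("noisy_speech.wav", sr=8000)

a = librosa.lpc(x, order=10)       # LP coefficients of A(z), a[0] == 1
residual = lfilter(a, [1.0], x)    # inverse filtering A(z): the LP residual

# ... the paper's constant-pitch time warping and two sequential
# lapped-frequency transforms would enhance the periodic component
# of `residual` here, with adaptive periodic/aperiodic weights ...

y = lfilter([1.0], a, residual)    # resynthesis through the filter 1/A(z)
```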

8.
Int J Audiol; 53(8): 546-57, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24694089

ABSTRACT

OBJECTIVES: This study investigates the efficacy of a cochlear implant (CI) processing method that enhances the temporal periodicity cues of speech. DESIGN: Subjects participated in word and tone identification tasks. Two processing conditions were tested: the conventional advanced combination encoder (ACE) and tone-enhanced ACE. Test materials were Cantonese disyllabic words recorded from one male and one female speaker. Speech-shaped noise was added to the clean speech. The fundamental frequency information for periodicity enhancement was extracted from the clean speech. Electrical stimuli generated from the noisy speech with and without periodicity enhancement were presented via direct stimulation using a Laura 34 research processor. Subjects were asked to identify each presented word. STUDY SAMPLE: Seven post-lingually deafened native Cantonese-speaking CI users. RESULTS: Percent correct word, segmental structure, and tone identification scores were calculated. While word and segmental-structure identification accuracy remained similar between the two processing conditions, tone identification in noise was better with tone-enhanced ACE than with conventional ACE. Significant improvement in tone perception was found only for the female voice. CONCLUSIONS: Temporal periodicity cues are important for tone perception in noise. Pitch and tone perception by CI users could be improved when listeners receive enhanced temporal periodicity cues.


Subject(s)
Cochlear Implants; Speech Perception; Adult; Aged; China; Female; Humans; Language; Male; Middle Aged
9.
Augment Altern Commun; 30(4): 298-313, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25384797

ABSTRACT

Graphic symbols are a necessity for pre-literate children who use aided augmentative and alternative communication (AAC) systems (including non-electronic communication boards and speech-generating devices), as well as for mobile technologies using AAC applications. Recently, developers of the Autism Language Program (ALP) Animated Graphics Set have added environmental sounds to animated symbols representing verbs in an attempt to enhance their iconicity. The purpose of this study was to examine the effects of environmental sounds (added to animated graphic symbols representing verbs) on naming. Participants included 46 children with typical development between the ages of 3;0 and 3;11 (years;months). The participants were randomly allocated to a condition of symbols with environmental sounds or a condition without environmental sounds. Results indicated that environmental sounds significantly enhanced the naming accuracy of animated symbols for verbs. Implications for symbol selection, symbol refinement, and future symbol development are discussed.


Subject(s)
Acoustic Stimulation/methods; Pattern Recognition, Visual; Photic Stimulation/methods; Vocabulary; Child, Preschool; Communication Aids for Disabled; Female; Humans; Male; Pattern Recognition, Physiological
10.
Ear Hear; 34(3): 300-12, 2013.
Article in English | MEDLINE | ID: mdl-23165224

ABSTRACT

OBJECTIVES: This study describes a vocoder-based frequency-lowering system that enhances spectral cues for nonsonorant consonants differing in place of articulation. The goal of this study was to evaluate the efficacy of this system for speech recognition by hearing-impaired listeners. DESIGN: Experiment 1 evaluated fricative consonant recognition in quiet, using eight fricatives in /VCV/ context. Experiment 2 evaluated consonant recognition in quiet with 22 consonants. Six listeners with steeply sloping high-frequency sensorineural hearing loss participated in Experiment 1. The same six listeners and three additional listeners with flat/mid-frequency sensorineural hearing loss participated in Experiment 2. Two processing conditions, frequency lowering and conventional amplification, were tested in each experiment. Insertion gains based on the NAL-RP formula were provided up to 8000 Hz for each processing condition. In addition, speech stimuli were low-pass (LP) filtered at 1000, 1500, and 2000 Hz to evaluate the effect of the lack of high-frequency speech information on consonant perception with and without frequency lowering. For these LP speech conditions, amplification was provided up to the cutoff frequencies. Overall percent correct and percent information transmission were calculated for each processing and speech condition. RESULTS: The frequency-lowering system provided significant benefit for the perception of fricative consonants and of the place-of-articulation feature in hearing-impaired listeners without affecting their perception of sonorant consonants and other consonant features (i.e., voicing and nasality). The improvement in fricative consonant perception was observed for both wideband and LP speech conditions for the listeners with steeply sloping hearing loss. CONCLUSIONS: The results indicate that individuals with unaidable hearing loss above 1000 to 2000 Hz would receive significant benefit from the system compared with conventional amplification for the perception of fricative consonants and, more importantly, for the perception of place of articulation.


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural/therapy; Phonetics; Speech Acoustics; Speech Perception/physiology; Adolescent; Adult; Speech Audiometry/methods; Auditory Threshold/physiology; Equipment Design; Female; Hearing Loss, Sensorineural/physiopathology; Humans; Male; Middle Aged
11.
Ear Hear; 33(5): 645-59, 2012.
Article in English | MEDLINE | ID: mdl-22677814

ABSTRACT

OBJECTIVES: This study was designed to evaluate the contribution of temporal and spectral cues to timbre perception in listeners with a cochlear implant (CI) in one ear and low-frequency residual hearing in the contralateral ear (bimodal hearing), and in listeners with two CIs. Specifically, it examined the relationship between timbre and speech perception in these two groups of listeners. It was hypothesized that, similar to speech recognition, temporal-envelope cues are dominant cues for timbre perception, and that reliance on spectral cues is reduced in both bimodal and bilateral CI users compared with normal-hearing listeners. It was further hypothesized that the patterns of results with regard to combined benefit would be similar between timbre and speech perception. DESIGN: Seven bimodal and five bilateral CI users participated. Sixteen stimuli that synthesized western musical instruments were used for the timbre-perception task. Sixteen consonants in the /aCa/ context and nine monophthongs in the /hVd/ context were used for the phoneme-recognition task. Each subject was tested in three listening conditions: each device alone (single CI or hearing aid [HA]) and the combined use of devices (CI + HA or two CIs). For the timbre-perception task, each listener made judgments of dissimilarity between stimulus pairs. Multidimensional scaling analysis was performed to derive the coordinates of the dimensions that best fit the data. Correlational analyses were performed to relate the coordinates of each dimension to the temporal-envelope (impulsiveness) and spectral-envelope (spectral-centroid) features of the stimuli. For the phoneme-recognition task, each listener identified the phoneme he or she heard by choosing an answer displayed on the computer screen. Overall percent correct phoneme-identification scores and percent information transmission for consonant and vowel features were calculated. RESULTS: There were strong correlations between impulsiveness and the first dimension (Dim 1) of the timbre space, but correlations between spectral centroid and the second dimension (Dim 2) were weak for all listening conditions in both groups of listeners. As a group, the combined use of devices did not significantly improve listeners' ability to perceive differences in musical timbre compared with the better single-device condition. Some of the bimodal and bilateral CI users showed a considerably strengthened correlation between spectral centroid and Dim 2 in the combined condition compared with a single CI or an HA. There was no relationship between percent correct phoneme recognition and timbre perception for any listening condition. However, there was a consistent pattern regarding the combined benefit between timbre perception and vowel recognition. In general, listeners who demonstrated combined benefit for vowel recognition also showed a considerable increase in the correlation between spectral centroid and Dim 2 with the combined use of devices compared with the single-device conditions. Improved correlation was not evident for those who did not demonstrate combined benefit for vowel recognition. CONCLUSIONS: Similar to speech recognition, the temporal envelope was a dominant cue for timbre perception in bimodal and bilateral CI users. In addition, there was a close relationship between timbre perception and vowel recognition with regard to combined benefit. The present findings suggest that speech recognition and timbre perception could be enhanced when listeners receive different spectral cues from individual devices.
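
A minimal sketch of the two stimulus features the MDS dimensions were correlated with; the spectral centroid follows its standard definition, while the impulsiveness proxy is an assumed stand-in rather than the paper's measure:

```python
import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum,
    the spectral-envelope feature correlated with Dim 2."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(np.sum(freqs * mag) / np.sum(mag))

def impulsiveness_proxy(x, fs, attack_ms=50.0):
    """Crude temporal-envelope stand-in (assumed, not the paper's
    measure): fraction of total energy in the initial attack, high
    for plucked/struck tones and low for sustained ones."""
    n = int(fs * attack_ms / 1000)
    e = x.astype(float) ** 2
    return float(e[:n].sum() / e.sum())
```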


Subject(s)
Auditory Perception; Cochlear Implantation/methods; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Aged; Cochlear Implants; Cues; Female; Hearing Tests; Humans; Male; Middle Aged; Music; Time Factors
12.
Speech Commun; 54(1): 147-160, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21927522

ABSTRACT

Frequency lowering is a form of signal processing designed to deliver high-frequency speech cues to the residual hearing region of a listener with high-frequency hearing loss. While this processing technique has been shown to improve the intelligibility of fricative and affricate consonants, perception of place of articulation has remained a challenge for hearing-impaired listeners, especially when the bandwidth of the speech signal is reduced during frequency-lowering processing. This paper describes a modified vocoder-based frequency-lowering system similar to one reported by Posen, Reed, and Braida (1993), with the goal of improving place-of-articulation perception by enhancing the spectral differences of fricative consonants. In this system, frequency lowering is conditional: processing is suppressed whenever the high-frequency portion (>400 Hz) of the speech signal is periodic. In addition, the system separates non-sonorant consonants into three classes based on the spectral information (slope and peak location) of fricative consonants. Results from a group of normal-hearing listeners show that our modified system improves perception of the frication and affrication features, as well as the place-of-articulation distinction, without degrading the perception of nasals and semivowels, compared to low-pass filtering and Posen et al.'s system.
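
A minimal sketch of the kind of spectral measurements such a classifier could use; the window, band edge, and line-fit slope are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def fricative_features(frame, fs, lo=400.0):
    """Spectral slope (line fit to the log-magnitude spectrum above
    `lo` Hz, in dB per Hz) and spectral peak location, two cues for
    separating non-sonorant consonants by place of articulation."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    band = freqs >= lo
    logmag = 20.0 * np.log10(mag[band] + 1e-12)
    slope = np.polyfit(freqs[band], logmag, 1)[0]   # dB per Hz
    peak_hz = freqs[band][np.argmax(logmag)]        # strongest frequency
    return slope, peak_hz
```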

13.
J Acoust Soc Am; 127(5): 3114-23, 2010 May.
Article in English | MEDLINE | ID: mdl-21117760

ABSTRACT

A recent study reported that a group of Med-El COMBI 40+ cochlear implant (CI) users could, in a forced-choice task, detect changes in the rate of a pulse train for rates higher than the 300 pps "upper limit" commonly reported in the literature [Kong, Y.-Y., et al. (2009). J. Acoust. Soc. Am. 125, 1649-1657]. The present study further investigated the upper limit of temporal pitch in the same group of CI users using three tasks [pitch ranking, rate discrimination, and multidimensional scaling (MDS)]. The patterns of results were consistent across the three tasks, and all subjects could follow rate changes above 300 pps. Two subjects showed an exceptional ability to follow temporal pitch changes up to about 900 pps. Results from the MDS study indicated that, for the two listeners tested, changes in pulse rate over the range of 500-840 pps were perceived along a perceptual dimension orthogonal to the place of excitation. Some subjects showed a temporal pitch reversal at rates beyond their upper limit of pitch, and some showed a reversal within a small range of rates below the upper limit. These results are discussed in relation to the possible neural bases for temporal pitch processing at high rates.


Subject(s)
Cochlear Implantation/instrumentation; Cochlear Implants; Correction of Hearing Impairment/psychology; Cues; Persons With Hearing Impairments/rehabilitation; Pitch Perception; Time Perception; Acoustic Stimulation; Adult; Aged; Audiometry; Choice Behavior; Female; Humans; Male; Middle Aged; Persons With Hearing Impairments/psychology; Psychoacoustics; Signal Detection, Psychological; Time Factors
14.
Ear Hear; 30(2): 160-8, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19194298

ABSTRACT

OBJECTIVE: The primary goal of the present study was to determine how cochlear implant melody recognition was affected by the frequency range of the melodies, the harmonicity of these melodies, and the number of activated electrodes. The secondary goal was to investigate whether melody recognition and speech recognition were differentially affected by the limitations imposed by cochlear implant processing. DESIGN: Four experiments were conducted. In the first experiment, 11 cochlear implant users used their clinical processors to recognize melodies of complex harmonic tones with fundamental frequencies in the low (104-262 Hz), middle (207-523 Hz), and high (414-1046 Hz) ranges. In the second experiment, melody recognition with pure tones was compared to melody recognition with complex harmonic tones in four subjects. In the third experiment, melody recognition was measured as a function of the number of electrodes in five subjects. In the fourth experiment, vowel and consonant recognition were measured as a function of the number of electrodes in the same five subjects who participated in the third experiment. RESULTS: Frequency range significantly affected cochlear implant melody recognition, with higher frequency ranges producing better performance. Pure tones produced significantly better performance than complex harmonic tones. Increasing the number of activated electrodes did not affect performance with low- and middle-frequency melodies but produced better performance with high-frequency melodies. Large individual variability was observed for melody recognition, but its source appeared to differ from the source of the large variability observed in speech recognition. CONCLUSION: Contemporary cochlear implants do not adequately encode either temporal pitch or place pitch cues. Melody recognition and speech recognition will require different signal processing strategies in future cochlear implants.


Subject(s)
Cochlear Implants; Music; Pattern Recognition, Physiological; Pitch Perception; Speech Perception; Acoustic Stimulation/methods; Adult; Aged; Algorithms; Audiometry, Pure-Tone; Electrodes, Implanted; Humans; Middle Aged; Signal Processing, Computer-Assisted
15.
J Acoust Soc Am; 125(3): 1649-57, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19275322

ABSTRACT

A common finding in the cochlear implant literature is that the upper limit of rate discrimination on a single channel is about 300 pps. The present study investigated rate discrimination using a procedure in which, in each block of two-interval trials, the standard could have one of five baseline rates (100, 200, 300, 400, and 500 pps) and the signal rate was a given percentage higher than the standard. Eight Med-El C40+ subjects took part. The pattern of results differed from those reported previously: six Med-El subjects performed better at medium rates (200-300 pps) than at both lower (100 pps) and higher (400-500 pps) rates. A similar pattern of results was obtained both with the method of constant stimuli and for 5000-pps pulse trains amplitude modulated at rates between 100 and 500 Hz. Compared to an unmatched group of eight Nucleus CI24 listeners tested using a similar paradigm and stimuli, Med-El subjects performed significantly better at 300 pps and above but slightly worse at 100 pps. These results are discussed in relation to evidence on the limits of temporal pitch at low and high rates in normal-hearing listeners.


Subject(s)
Cochlear Implants; Pitch Perception; Time Perception; Adult; Aged; Female; Humans; Male; Middle Aged
16.
J Acoust Soc Am; 121(6): 3717-27, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17552722

ABSTRACT

Speech recognition in noise improves with combined acoustic and electric stimulation compared to electric stimulation alone [Kong et al., J. Acoust. Soc. Am. 117, 1351-1361 (2005)]. Here, the contribution of fundamental frequency (F0) and low-frequency phonetic cues to speech recognition in combined hearing was investigated. Normal-hearing listeners heard vocoded speech in one ear and low-pass (LP) filtered speech in the other. Three listening conditions (vocoder-alone, LP-alone, combined) were investigated. Target speech (average F0 = 120 Hz) was mixed with a time-reversed masker (average F0 = 172 Hz) at three signal-to-noise ratios (SNRs). LP speech aided performance at all SNRs. Low-frequency phonetic cues were then removed by replacing the LP speech with an LP equal-amplitude harmonic complex, frequency and amplitude modulated by the F0 and temporal envelope of voiced segments of the target. The combined-hearing advantage disappeared at 10 and 15 dB SNR but persisted at 5 dB SNR. A similar finding occurred when, additionally, F0 contour cues were removed. These results are consistent with a role for low-frequency phonetic cues, but not with a combination of F0 information between the two ears. The enhanced performance at 5 dB SNR with F0 contour cues absent suggests that voicing or glimpsing cues may be responsible for the combined-hearing benefit.
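
A minimal sketch of the F0-bearing replacement signal, assuming per-sample F0 and envelope tracks as inputs (the cutoff and harmonic count are illustrative assumptions):

```python
import numpy as np

def f0_envelope_complex(f0, env, fs, cutoff=500.0, n_harm=8):
    """Equal-amplitude harmonic complex, frequency modulated by the
    target's F0 contour and amplitude modulated by its temporal
    envelope; harmonics above the lowpass cutoff are dropped and
    unvoiced segments (f0 == 0) are silenced."""
    phase = 2.0 * np.pi * np.cumsum(f0) / fs   # running phase of F0
    voiced = f0 > 0
    sig = np.zeros_like(phase)
    for k in range(1, n_harm + 1):
        keep = voiced & (k * f0 <= cutoff)     # band-limit each harmonic
        sig += np.where(keep, np.sin(k * phase), 0.0)
    return env * sig
```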


Subject(s)
Hearing/physiology; Pattern Recognition, Physiological/physiology; Recognition, Psychology/physiology; Speech; Acoustic Stimulation; Cues; Electric Stimulation; Functional Laterality; Humans; Phonation/physiology; Phonetics
17.
J Speech Lang Hear Res; 60(1): 190-198, 2017 Jan 1.
Article in English | MEDLINE | ID: mdl-28056135

ABSTRACT

Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., "Click on the word admiral"). Displays contained a critical pair of words (e.g., ˈadmiral-ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results: Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions: Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress.


Subject(s)
Phonetics; Speech Perception; Eye Movement Measurements; Fixation, Ocular; Humans; Pattern Recognition, Physiological; Reading; Recognition, Psychology; Young Adult
18.
Trends Hear; 20, 2016 Jun 17.
Article in English | MEDLINE | ID: mdl-27317666

ABSTRACT

Multiple redundant acoustic cues can contribute to the perception of a single phonemic contrast. This study investigated the effect of spectral degradation on the discriminability and perceptual saliency of acoustic cues for identification of word-final fricative voicing in "loss" versus "laws", and possible changes that occurred when low-frequency acoustic cues were restored. Three acoustic cues that contribute to the word-final /s/-/z/ contrast (first formant frequency [F1] offset, vowel-consonant duration ratio, and consonant voicing duration) were systematically varied in synthesized words. A discrimination task measured listeners' ability to discriminate differences among stimuli within a single cue dimension. A categorization task examined the extent to which listeners make use of a given cue to label a syllable as "loss" versus "laws" when multiple cues are available. Normal-hearing listeners were presented with stimuli that were either unprocessed, processed with an eight-channel noise-band vocoder to approximate spectral degradation in cochlear implants, or low-pass filtered. Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, and a combined vocoder + low-pass condition that simulated bimodal hearing. Results showed a negative impact of spectral degradation on F1 cue discrimination and a trading relation between spectral and temporal cues in which listeners relied more heavily on the temporal cues for "loss-laws" identification when spectral cues were degraded. Furthermore, the addition of low-frequency fine-structure cues in simulated bimodal hearing increased the perceptual saliency of the F1 cue for "loss-laws" identification compared with vocoded speech. Findings suggest an interplay between the quality of sensory input and cue importance.


Subject(s)
Acoustic Stimulation; Cochlear Implants; Speech Perception; Cochlear Implantation; Cues; Humans; Phonetics
19.
Clin Neurophysiol; 116(3): 669-80, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15721081

ABSTRACT

OBJECTIVE: To objectively study auditory temporal processing in a group of normal-hearing subjects and in a group of hearing-impaired individuals with auditory neuropathy (AN) using electrophysiological and psychoacoustic methods. METHODS: Scalp-recorded evoked potentials were measured in response to brief silent intervals (gaps), varying between 2 and 50 ms, embedded in continuous noise. Latencies and amplitudes of N100 and P200 were measured and analyzed in two conditions: (1) active, pressing a button in response to gaps; (2) passive, listening but not responding. RESULTS: In normal subjects, evoked potentials (N100/P200 components) were recorded in response to gaps as short as 5 ms in both active and passive conditions. Gap-evoked potentials in AN subjects appeared only with prolonged gap durations (10-50 ms). There was a close association between gap detection thresholds measured psychoacoustically and electrophysiologically in both normal and AN subjects. CONCLUSIONS: Auditory cortical potentials can provide objective measures of auditory temporal processes. SIGNIFICANCE: The combination of electrophysiological and psychoacoustic methods converged to provide useful objective measures for studying auditory cortical temporal processing in normal-hearing and hearing-impaired individuals. The procedure used may also provide objective measures of temporal processing for evaluating special populations, such as children, who may not be able to provide subjective responses.
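
A minimal sketch of the gap-in-noise stimulus; sampling rate, overall duration, and gap position are illustrative, while the gap durations follow the 2-50 ms range used in the study:

```python
import numpy as np

def gap_in_noise(gap_ms, fs=44100, dur_s=2.0, gap_at_s=1.0, rng=None):
    """Continuous Gaussian noise with one brief silent interval (gap),
    the stimulus used to evoke the N100/P200 gap responses."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(int(fs * dur_s))
    i0 = int(fs * gap_at_s)
    noise[i0 : i0 + int(fs * gap_ms / 1000)] = 0.0
    return noise

stimulus = gap_in_noise(gap_ms=5)   # a 5-ms gap, near normal threshold
```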


Subject(s)
Auditory Cortex/physiopathology; Auditory Diseases, Central/physiopathology; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Analysis of Variance; Auditory Threshold/physiology; Child; Electroencephalography/methods; Female; Humans; Male; Middle Aged; Persons With Hearing Impairments; Psychoacoustics; Psychomotor Performance/physiology; Reaction Time/physiology; Sex Factors; Time Factors
20.
J Assoc Res Otolaryngol; 16(6): 783-96, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26362546

ABSTRACT

This study investigates the effect of spectral degradation on cortical speech encoding in complex auditory scenes. Young normal-hearing listeners were simultaneously presented with two speech streams and were instructed to attend to only one of them. The speech mixtures were subjected to noise-channel vocoding to preserve the temporal envelope and degrade the spectral information of speech. Each subject was tested with five spectral resolution conditions (unprocessed speech, 64-, 32-, 16-, and 8-channel vocoder conditions) and two target-to-masker ratio (TMR) conditions (3 and 0 dB). Ongoing electroencephalographic (EEG) responses and speech comprehension were measured in each spectral and TMR condition for each subject. Neural tracking of each speech stream was characterized by cross-correlating the EEG responses with the envelope of each of the simultaneous speech streams at different time lags. Results showed that spectral degradation and TMR both significantly influenced how top-down attention modulated the EEG responses to the attended and unattended speech. That is, the EEG responses to the attended and unattended speech streams differed more for the higher (unprocessed, 64 ch, and 32 ch) than the lower (16 and 8 ch) spectral resolution conditions, as well as for the higher (3 dB) than the lower TMR (0 dB) condition. The magnitude of differential neural modulation responses to the attended and unattended speech streams significantly correlated with speech comprehension scores. These results suggest that severe spectral degradation and low TMR hinder speech stream segregation, making it difficult to employ top-down attention to differentially process different speech streams.
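
A minimal sketch of the neural-tracking analysis, assuming one EEG channel and an equal-length, equally sampled speech envelope (the lag range is illustrative):

```python
import numpy as np

def lagged_envelope_xcorr(eeg, envelope, fs, max_lag_ms=500):
    """Correlate an EEG channel with a speech-stream envelope at a
    range of time lags (EEG delayed relative to the stimulus), giving
    one neural-tracking curve per attended/unattended stream."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    lags = np.arange(int(fs * max_lag_ms / 1000) + 1)
    r = np.array([np.corrcoef(env[: len(env) - L], eeg[L:])[0, 1]
                  for L in lags])
    return lags / fs, r   # lag in seconds, correlation at each lag
```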


Subject(s)
Auditory Cortex/physiology; Speech Acoustics; Speech Perception/physiology; Adult; Attention/physiology; Comprehension; Female; Healthy Volunteers; Humans; Male; Young Adult