Results 1 - 20 of 22
1.
bioRxiv ; 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38979194

ABSTRACT

Objectives: To provide a level-adjusted correction to the current standard relating anatomical cochlear place to characteristic frequency in humans, and to re-evaluate anatomical frequency mismatch in cochlear implant (CI) recipients considering this correction. It is hypothesized that a level-adjusted place-frequency function may represent a more accurate tonotopic benchmark for CIs in comparison to the current standard. Design: The present analytical study compiled data from fifteen previous animal studies that reported iso-intensity responses from cochlear structures at different stimulation levels. Extracted outcome measures were characteristic frequencies and centroid-based best frequencies at 70 dB SPL input from 47 specimens spanning a broad range of cochlear locations. A simple relationship was used to transform these measures to human estimates of characteristic and best frequencies, and non-linear regression was applied to these estimates to determine how the standard human place-frequency function should be adjusted to reflect best frequency rather than characteristic frequency. The proposed level-adjusted correction was then compared to average place-frequency positions of commonly used CI devices when programmed with clinical settings. Results: The present study showed that the best frequency at 70 dB SPL (BF70) tends to shift away from characteristic frequency (CF). The amount of shift was statistically significant (signed-rank test z = 5.143, p < 0.001), but the amount and direction of shift depended on cochlear location. At cochlear locations up to 600° from the base, BF70 shifted downwards in frequency relative to CF by about 4 semitones on average. Beyond 600° from the base, BF70 shifted upwards in frequency relative to CF by about 6 semitones on average. In terms of spread (90% prediction interval), the amount of shift between CF and BF70 varied from essentially no shift to nearly an octave of shift. With the new level-adjusted frequency-place function, the amount of anatomical frequency mismatch for devices programmed with standard-of-care settings is less extreme than originally thought, and may be nonexistent for all but the most apical electrodes. Conclusions: The present study validates the current standard for relating cochlear place to characteristic frequency, and introduces a level-adjusted correction for how best frequency shifts away from characteristic frequency at moderately loud stimulation levels. This correction may represent a more accurate tonotopic reference for CIs. To the extent that it does, its implementation may potentially enhance perceptual accommodation and speech understanding in CI users, thereby improving CI outcomes and contributing to advancements in the programming and clinical management of CIs.
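For readers who want to experiment with the relationship described here, the following minimal Python sketch uses Greenwood's (1990) human place-frequency function together with a crude step approximation of the average shifts reported above. The 900° total insertion angle, the linear angle-to-place conversion, and the step-shaped shift are all illustrative assumptions; the paper's actual correction is a smooth regression fit.

```python
import numpy as np

A, a, k = 165.4, 2.1, 0.88  # Greenwood (1990) constants for humans

def greenwood_cf(x_from_apex):
    """Characteristic frequency (Hz) at fractional distance x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x_from_apex) - k)

def cf_at_angle(theta_deg, total_deg=900.0):
    """CF at an insertion angle measured from the base.
    Assumes ~2.5 turns (900 deg) and a linear angle-to-place relation,
    both simplifications; the true relation is nonlinear."""
    x_from_apex = 1.0 - theta_deg / total_deg
    return greenwood_cf(x_from_apex)

def bf70_at_angle(theta_deg):
    """Illustrative level-adjusted best frequency at 70 dB SPL: a step
    approximation of the average shifts reported above (-4 semitones up to
    600 deg from the base, +6 semitones beyond it)."""
    shift_semitones = -4.0 if theta_deg <= 600.0 else 6.0
    return cf_at_angle(theta_deg) * 2.0 ** (shift_semitones / 12.0)

for theta in (90, 300, 600, 720):
    print(f"{theta:4d} deg: CF = {cf_at_angle(theta):7.0f} Hz, "
          f"BF70 ~ {bf70_at_angle(theta):7.0f} Hz")
```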

2.
Otol Neurotol ; 42(10S): S2-S10, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34766938

ABSTRACT

HYPOTHESIS: This study tests the hypothesis that it is possible to find tone or noise vocoders that sound similar and result in similar speech perception scores to a cochlear implant (CI). This would validate the use of such vocoders as acoustic models of CIs. We further hypothesize that those valid acoustic models will require a personalized amount of frequency mismatch between input filters and output tones or noise bands. BACKGROUND: Noise or tone vocoders have been used as acoustic models of CIs in hundreds of publications but have never been convincingly validated. METHODS: Acoustic models were evaluated by single-sided deaf CI users who compared what they heard with the CI in one ear to what they heard with the acoustic model in the other ear. We evaluated frequency-matched models (both all-channel and 6-channel models, both tone and noise vocoders) as well as self-selected models that included an individualized level of frequency mismatch. RESULTS: Self-selected acoustic models resulted in similar levels of speech perception and similar perceptual quality as the CI. These models also matched the CI in terms of perceived intelligibility, harshness, and pleasantness. CONCLUSION: Valid acoustic models of CIs exist, but they are different from the models most widely used in the literature. Individual amounts of frequency mismatch may be required to optimize the validity of the model. This may be related to the basalward frequency mismatch experienced by postlingually deaf patients after cochlear implantation.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation/methods , Acoustics , Cochlear Implantation/methods , Humans , Noise
3.
J Acoust Soc Am ; 150(4): 2316, 2021 10.
Article in English | MEDLINE | ID: mdl-34717490

ABSTRACT

Binaural unmasking, a key feature of normal binaural hearing, refers to the improved intelligibility of masked speech that can occur when added masking facilitates the perceived separation of target and masker. A question relevant for cochlear implant users with single-sided deafness (SSD-CI) is whether binaural unmasking can still be achieved if the additional masking is spectrally degraded and shifted. CIs restore some aspects of binaural hearing to these listeners, although binaural unmasking remains limited. Notably, these listeners may experience a mismatch between the frequency information perceived through the CI and that perceived by their normal-hearing ear. Employing acoustic simulations of SSD-CI with normal-hearing listeners, the present study confirms the finding of a previous simulation study that binaural unmasking is severely limited when the interaural frequency mismatch between the input frequency range and the simulated place of stimulation exceeds 1-2 mm. The present study also shows that binaural unmasking is largely retained when the input frequency range is adjusted to match the simulated place of stimulation, even at the expense of removing low-frequency information. This result has implications for the mechanisms driving the type of binaural unmasking observed in the present study and for mapping the frequency range of the CI speech processor in SSD-CI users.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Deafness/diagnosis , Hearing , Humans
4.
Front Neurol ; 12: 724800, 2021.
Article in English | MEDLINE | ID: mdl-35087462

ABSTRACT

Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level. However, few tests analyze errors at the phoneme level. Thus, there is a need for an automated program to visualize the accuracy of phonemes in these tests in real time. Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with modified costs based on phonological features for insertions, deletions, and substitutions. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram. Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments in which 31 participants with cochlear implants listened to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs. Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
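The alignment step described above can be sketched as a standard weighted Levenshtein computation via dynamic programming. The feature table and cost values below are toy stand-ins, not the program's actual phonological-feature costs or its dictionary's phoneme set:

```python
def align(stim, resp, sub_cost, indel=1.0):
    """Levenshtein-style alignment cost of two phoneme sequences.
    sub_cost(a, b) -> substitution cost; insertions/deletions cost `indel`."""
    m, n = len(stim), len(resp)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * indel
    for j in range(1, n + 1):
        D[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + indel,            # deletion
                          D[i][j - 1] + indel,            # insertion
                          D[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]))
    return D[m][n]

# Hypothetical feature table: substitutions are cheaper between phonemes
# that share features (voicing, place, manner) -- not the real feature set.
FEATURES = {"P": {"stop", "labial"}, "B": {"stop", "labial", "voiced"},
            "T": {"stop", "alveolar"}, "S": {"fricative", "alveolar"}}

def sub_cost(a, b):
    if a == b:
        return 0.0
    fa, fb = FEATURES.get(a, set()), FEATURES.get(b, set())
    union = fa | fb
    return 2.0 * (1.0 - len(fa & fb) / len(union)) if union else 2.0

print(align(["P", "T"], ["B", "T"], sub_cost))  # small cost: P/B differ only in voicing
```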

5.
Hear Res ; 370: 316-328, 2018 12.
Article in English | MEDLINE | ID: mdl-30396747

ABSTRACT

A potential bottleneck to improving speech perception performance in cochlear implant (CI) users is that some of their electrodes may poorly encode speech information. Several studies have examined the effect of deactivating poorly encoding electrodes on speech perception, with mixed results. Many of these studies focused on identifying poorly encoding electrodes by some measure (e.g., electrode discrimination, pitch ordering, threshold, CT-guided, masked modulation detection), but provided inconsistent criteria about which electrodes, and how many, should be deactivated, without considering how speech information becomes distributed across the electrode array. The present simulation study addresses this issue using computational approaches. Previously validated models were used to generate predictions of speech scores as a function of all possible combinations of active electrodes in a 22-electrode array in three groups of hypothetical subjects representative of relatively better, moderate, and poorer performing CI users. Using high-performance computing, over 500 million predictions were generated. Although deactivation of the poorest encoding electrodes sometimes resulted in predicted benefit, this benefit was significantly smaller than that predicted from model-optimized deactivations. This trend persisted when using novel stimuli (i.e., other than those used for optimization) and when using different processing strategies. Optimum electrode deactivation patterns produced an average predicted increase in word scores of 10%, with some scores increasing by more than 20%. Optimum electrode deactivation patterns typically included 11 to 19 (out of 22) active electrodes, depending on the performance group. Optimal active electrode combinations were those that maximized discrimination of speech cues while maintaining 80%-100% of the physical span of the array. The present study demonstrates the potential for further improving CI users' speech scores with appropriate selection of active electrodes.
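The combinatorial search described above can be illustrated with a brute-force enumeration of active-electrode subsets. The scoring function below is an arbitrary placeholder for the study's validated speech-score models, and the array is shortened to 10 electrodes so the example runs quickly (with 22 electrodes there are 2^22 ≈ 4.2 million patterns per model, which is what motivated high-performance computing):

```python
import itertools

N_ELECTRODES = 10  # 22 in the study

def predicted_score(active):
    """Placeholder for a validated speech-score model; this toy function
    merely rewards keeping ~70% of electrodes while preserving array span."""
    if not active:
        return 0.0
    span = (max(active) - min(active) + 1) / N_ELECTRODES
    density = len(active) / N_ELECTRODES
    return span * (1.0 - abs(density - 0.7))

# Exhaustive search over all non-empty subsets of active electrodes.
best = max(
    (frozenset(c)
     for r in range(1, N_ELECTRODES + 1)
     for c in itertools.combinations(range(N_ELECTRODES), r)),
    key=predicted_score,
)
print(sorted(best), predicted_score(best))
```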


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Computer Simulation , Hearing Loss/rehabilitation , Models, Theoretical , Persons With Hearing Impairments/rehabilitation , Signal Processing, Computer-Assisted , Speech Perception , Acoustic Stimulation , Cues , Electric Stimulation , Hearing Loss/physiopathology , Hearing Loss/psychology , Humans , Persons With Hearing Impairments/psychology
6.
Otol Neurotol ; 38(8): e253-e261, 2017 09.
Article in English | MEDLINE | ID: mdl-28806335

ABSTRACT

HYPOTHESIS: A novel smartphone-based software application can facilitate self-selection of frequency allocation tables (FAT) in postlingually deaf cochlear implant (CI) users. BACKGROUND: CIs use FATs to represent the tonotopic organization of a normal cochlea. Current CI fitting methods typically use a standard FAT for all patients regardless of individual differences in cochlear size and electrode location. In postlingually deaf patients, different amounts of mismatch can result between the frequency-place function they experienced when they had normal hearing and the frequency-place function that results from the standard FAT. For some CI users, an alternative FAT may enhance sound quality or speech perception. Currently, no widely available tools exist to aid real-time selection of different FATs. This study aims to develop a new smartphone tool for this purpose and to evaluate speech perception and sound quality measures in a pilot study of CI subjects using this application. METHODS: A smartphone application for a widely available mobile platform (iOS) was developed to serve as a preprocessor of auditory input to a clinical CI speech processor and enable interactive real-time selection of FATs. The application's output was validated by measuring electrodograms for various inputs. A pilot study was conducted in six CI subjects. Speech perception was evaluated using word recognition tests. RESULTS: All subjects successfully used the portable application with their clinical speech processors to experience different FATs while listening to running speech. The users were all able to select one table that they judged provided the best sound quality. All subjects chose a FAT different from the standard FAT in their everyday clinical processor. Using the smartphone application, the mean consonant-nucleus-consonant score with the default FAT selection was 28.5% (SD 16.8) and 29.5% (SD 16.4) when using a self-selected FAT. CONCLUSION: A portable smartphone application enables CI users to self-select frequency allocation tables in real time. Even though the self-selected FATs that were deemed to have better sound quality were only tested acutely (i.e., without long-term experience with them), speech perception scores were not inferior to those obtained with the clinical FATs. This software application may be a valuable tool for improving future methods of CI fitting.
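As a rough illustration of what a frequency allocation table is, the sketch below builds log-spaced analysis bands for an electrode array. The frequency range and spacing are assumptions chosen for illustration; clinical FATs use device-specific band edges:

```python
import numpy as np

def make_fat(n_channels=22, f_low=200.0, f_high=8000.0):
    """Illustrative frequency allocation table: log-spaced band edges
    assigning an analysis band to each electrode (apical = low frequency).
    The 200-8000 Hz range is an assumption, not any device's default."""
    edges = np.geomspace(f_low, f_high, n_channels + 1)
    return list(zip(edges[:-1], edges[1:]))

for ch, (lo, hi) in enumerate(make_fat(6, 200, 8000), 1):
    print(f"channel {ch}: {lo:6.0f} - {hi:6.0f} Hz")
```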


Subject(s)
Cochlear Implants , Smartphone , Software , Adult , Aged , Auditory Perception , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments , Pilot Projects , Speech Perception , Young Adult
7.
J Acoust Soc Am ; 141(2): 1027, 2017 02.
Article in English | MEDLINE | ID: mdl-28253672

ABSTRACT

Cochlear implant (CI) recipients have difficulty understanding speech in noise even at moderate signal-to-noise ratios. Knowing the mechanisms they use to understand speech in noise may facilitate the search for better speech processing algorithms. In the present study, a computational model is used to assess whether CI users' vowel identification in noise can be explained by formant frequency cues (F1 and F2). Vowel identification was tested with 12 unilateral CI users in quiet and in noise. Formant cues were measured from vowels in each condition, specific to each subject's speech processor. Noise distorted the location of vowels in the F2 vs F1 plane in comparison to quiet. The model that best fit subjects' data in quiet produced predictions in noise that were within 8% of actual scores on average. Predictions in noise were much better when assuming that subjects used a priori knowledge regarding how formant information is degraded in noise (experiment 1). However, the model's best fit to subjects' confusion matrices in noise was worse than in quiet, suggesting that CI users utilize formant cues to identify vowels in noise, but to a different extent than when they identify vowels in quiet (experiment 2).
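The kind of formant-cue model described above can be sketched as nearest-template classification in a log-frequency F1/F2 plane with Gaussian perceptual noise. The formant values are textbook-style approximations and the jitter parameter is a stand-in for the model's fitted free parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate F1/F2 means (Hz) for four vowels -- illustrative values only
TEMPLATES = {"i": (270, 2290), "ae": (660, 1720), "a": (730, 1090), "u": (300, 870)}

def identify(f1, f2, jitter_semitones=1.5, n_trials=1000):
    """Nearest-template vowel identification with Gaussian perceptual noise
    on a log-frequency axis; the jitter SD plays the role of the model's
    single free parameter."""
    names = list(TEMPLATES)
    T = np.log2(np.array([TEMPLATES[v] for v in names]))
    counts = dict.fromkeys(names, 0)
    for _ in range(n_trials):
        obs = np.log2([f1, f2]) + rng.normal(0, jitter_semitones / 12, size=2)
        counts[names[np.argmin(np.sum((T - obs) ** 2, axis=1))]] += 1
    return counts

print(identify(*TEMPLATES["a"]))  # mostly "a", occasional confusion with "ae"
```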


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Noise/adverse effects , Perceptual Masking , Persons With Hearing Impairments/rehabilitation , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Algorithms , Cues , Electric Stimulation , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Signal Processing, Computer-Assisted
8.
J Acoust Soc Am ; 139(1): 1-11, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26826999

ABSTRACT

Even though speech signals trigger coding in the cochlea to convey speech information to the central auditory structures, little is known about the neural mechanisms involved in such processes. The purpose of this study was to understand the encoding of formant cues and how it relates to vowel recognition in listeners. Neural representations of formants may differ across listeners; however, it was hypothesized that neural patterns could still predict vowel recognition. To test the hypothesis, the frequency-following response (FFR) and vowel recognition were obtained from 38 normal-hearing listeners using four different vowels, allowing direct comparisons between behavioral and neural data in the same individuals. FFR was employed because it provides an objective and physiological measure of neural activity that can reflect formant encoding. A mathematical model was used to describe vowel confusion patterns based on the neural responses to vowel formant cues. The major findings were (1) there were large variations in the accuracy of vowel formant encoding across listeners as indexed by the FFR, (2) these variations were systematically related to vowel recognition performance, and (3) the mathematical model of vowel identification was successful in predicting good vs poor vowel identification performers based exclusively on physiological data.


Subject(s)
Recognition, Psychology/physiology , Speech Perception/physiology , Adult , Aged , Cochlea/physiology , Cues , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Middle Aged , Models, Neurological , Monte Carlo Method , Perceptual Masking/physiology , Phonetics , Speech Acoustics , Young Adult
9.
Acta Otolaryngol ; 135(4): 354-63, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25719506

ABSTRACT

CONCLUSION: The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses. OBJECTIVES: To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. METHODS: Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. RESULTS: Both listeners showed remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. The alternative frequency-to-electrode map that was tested resulted in substantial increases in speech perception for one subject in the short insertion ear.


Subject(s)
Auditory Perception/physiology , Cochlear Implantation/methods , Cochlear Implants , Deafness/therapy , Adult , Deafness/physiopathology , Female , Humans
10.
Ear Hear ; 34(6): 763-72, 2013.
Article in English | MEDLINE | ID: mdl-23807089

ABSTRACT

OBJECTIVES: Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and, if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the present investigation is to test whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. DESIGN: Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained a frequency table that sounded "most intelligible" to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. After obtaining a self-selected table, the authors measured consonant-nucleus-consonant word-recognition scores with that self-selected table and two other frequency tables: a "frequency-matched" table that matched the analysis filters with the noise bands of the noise-vocoder simulation, and a "right information" table that is similar to that used in most CI speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. RESULTS: Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from, the frequency-matched table. The real-time selection process took on average 2 to 3 min for each trial, and the between-trial variability was comparable with that previously observed with closely related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. CONCLUSIONS: Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and for finding a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable with that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.


Subject(s)
Acoustic Stimulation/methods , Audiology/methods , Auditory Perception/physiology , Cochlear Implants/standards , Deafness/rehabilitation , Speech Perception/physiology , Adult , Cochlear Implantation/methods , Cochlear Implantation/standards , Computer Simulation , Feasibility Studies , Female , Humans , Male , Middle Aged
11.
Article in English | MEDLINE | ID: mdl-25435816

ABSTRACT

Acoustic models have been used in numerous studies over the past thirty years to simulate the percepts elicited by auditory neural prostheses. In these acoustic models, incoming signals are processed the same way as in a cochlear implant speech processor. The percepts that would be caused by electrical stimulation in a real cochlear implant are simulated by modulating the amplitude of either noise bands or sinusoids. Despite their practical usefulness, these acoustic models have never been convincingly validated. This study presents a tool to conduct such validation using subjects who have a cochlear implant in one ear and near-perfect hearing in the other ear, allowing for the first time a direct perceptual comparison of the output of acoustic models to the stimulation provided by a cochlear implant.
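A generic noise vocoder of the kind referred to above can be sketched as follows; the band edges and envelope cutoff are illustrative choices, not those of any particular device or study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, edges=(200, 500, 1100, 2300, 4800, 7000), env_cut=160.0):
    """Generic N-channel noise vocoder: band-pass analysis, envelope
    extraction (rectify + low-pass), and envelope-modulated noise carriers.
    Band edges and envelope cutoff are illustrative assumptions."""
    y = np.zeros_like(x, dtype=float)
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(sosfiltfilt(sos, x)))  # channel envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(x)))      # band-limited noise
        y += np.clip(env, 0, None) * carrier
    return y

fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
out = noise_vocode(speechlike, fs)
```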

12.
J Am Acad Audiol ; 23(6): 422-37, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22668763

ABSTRACT

The Laboratory of Translational Auditory Research (LTAR/NYUSM) is part of the Department of Otolaryngology at the New York University School of Medicine and has close ties to the New York University Cochlear Implant Center. LTAR investigators have expertise in multiple related disciplines including speech and hearing science, audiology, engineering, and physiology. The lines of research in the laboratory deal mostly with speech perception by hearing impaired listeners, and particularly those who use cochlear implants (CIs) or hearing aids (HAs). Although the laboratory's research interests are diverse, there are common threads that permeate and tie all of its work. In particular, a strong interest in translational research underlies even the most basic studies carried out in the laboratory. Another important element is the development of engineering and computational tools, which range from mathematical models of speech perception to software and hardware that bypass clinical speech processors and stimulate cochlear implants directly, to novel ways of analyzing clinical outcomes data. If the appropriate tool to conduct an important experiment does not exist, we may work to develop it, either in house or in collaboration with academic or industrial partners. Another notable characteristic of the laboratory is its interdisciplinary nature where, for example, an audiologist and an engineer might work closely to develop an approach that would not have been feasible if each had worked singly on the project. Similarly, investigators with expertise in hearing aids and cochlear implants might join forces to study how human listeners integrate information provided by a CI and a HA. The following pages provide a flavor of the diversity and the commonalities of our research interests.


Subject(s)
Audiology , Cochlear Implantation , Cochlear Implants , Hearing Loss/therapy , Auditory Perception/physiology , Biomedical Technology , Hearing Loss/pathology , Hearing Loss/physiopathology , Humans , New York City , Translational Research, Biomedical , Universities
13.
J Acoust Soc Am ; 129(4): 2191-200, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21476674

ABSTRACT

The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.


Subject(s)
Cochlear Implants , Deafness/physiopathology , Models, Neurological , Phonetics , Speech Perception/physiology , Adolescent , Adult , Aged , Cues , Deafness/therapy , Humans , Middle Aged , Psychoacoustics , Young Adult
14.
J Acoust Soc Am ; 127(2): 1069-83, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20136228

ABSTRACT

A simple mathematical model is presented that predicts vowel identification by cochlear implant users based on these listeners' resolving power for the mean locations of first, second, and/or third formant energies along the implanted electrode array. This psychophysically based model provides hypotheses about the mechanism cochlear implant users employ to encode and process the input auditory signal to extract information relevant for identifying steady-state vowels. Using one free parameter, the model predicts most of the patterns of vowel confusions made by users of different cochlear implant devices and stimulation strategies, who show widely different levels of speech perception (from near chance to near perfect). Furthermore, the model can predict results from the literature, such as the frequency-mapping study of Skinner et al. [(1995). Ann. Otol. Rhinol. Laryngol. 104, 307-311] and the general trend in the vowel results of Zeng and Galvin's [(1999). Ear Hear. 20, 60-74] studies of output electrical dynamic range reduction. The implementation of the model presented here is specific to vowel identification by cochlear implant users, but the framework of the model is more general. Computational models such as the one presented here can be useful for advancing knowledge about speech perception in hearing impaired populations, and for providing a guide for clinical research and clinical practice.


Subject(s)
Cochlear Implants , Models, Neurological , Phonetics , Speech Perception , Acoustic Stimulation , Adult , Aged , Algorithms , Computer Simulation , Humans , Information Theory , Mathematical Concepts , Middle Aged , Psychoacoustics , Psycholinguistics , Speech , Young Adult
15.
J Assoc Res Otolaryngol ; 11(1): 69-78, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19774412

ABSTRACT

In the present study, a computational model of phoneme identification was applied to data from a previous study, wherein cochlear implant (CI) users' adaptation to a severely shifted frequency allocation map was assessed regularly over 3 months of continual use. This map provided more input filters below 1 kHz, but at the expense of introducing a downwards frequency shift of up to one octave in relation to the CI subjects' clinical maps. At the end of the 3-month study period, it was unclear whether subjects' asymptotic speech recognition performance represented a complete or partial adaptation. To clarify the matter, the computational model was applied to the CI subjects' vowel identification data in order to estimate the degree of adaptation, and to predict performance levels with complete adaptation to the frequency shift. Two model parameters were used to quantify this adaptation; one representing the listener's ability to shift their internal representation of how vowels should sound, and the other representing the listener's uncertainty in consistently recalling these representations. Two of the three CI users could shift their internal representations towards the new stimulation pattern within 1 week, whereas one could not do so completely even after 3 months. Subjects' uncertainty in recalling these representations increased substantially with the frequency-shifted map. Although this uncertainty decreased after 3 months, it remained much larger than subjects' uncertainty with their clinically assigned maps. This result suggests that subjects could not completely remap their phoneme labels, stored in long-term memory, towards the frequency-shifted vowels. The model also predicted that even with complete adaptation, the frequency-shifted map would not have resulted in improved speech understanding. Hence, the model presented here can be used to assess adaptation, and the anticipated gains in speech perception expected from changing a given CI device parameter.


Subject(s)
Adaptation, Physiological/physiology , Cochlear Implants , Models, Neurological , Phonetics , Speech Perception/physiology , Acoustic Stimulation , Adult , Audiometry, Speech , Humans , Middle Aged , Noise , Predictive Value of Tests
16.
J Speech Lang Hear Res ; 52(2): 385-95, 2009 Apr.
Article in English | MEDLINE | ID: mdl-18806216

ABSTRACT

PURPOSE: This study examined the ability of listeners using cochlear implants (CIs) and listeners with normal hearing (NH) to identify silent gaps of different duration and the relation of this ability to speech understanding in CI users. METHOD: Sixteen NH adults and 11 postlingually deafened adults with CIs identified synthetic vowel-like stimuli that were either continuous or contained an intervening silent gap ranging from 15 ms to 90 ms. Cumulative d', an index of discriminability, was calculated for each participant. Consonant and consonant-nucleus-consonant (CNC) word identification tasks were administered to the CI group. RESULTS: Overall, the ability to identify stimuli with gaps of different duration was better for the NH group than for the CI group. Seven CI users had cumulative d' scores that were no higher than those of any NH listener, and their CNC word scores ranged from 0% to 30%. The other 4 CI users had cumulative d' scores within the range of the NH group, and their CNC word scores ranged from 46% to 68%. For the CI group, cumulative d' scores were significantly correlated with their speech testing scores. CONCLUSIONS: The ability to identify silent gap duration may help explain individual differences in speech perception by CI users.
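Cumulative d' can be computed from an identification confusion matrix in several ways; the sketch below uses one common construction (z-transformed proportions of "longer" responses for adjacent stimulus pairs, summed), which may differ in detail from the study's exact procedure:

```python
import numpy as np
from scipy.stats import norm

def cumulative_dprime(confusion):
    """Cumulative d' from an identification confusion matrix
    (rows = stimuli ordered by gap duration, cols = responses).
    For each adjacent stimulus pair, d' is computed from the proportions
    of responses above a criterion placed between the two categories;
    this is one standard construction, not necessarily the study's."""
    C = np.asarray(confusion, dtype=float)
    P = C / C.sum(axis=1, keepdims=True)      # response probabilities per stimulus
    eps = 0.5 / C.sum(axis=1).max()           # guard against z(0) or z(1)
    total = 0.0
    for i in range(len(P) - 1):
        fa = np.clip(P[i, i + 1:].sum(), eps, 1 - eps)       # "higher" responses, stim i
        hit = np.clip(P[i + 1, i + 1:].sum(), eps, 1 - eps)  # "higher" responses, stim i+1
        total += norm.ppf(hit) - norm.ppf(fa)
    return total

# Toy 4-step identification data (e.g., gaps of 15/30/60/90 ms)
conf = [[18, 2, 0, 0],
        [3, 14, 3, 0],
        [0, 4, 13, 3],
        [0, 0, 2, 18]]
print(f"cumulative d' = {cumulative_dprime(conf):.2f}")
```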


Subject(s)
Cochlear Implants , Speech Perception , Time Perception , Adult , Aged , Analysis of Variance , Humans , Middle Aged , Speech , Time , Young Adult
17.
J Acoust Soc Am ; 123(5): 2848-57, 2008 May.
Article in English | MEDLINE | ID: mdl-18529200

ABSTRACT

Information transfer analysis [G. A. Miller and P. E. Nicely, J. Acoust. Soc. Am. 27, 338-352 (1955)] is a tool used to measure the extent to which speech features are transmitted to a listener, e.g., duration or formant frequencies for vowels; voicing, place and manner of articulation for consonants. An information transfer of 100% occurs when no confusions arise between phonemes belonging to different feature categories, e.g., between voiced and voiceless consonants. Conversely, an information transfer of 0% occurs when performance is purely random. As asserted by Miller and Nicely, the maximum-likelihood estimate for information transfer is biased to overestimate its true value when the number of stimulus presentations is small. This small-sample bias is examined here for three cases: a model of random performance with pseudorandom data, a data set drawn from Miller and Nicely, and reported data from three studies of speech perception by hearing impaired listeners. The amount of overestimation can be substantial, depending on the number of samples, the size of the confusion matrix analyzed, as well as the manner in which data are partitioned therein.
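The quantity at issue can be computed directly from a confusion matrix: relative information transfer is the mutual information between stimulus and response divided by the stimulus entropy, estimated here by the plug-in (maximum-likelihood) method. The sketch below also reproduces the small-sample overestimation on pseudorandom data, where the true transfer is zero:

```python
import numpy as np

def relative_information_transfer(confusion):
    """Relative information transfer T = I(S;R) / H(S) from a confusion
    matrix (rows = stimuli, cols = responses), using the maximum-likelihood
    (plug-in) estimate whose small-sample bias is examined in the paper."""
    N = np.asarray(confusion, dtype=float)
    n = N.sum()
    p_s = N.sum(axis=1) / n
    p_r = N.sum(axis=0) / n
    p_sr = N / n
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_sr * np.log2(p_sr / np.outer(p_s, p_r))
    mi = np.nansum(terms)                                   # I(S;R) in bits
    h_s = -np.sum(p_s[p_s > 0] * np.log2(p_s[p_s > 0]))     # stimulus entropy
    return mi / h_s

# Bias demo: purely random responding has true T = 0, yet the plug-in
# estimate is positive for small numbers of presentations.
rng = np.random.default_rng(1)
for n_trials in (20, 200, 2000):
    C = np.zeros((4, 4))
    for s in rng.integers(0, 4, n_trials):
        C[s, rng.integers(0, 4)] += 1
    print(n_trials, round(relative_information_transfer(C), 3))
```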


Subject(s)
Communication , Hearing/physiology , Phonation , Speech/physiology , Bias , Humans , Information Dissemination/methods , Mathematics , Models, Biological , Probability , Speech Intelligibility
18.
Otol Neurotol ; 29(2): 168-73, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18165793

ABSTRACT

OBJECTIVE: To assess word recognition and pitch-scaling abilities of cochlear implant users first implanted with a Nucleus 10-mm Hybrid electrode array and then reimplanted with a full length Nucleus Freedom array after loss of residual hearing. BACKGROUND: Although electroacoustic stimulation is a promising treatment for patients with residual low-frequency hearing,a small subset of them lose that residual hearing. It is not clear whether these patients would be better served by leaving in the 10-mm array and providing electric stimulation through it, or by replacing it with a standard full-length array. METHODS: Word recognition and pitch-scaling abilities were measured in 2 users of hybrid cochlear implants who lost their residual hearing in the implanted ear after a few months. Tests were repeated over several months, first with a 10-mm array, and after, these patients were reimplanted with a full array. The word recognition task consisted of 2 50-word consonant nucleus consonant (CNC) lists. In the pitch-scaling task, 6 electrodes were stimulated in pseudorandom order, and patients assigned a pitch value to the sensation elicited by each electrode. RESULTS: Shortly after reimplantation with the full electrode array, speech understanding was much better than with the 10-mm array. Patients improved their ability to perform the pitch-scaling task over time with the full array, although their performance on that task was variable, and the improvements were often small. CONCLUSION: 1) Short electrode arrays may help preserve residual hearing but may also provide less benefit than traditional cochlear implants for some patients. 2) Pitch percepts in response to electric stimulation may be modified by experience.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss/therapy , Aged , Aged, 80 and over , Audiometry , Electrodes , Humans , Male , Pitch Perception/physiology , Replantation , Speech Perception/physiology
19.
Ear Hear ; 28(4): 571-9, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17609617

ABSTRACT

OBJECTIVES: To examine the conclusions and possible misinterpretations that may or may not be drawn from the "outcome-matching method," a study design recently used in the cochlear implant literature. In this method, subject groups are matched not only on potentially confounding variables but also on an outcome measure that is closely related to the outcome measure under analysis. For example, subjects may be matched according to their speech perception scores in quiet, and their speech perception in noise is compared. DESIGN: The present study includes two components, a simulation study and a questionnaire. In the simulation study, the outcome-matching method was applied to pseudo-randomly generated data. Simulated speech perception scores in quiet and in noise were generated for two comparison groups, in two imaginary worlds. In both worlds, comparison group A performed only slightly worse in noise than in quiet, whereas comparison group B performed significantly worse in noise than in quiet. In Imaginary World 1, comparison group A had better speech perception scores than comparison group B. In Imaginary World 2, comparison group B had better speech perception scores than comparison group A. The outcome-matching method was applied to these data twice in each imaginary world: 1) matching scores in quiet and comparing in noise, and 2) matching scores in noise and comparing in quiet. This procedure was repeated 10,000 times. The second part of the study was conducted to address the level of misinterpretation that could arise from the outcome-matching method. A questionnaire was administered to 54 students in a senior level course on speech and hearing to assess their opinions about speech perception with two different models of cochlear implant devices. The students were instructed to fill out the questionnaire before and after reading a paper that used the outcome-matching method to examine speech perception in noise and in quiet with those two cochlear implant devices. RESULTS: When pseudorandom scores were matched in quiet, comparison group A's scores in noise were significantly better than comparison group B's scores. Results were different when scores were matched in noise: in this case, comparison group B's scores in quiet were significantly better than comparison group A's scores. Thus, the choice of outcome measure used for matching determined the result of the comparison. Additionally, results of the comparisons were identical regardless of whether they were conducted using data from Imaginary World 1 (where comparison group A is better) or from Imaginary World 2 (where comparison group B is better). After reading the paper that used the outcome-matching method, students' opinions about the two cochlear implants underwent a significant change even though, according to the simulation study, this opinion change was not warranted by the data. CONCLUSIONS: The outcome-matching method can provide important information about differences within a comparison group, but it cannot be used to determine whether a given device or clinical intervention is better than another one. Care must be used when interpreting the results of a study using the outcome-matching method.
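The core of the simulation component can be sketched as follows. All distributions, drop sizes, and the matching tolerance are illustrative, but the qualitative outcome mirrors the one described above: the direction of the "matched" comparison is decided by which measure is used for matching, not by which group is actually better:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(offset_b, n=200):
    """One imaginary world: group A loses little in noise, group B loses a
    lot; offset_b shifts group B's overall ability (positive = B better).
    All parameter values are illustrative."""
    quiet_a = rng.normal(70, 10, n)
    quiet_b = rng.normal(70 + offset_b, 10, n)
    noise_a = quiet_a - rng.normal(5, 3, n)    # small quiet-to-noise drop
    noise_b = quiet_b - rng.normal(20, 3, n)   # large quiet-to-noise drop
    return quiet_a, noise_a, quiet_b, noise_b

def match_and_compare(match_a, compare_a, match_b, compare_b, tol=1.0):
    """Pair subjects across groups with near-equal matching scores, then
    compare the other measure within the matched pairs (mean A - B)."""
    diffs, used = [], set()
    for i, m in enumerate(match_a):
        j = int(np.argmin(np.abs(match_b - m)))
        if abs(match_b[j] - m) < tol and j not in used:
            used.add(j)
            diffs.append(compare_a[i] - compare_b[j])
    return np.mean(diffs)

for world, offset in (("A better", -10), ("B better", +10)):
    qa, na, qb, nb = simulate(offset)
    print(world,
          "| matched in quiet, A-B in noise:", round(match_and_compare(qa, na, qb, nb), 1),
          "| matched in noise, A-B in quiet:", round(match_and_compare(na, qa, nb, qb), 1))
```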


Subject(s)
Cochlear Implants , Deafness/therapy , Speech Perception , Audiometry/instrumentation , Audiometry/statistics & numerical data , Computer Simulation , Humans , Noise/adverse effects , Surveys and Questionnaires
20.
Can J Exp Psychol ; 61(1): 64-70, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17479743

ABSTRACT

It is well known that discrimination response variability increases with stimulus intensity, a phenomenon closely related to Weber's law. It is also axiomatic that sensation magnitude increases with stimulus intensity. Following earlier researchers such as Thurstone, Garner, and Durlach and Braida, we explored a new method of exploiting these relationships to estimate the power function exponent relating sound pressure level to loudness, using the accuracy with which listeners could identify the intensity of pure tones. The log standard deviation of the normally distributed identification errors increases linearly with stimulus range in decibels, and the slope, a, of the regression is proportional to the loudness exponent, n. Interestingly, in a demonstration experiment, the loudness exponent estimated in this way was greater for females than for males.
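The estimation idea can be sketched as a simple linear regression of log standard deviation on stimulus range. The abstract states only that the exponent n is proportional to the slope a; the proportionality constant comes from the authors' derivation and is not reproduced here, so this sketch (with made-up data) recovers only the slope:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated identification experiment: for each stimulus range (dB),
# identification errors are ~normal, and log10(SD) grows linearly with
# range. The slope value (0.01) is made up for illustration.
ranges_db = np.array([10, 20, 30, 40, 50, 60])
true_slope, intercept = 0.01, 0.2
log_sd = intercept + true_slope * ranges_db + rng.normal(0, 0.01, ranges_db.size)

slope, _ = np.polyfit(ranges_db, log_sd, 1)   # fitted slope a
print(f"estimated slope a = {slope:.4f}  (n is proportional to a)")
```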


Subject(s)
Discrimination, Psychological/physiology , Loudness Perception/physiology , Sex Characteristics , Weights and Measures , Acoustic Stimulation/methods , Auditory Threshold/physiology , Female , Humans , Male , Regression Analysis