ABSTRACT
BACKGROUND: The generally accepted method to assess the functionality of novel bone conduction implants at a preclinical stage is to experimentally measure the vibratory response of the cochlear promontory. Yet bone conduction of sound is a complex propagation phenomenon, depending on both frequency and amplitude and involving different conduction pathways. OBJECTIVES: The aim of this study is to validate the use of intracochlear sound pressure (ICP) as an objective indicator of perceived loudness for bone conduction stimulation. We investigated whether intracochlear sound pressure measured in cadaveric temporal bones correlates with clinically obtained results, using the outcome of a loudness balancing experiment. METHODS: Ten normal-hearing subjects were asked to balance the perceived loudness between air-conducted (AC) and bone-conducted (BC) sound by adjusting the AC stimulus. Mean balanced thresholds were calculated and used as stimulation levels in a cadaver trial (N = 4) in which intracochlear sound pressure was measured during AC and BC stimulation to assess the correlation with the clinical data. The intracochlear pressure was measured at the relatively low stimulation amplitude of 80 dB HL using a lock-in amplification technique. RESULTS: Applying AC and BC stimulation at equal perceived loudness to cadaveric heads yields a similar differential intracochlear pressure, with differences between AC and BC falling within the range of variability of the normal-hearing test subjects. CONCLUSION: Comparing the perceived loudness at 80 dB HL for both AC and BC validates intracochlear pressure as an objective indicator of the cochlear drive. The measurement setup is more time-intensive than measuring the vibratory response of the cochlear promontory, yet it provides direct information at the level of the cochlear scalae.
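The abstract above reports intracochlear pressure measured with a lock-in amplification technique. As a minimal Python sketch of that detection principle (the signal, sampling rate and amplitudes below are hypothetical, not the study's), the component at a known stimulus frequency can be recovered by mixing with quadrature references and averaging:

    import numpy as np

    def lock_in(signal, fs, f_ref):
        """Digital lock-in detection: estimate the amplitude and phase of the
        component of `signal` at the reference frequency f_ref (Hz), sampled
        at fs (Hz). Averaging over the whole record is the low-pass stage and
        assumes an (approximately) integer number of reference cycles."""
        t = np.arange(len(signal)) / fs
        i = signal * np.sin(2 * np.pi * f_ref * t)   # in-phase mixing
        q = signal * np.cos(2 * np.pi * f_ref * t)   # quadrature mixing
        amp = 2.0 * np.hypot(i.mean(), q.mean())
        phase = np.arctan2(q.mean(), i.mean())       # phase re. the sine reference
        return amp, phase

    # Toy example: recover a 1 kHz probe buried in broadband noise
    fs = 48000
    t = np.arange(10 * fs) / fs
    x = 0.05 * np.sin(2 * np.pi * 1000.0 * t) + np.random.randn(len(t))
    amp, phase = lock_in(x, fs, 1000.0)
    print(amp, phase)   # amp ~ 0.05, phase ~ 0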
Subject(s)
Bone Conduction, Sound, Humans, Bone Conduction/physiology, Acoustic Stimulation, Cochlea/physiology, Cadaver
ABSTRACT
The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured with spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults showing that, at higher ripple densities (2 and 4 ripples/oct), increasing the presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a degraded spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and saturation of neural activity. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
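For readers unfamiliar with the stimuli, a moving spectral ripple can be synthesized as a sum of log-spaced tones whose amplitudes follow a sinusoidal envelope in log-frequency that drifts over time. The Python sketch below is illustrative only; all parameter values are hypothetical and not those of the study:

    import numpy as np

    def ripple_stimulus(dur=0.5, fs=44100, f_lo=250.0, f_hi=8000.0,
                        ripple_density=2.0, ripple_rate=4.0, depth=1.0,
                        n_tones=200):
        """Spectrotemporal ripple: sum of log-spaced tones with a sinusoidal
        spectral envelope (ripple_density in ripples/octave) that drifts over
        time (ripple_rate in Hz). Parameter values are illustrative only."""
        t = np.arange(int(dur * fs)) / fs
        freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_tones) / (n_tones - 1))
        octaves = np.log2(freqs / f_lo)
        rng = np.random.default_rng(0)
        x = np.zeros_like(t)
        for f, o in zip(freqs, octaves):
            carrier_phase = rng.uniform(0, 2 * np.pi)        # random carrier phase
            env = 1.0 + depth * np.sin(2 * np.pi * (ripple_density * o
                                                    + ripple_rate * t))
            x += env * np.sin(2 * np.pi * f * t + carrier_phase)
        return x / np.max(np.abs(x))                          # normalize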
Subject(s)
Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Cochlear Nerve/physiology, Female, Humans, Male, Neurological Models, Psychological Models, Speech Acoustics, Speech Discrimination Tests/methods, Speech Discrimination Tests/statistics & numerical data, Young Adult
ABSTRACT
A language-independent automated self-test on tablet based on masked recognition of ecological sounds, the Sound Ear Check (SEC), was developed. In this test, 24 trials of eight different sounds are randomly presented in a noise that was spectrally shaped according to the average frequency spectra of the stimulus sounds, using a 1-up 2-down adaptive procedure. The test was evaluated in adults with normal hearing and hearing loss, and its feasibility was investigated in young children, who are the target population of this test. Following equalization of perceptual difficulty across sounds by applying level adjustments to the individual tokens, a reference curve with a steep slope of 18%/dB was obtained, resulting in a test with a high test-retest reliability of 1 dB. The SEC sound reception threshold was significantly associated with the averaged pure-tone threshold (r = .70), as well as with the speech reception threshold for the Digit Triplet Test (r = .79), indicating that the SEC is sensitive to both audibility loss and signal-to-noise ratio loss. Sensitivity and specificity values of approximately 70% and 80% were obtained for detecting individuals with mild and moderate hearing loss, respectively, and approximately 80% for detecting individuals with slight speech-in-noise recognition difficulties. Homogeneity among sounds was verified in children. Psychometric functions fitted to the data indicated a steep slope of 16%/dB, and test-retest reliability of sound reception threshold estimates was 1.3 dB. A reference value of -9 dB signal-to-noise ratio was obtained. Test duration was around 6 minutes, including training and acclimatization.
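The 1-up 2-down adaptive procedure mentioned above converges on the 70.7%-correct point of the psychometric function. A self-contained Python sketch of such a staircase is given below; the simulated listener with its SRT, slope and guessing rate is a hypothetical illustration, not the SEC implementation:

    import math
    import random

    def one_up_two_down(run_trial, start_snr=0.0, step_db=2.0, n_reversals=8):
        """1-up 2-down staircase: the SNR decreases after two consecutive correct
        responses and increases after every error, converging on ~70.7% correct.
        `run_trial(snr)` returns True for a correct response. The SRT is
        estimated as the mean of the last six reversal levels."""
        snr, streak, direction = start_snr, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if run_trial(snr):
                streak += 1
                if streak == 2:                 # two correct in a row -> harder
                    streak = 0
                    if direction == +1:
                        reversals.append(snr)   # turning point (up -> down)
                    direction = -1
                    snr -= step_db
            else:                               # one error -> easier
                streak = 0
                if direction == -1:
                    reversals.append(snr)       # turning point (down -> up)
                direction = +1
                snr += step_db
        return sum(reversals[-6:]) / len(reversals[-6:])

    # Toy closed-set listener with a "true" SRT of -9 dB SNR and an 18%/dB slope
    def listener(snr, srt=-9.0, slope=0.18, chance=0.125):
        p = 1.0 / (1.0 + math.exp(-4.0 * slope * (snr - srt)))
        p = chance + (1.0 - chance) * p         # 8-alternative guessing floor
        return random.random() < p

    print(one_up_two_down(listener, start_snr=0.0))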
Subject(s)
Hearing Loss/diagnosis, Hearing, Noise, Recognition (Psychology), Speech Reception Threshold Test/methods, Acoustic Stimulation/methods, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Reference Values, Reproducibility of Results, Sensitivity and Specificity, Signal-To-Noise Ratio, Speech Perception, Young Adult
ABSTRACT
The speech intelligibility benefit of visual speech cues during oral communication is well established. Therefore, an ecologically valid approach to auditory assessment should include the processing of both auditory and visual speech cues. This study describes the development and evaluation of a virtual human speaker designed to present speech auditory-visually. A male and a female virtual human speaker were created and evaluated in two experiments: a visual-only speech-reading test with words and sentences, and an auditory-visual speech intelligibility sentence test. A group of five hearing, skilled speech-reading adults participated in the speech-reading test, whereas a group of young normal-hearing participants (N = 35) was recruited for the intelligibility test. Skilled speech readers correctly identified 57 to 67% of the words and sentences uttered by the virtual speakers. The presence of the virtual speaker improved the speech intelligibility of sentences in noise by 1.5 to 2 dB. These results demonstrate the potential applicability of virtual humans in future auditory-visual speech assessment paradigms.
Subject(s)
Speech Acoustics, Speech Intelligibility, Speech Perception, Virtual Reality, Voice Quality, Acoustic Stimulation, Adult, Cues, Female, Humans, Male, Mouth/physiology, Movement, Photic Stimulation, Speech Reception Threshold Test, Visual Perception, Young Adult
ABSTRACT
Peripheral hearing impairment cannot fully account for the speech perception difficulties that emerge with advancing age. As the fluctuating speech envelope bears crucial information for speech perception, changes in temporal envelope processing are thought to contribute to degraded speech perception. Previous research has demonstrated changes in the neural encoding of envelope modulations throughout the adult lifespan, due either to age or to hearing impairment. To date, however, it remains unclear whether such age- and hearing-related neural changes are associated with impaired speech perception. In the present study, we investigated the potential relationship between perception of speech in different types of masking sounds and neural envelope encoding in a normal-hearing and a hearing-impaired adult population including young (20-30 years), middle-aged (50-60 years), and older (70-80 years) people. Our analyses show that enhanced neural envelope encoding is related to worse speech perception: in the cortex for normal-hearing adults and in the brainstem for hearing-impaired adults. This neural-behavioral correlation is found for all three age groups and appears to be independent of the type of masking noise, i.e., background noise or competing speech. These findings provide promising directions for future research aiming to develop advanced rehabilitation strategies for speech perception difficulties that emerge throughout adult life.
Subject(s)
Auditory Cortex/physiopathology, Brainstem/physiopathology, Auditory Evoked Potentials, Hearing Loss/psychology, Persons With Hearing Impairments/psychology, Speech Perception, Acoustic Stimulation, Adult, Age Factors, Aged, Aged 80 and over, Case-Control Studies, Electroencephalography, Brainstem Auditory Evoked Potentials, Female, Hearing, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Noise/adverse effects, Perceptual Masking, Time Factors, Young Adult
ABSTRACT
OBJECTIVES: The aim of this study is to derive a consensus on an interdisciplinary competency framework regarding a holistic approach for audiological rehabilitation (AR), which includes disciplines from medicine, engineering, social sciences and humanities. DESIGN: We employed a modified Delphi method. In the first round survey, experts were asked to rate an initial list of 28 generic interdisciplinary competencies and to propose specific knowledge areas for AR. In the second round, experts were asked to reconsider their answers in light of the group answers of the first round. STUDY SAMPLE: An international panel of 27 experts from different disciplines in AR completed the first round. Twenty-two of them completed the second round. RESULTS: We developed a competency framework consisting of 21 generic interdisciplinary competencies grouped in five domains and nine specific competencies (knowledge areas) in three clusters. Suggestions for the implementation of the generic competencies in interdisciplinary programmes were identified. CONCLUSIONS: This study reveals insights into the interdisciplinary competencies that are unique for AR. The framework will be useful for educators in developing interdisciplinary programmes as well as for professionals in considering their lifelong training needs in AR.
Subject(s)
Correction of Hearing Impairment/standards, Holistic Health/standards, Patient Care Team/standards, Professional Competence/standards, Consensus, Correction of Hearing Impairment/methods, Delphi Technique, Humans
ABSTRACT
OBJECTIVE: Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of the auditory maturation in infants and children with a CI. EASSRs are recorded in the electro-encephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. APPROACH: ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. MAIN RESULTS: For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. SIGNIFICANCE: We hypothesize that ICA is capable of separating CI artifacts and EASSR in case the contralateral hemisphere is EASSR dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
Subject(s)
Acoustic Stimulation/methods, Artifacts, Cochlear Implants, Electroencephalography/methods, Auditory Evoked Potentials/physiology, Principal Component Analysis/methods, Cochlear Implantation/methods, Cochlear Implantation/standards, Cochlear Implants/standards, Factual Databases, Electric Stimulation/methods, Humans
ABSTRACT
OBJECTIVE: Binaural processing can be measured objectively as a desynchronisation of phase-locked neural activity in response to changes in interaural phase differences (IPDs). This was reported in a magnetoencephalography study for 40 Hz amplitude-modulated tones. The goal of this study was to measure this desynchronisation using electroencephalography and to explore the outcomes for different modulation frequencies. DESIGN: Auditory steady-state responses (ASSRs) were recorded to pure tones, amplitude modulated at 20, 40 or 80 Hz. IPDs switched between 0 and 180° at fixed time intervals. STUDY SAMPLE: Sixteen young listeners with bilateral normal hearing thresholds (≤25 dB HL at 125-8000 Hz) participated in this study. RESULTS: Significant ASSR phase desynchronisations to IPD changes were detected in 14 out of 16 participants for the 40 Hz modulator, and in 8 and 9 out of 13 participants for the 20 and 80 Hz modulators, respectively. Desynchronisation and restoration of ASSR phase took place significantly faster for 80 Hz than for 40 and 20 Hz. CONCLUSIONS: ASSR desynchronisation to IPD changes was successfully recorded using electroencephalography. It was feasible for 20, 40 and 80 Hz modulators and could be an objective tool to assess the processing of changes in binaural information.
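One simple way to track such a phase desynchronisation offline is to cut the EEG into consecutive epochs and read the phase of the FFT bin at the modulation frequency; a loss of phase consistency across epochs after an IPD switch would then index the effect. The Python sketch below illustrates the idea only; the epoch length and the analysis itself are assumptions, not the study's pipeline:

    import numpy as np

    def assr_phase_per_epoch(eeg, fs, f_mod, epoch_s=1.0):
        """Cut a single-channel recording into consecutive epochs and return the
        phase and amplitude of the FFT bin at the modulation frequency f_mod.
        Assumes f_mod * epoch_s is an integer so the bin sits exactly on f_mod."""
        n = int(round(epoch_s * fs))
        n_epochs = len(eeg) // n
        epochs = eeg[:n_epochs * n].reshape(n_epochs, n)
        spectra = np.fft.rfft(epochs, axis=1)
        k = int(round(f_mod * epoch_s))                  # index of the f_mod bin
        return np.angle(spectra[:, k]), 2.0 * np.abs(spectra[:, k]) / n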
Subject(s)
Auditory Cortex/physiology, Cortical Synchronization, Cues, Hearing, Sound Localization, Acoustic Stimulation, Adolescent, Adult, Pure-Tone Audiometry, Auditory Threshold, Auditory Evoked Potentials, Female, Humans, Magnetoencephalography, Male, Time Factors, Young Adult
ABSTRACT
Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered across all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem and the left and right auditory cortex show a higher responsiveness to 40 Hz than to the other modulation frequencies.
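To illustrate the clustering step (not the dipole fitting itself), the sketch below groups hypothetical 3-D dipole coordinates pooled over subjects with a Gaussian mixture model via scikit-learn. The coordinates, cluster count and labels are invented for the example:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical dipole locations pooled over subjects: (n_dipoles, 3), in mm
    rng = np.random.default_rng(0)
    dipoles = np.vstack([rng.normal([-50.0, -25.0, 10.0], 6.0, (40, 3)),  # "left auditory cortex"
                         rng.normal([50.0, -25.0, 10.0], 6.0, (40, 3)),   # "right auditory cortex"
                         rng.normal([0.0, -30.0, -30.0], 6.0, (40, 3))])  # "brainstem"

    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=0).fit(dipoles)
    labels = gmm.predict(dipoles)   # hard cluster assignment (predict_proba gives soft memberships)
    print(gmm.means_)               # cluster centroids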
Subject(s)
Acoustic Stimulation, Auditory Pathways/physiology, Pitch Perception/physiology, Adult, Algorithms, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Cluster Analysis, Electroencephalography, Auditory Evoked Potentials, Female, Functional Laterality/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Normal Distribution, Young Adult
ABSTRACT
As people grow older, speech perception difficulties become highly prevalent, especially in noisy listening situations. Moreover, speech intelligibility is assumed to be more affected by background noises that induce a higher cognitive load, i.e., noises that cause informational rather than purely energetic masking. There is ample evidence that speech perception problems in aging persons are partly due to hearing impairment and partly due to age-related declines in cognition and suprathreshold auditory processing. In order to develop effective rehabilitation strategies, it is essential to know how these different degrading factors act upon speech perception. This implies disentangling the effects of hearing impairment from those of age and examining the interplay between both factors in background noises typical of everyday settings. To that end, we investigated open-set sentence identification in six participant groups: a young (20-30 years), a middle-aged (50-60 years), and an older cohort (70-80 years), each including persons with normal audiometric thresholds up to at least 4 kHz on the one hand, and persons diagnosed with elevated audiometric thresholds on the other. All participants were screened for (mild) cognitive impairment. We applied stationary and amplitude-modulated speech-weighted noise, which are two types of energetic maskers, and unintelligible speech, which causes informational masking in addition to energetic masking. By means of these different background noises, we could examine speech perception performance in listening situations with low and high cognitive load. Our results indicate that, even when audiometric thresholds are within normal limits up to 4 kHz, irrespective of threshold elevations at higher frequencies, and there is no indication of even mild cognitive impairment, masked speech perception declines by middle age and decreases further into older age. The impact of hearing impairment is as detrimental for young and middle-aged adults as it is for older adults. When the background noise becomes cognitively more demanding, the decline in speech perception due to age or hearing impairment is larger. Hearing impairment seems to be the main factor underlying speech perception problems in background noises that cause energetic masking. However, in the case of informational masking, which induces a higher cognitive load, age appears to explain a significant part of the communicative impairment as well. We suggest that the degrading effect of age is mediated by deficiencies in temporal processing and central executive functions. This study may contribute to the improvement of auditory rehabilitation programs aiming to prevent aging persons from missing out on conversations, which, in turn, will improve their quality of life.
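The speech-weighted (speech-shaped) noise maskers mentioned above share the long-term average spectrum of the target speech. A common way to generate such a masker, sketched below in Python, is to randomize the phases of the speech spectrum and optionally impose a low-rate amplitude modulation; this is a generic construction under stated assumptions, not the materials used in the study:

    import numpy as np

    def speech_shaped_noise(speech, fs, am_rate=None, am_depth=1.0):
        """Noise with the long-term average spectrum of `speech`, obtained by
        randomizing the phases of the speech spectrum; optionally amplitude
        modulated at am_rate (Hz). A generic sketch, not the study's maskers."""
        spectrum = np.fft.rfft(speech)
        random_phase = np.exp(1j * np.random.uniform(0.0, 2.0 * np.pi,
                                                     len(spectrum)))
        noise = np.fft.irfft(np.abs(spectrum) * random_phase, n=len(speech))
        if am_rate is not None:
            t = np.arange(len(noise)) / fs
            noise *= 1.0 + am_depth * np.sin(2.0 * np.pi * am_rate * t)
        # match the RMS level of the original speech
        return noise * np.sqrt(np.mean(speech ** 2) / np.mean(noise ** 2))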
Subject(s)
Aging/psychology, Hearing Disorders/psychology, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Age Factors, Aged, Aged 80 and over, Speech Audiometry, Auditory Threshold, Cognitive Aging, Cognitive Dysfunction/psychology, Female, Hearing, Hearing Disorders/diagnosis, Hearing Disorders/physiopathology, Humans, Male, Middle Aged, Quality of Life, Risk Factors, Young Adult
ABSTRACT
As a result of neonatal hearing screening and subsequent early cochlear implantation (CI), profoundly deaf children have access to important information to process auditory signals and to master spoken language skills at a young age. Nevertheless, auditory, linguistic and cognitive outcome measures still reveal great variability in individual achievements: some children with CI(s) perform within normal limits, while others lag behind. Understanding the causes of this variation would allow clinicians to offer better prognoses to CI candidates and efficient follow-up and rehabilitation. This paper summarizes what we can expect of normally developing children with CI(s) with regard to spoken language, bilateral and binaural auditory perception, speech perception and cognitive skills. Predictive factors of performance and factors influencing variability are presented, as well as some novel data on cognitive functioning and speech perception in quiet and in noise. Subsequently, we discuss technical and non-technical issues which should be considered in the future in order to optimally guide the child with profound hearing difficulties. This article is part of a Special Issue entitled
Subject(s)
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Cognition, Deafness/rehabilitation, Disabled Children/rehabilitation, Early Medical Intervention, Language Development, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Age Factors, Auditory Pathways/physiopathology, Preschool Child, Deafness/diagnosis, Deafness/physiopathology, Deafness/psychology, Disabled Children/psychology, Early Diagnosis, Electric Stimulation, Hearing Tests, Humans, Infant, Newborn Infant, Neonatal Screening/methods, Neuronal Plasticity, Persons With Hearing Impairments/psychology, Sound Localization, Speech Perception
ABSTRACT
OBJECTIVE: Recently, the digit triplet test was shown to be a sensitive speech-in-noise test for early high-frequency hearing loss in noise-exposed workers. This study investigates whether a further improvement is achieved when using a closed set of consonant-vowel-consonant (CVC) speech items with the same vowel, and/or a low-pass (LP) filtered version of the standard speech-shaped noise. DESIGN: Speech reception thresholds in noise were gathered for the digit triplet, CVC, and CVC_LP tests and compared to the high-frequency pure-tone average (PTA). STUDY SAMPLE: 118 noise-exposed workers with a wide range of high-frequency hearing losses. RESULTS: For the 84 Dutch-speaking participants, the CVC test showed an increased measurement error and a decreased between-subject variation, leading to a weaker correlation with the PTA2,3,4,6 (R = 0.64) and thus a lower sensitivity than the digit triplet test (R = 0.86). However, the use of LP-filtered noise resulted in a sensitivity improvement (R = 0.79 versus R = 0.64) due to the large increase in between-subject spread. Similar trends were found for the 34 French-speaking workers. CONCLUSIONS: Using CVC words with the same vowel did not increase the sensitivity to detect isolated high-frequency hearing loss. With LP-filtered noise, test sensitivity improved, but it did not surpass that of the original digit triplet test.
Subject(s)
Noise-Induced Hearing Loss/diagnosis, Noise/adverse effects, Occupational Diseases/diagnosis, Occupational Exposure/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Speech Perception, Speech Reception Threshold Test, Acoustic Stimulation, Adult, Female, Noise-Induced Hearing Loss/etiology, Noise-Induced Hearing Loss/psychology, Humans, Linear Models, Male, Middle Aged, Occupational Diseases/etiology, Occupational Diseases/psychology, Predictive Value of Tests, Psychoacoustics, Speech Acoustics, Young Adult
ABSTRACT
OBJECTIVE: In addition to the LIST with a female speaker (van Wieringen & Wouters, 2008), a new speech perception test with a male voice was developed and validated for evaluating the intelligibility performance of cochlear implant (CI) users or severely hearing-impaired persons. DESIGN: Three experimental steps were carried out: (1) a perceptual optimization of the recorded materials, (2) an evaluation in normal-hearing (NH) listeners, and (3) a validation in CI users. Measurements were performed both in quiet and in noise. STUDY SAMPLE: Forty-four NH subjects and six CI users participated. RESULTS: After selecting sentences with similar intelligibility, the reference psychometric curve for NH listeners was determined, showing steep slopes for measurements in quiet (12.3%/dB) and in noise (18.7%/dB), similar to the LIST with the female voice. The 38 lists of 10 sentences yielded equal scores, and the within-subject test-retest reliability was high (1.7 dB in quiet, 1.1 dB in noise). For the CI users, parallel psychometric curves were found between the LIST with the male and the female voice. CONCLUSIONS: The LIST-m is a reliable and valid speech intelligibility test that can be used for CI users, both in quiet and in noise.
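The reference curves and slopes reported above come from fitting a psychometric function to intelligibility scores as a function of presentation level or SNR. A minimal Python sketch of such a fit is shown below; the data points and starting values are hypothetical, not the LIST-m norms:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, srt, slope):
        """Psychometric function: proportion correct vs. level; `slope` is the
        steepness at the 50% point, expressed as a fraction per dB."""
        return 1.0 / (1.0 + np.exp(-4.0 * slope * (x - srt)))

    # Hypothetical sentence scores (proportion correct) at several SNRs
    snr = np.array([-12.0, -10.0, -8.0, -6.0, -4.0])
    score = np.array([0.05, 0.22, 0.55, 0.85, 0.97])

    (srt, slope), _ = curve_fit(logistic, snr, score, p0=[-8.0, 0.15])
    print(f"SRT = {srt:.1f} dB SNR, slope = {100 * slope:.1f} %/dB")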
Subject(s)
Acoustic Stimulation/methods, Speech Audiometry/methods, Persons With Hearing Impairments/psychology, Speech Intelligibility, Speech Perception, Adolescent, Adult, Aged, Auditory Threshold, Case-Control Studies, Cochlear Implantation/instrumentation, Cochlear Implants, Correction of Hearing Impairment/instrumentation, Female, Humans, Male, Middle Aged, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Predictive Value of Tests, Psychometrics, Reproducibility of Results, Sex Factors, Young Adult
ABSTRACT
A cochlear implant (CI) signal processing strategy named F0 modulation (F0mod) was compared with the advanced combination encoder (ACE) strategy in a group of four post-lingually deafened Mandarin Chinese speaking CI listeners. F0mod provides an enhanced temporal pitch cue by amplitude modulating the multichannel electrical stimulation pattern at the fundamental frequency (F0) of the incoming speech signal. Word and sentence recognition tests were carried out in quiet and in noise. The responses for the word-recognition test were further segmented into phoneme and tone scores. Off-line implementations of ACE and F0mod were used, and the electrical stimulation patterns were streamed directly to the CI subject's implant. To focus on the feasibility of enhanced temporal cues for tonal language perception, idealized F0 information extracted from the speech tokens in quiet was used in the F0mod processing of the speech-in-noise mixtures. The results indicated significantly better lexical tone perception with the F0mod strategy than with ACE for the male voice (p < 0.05). No significant differences in sentence recognition were found between F0mod and ACE.
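The core F0mod operation described above is to impose an F0-rate amplitude modulation on the filter-bank (channel) envelopes before they are mapped to pulse amplitudes. The Python sketch below shows only that principle; the frame rate, modulation depth and voiced/unvoiced handling are assumptions, not the published implementation:

    import numpy as np

    def f0mod_envelopes(envelopes, f0_track, frame_rate, depth=1.0):
        """Apply F0-rate amplitude modulation to CI filter-bank envelopes.
        envelopes:  (n_channels, n_frames) array of channel envelopes
        f0_track:   per-frame F0 estimate in Hz (0 for unvoiced frames)
        frame_rate: envelope frame rate in Hz
        Sketch of the principle only, not the published F0mod implementation."""
        phase = 2.0 * np.pi * np.cumsum(f0_track) / frame_rate    # running F0 phase
        modulator = np.where(f0_track > 0,
                             0.5 * (1.0 + depth * np.sin(phase)),  # voiced frames
                             1.0)                                  # unvoiced: unchanged
        return envelopes * modulator[np.newaxis, :]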
Subject(s)
Cochlear Implants, Language, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Aged, Algorithms, Asian People, Cochlear Implants/statistics & numerical data, Cues, Deafness/physiopathology, Deafness/psychology, Deafness/therapy, Female, Humans, Linguistics, Male, Middle Aged, Noise, Pitch Perception/physiology, Psychoacoustics, Computer-Assisted Signal Processing
ABSTRACT
OBJECTIVE: To investigate the extent to which temporal gaps, temporal fine structure, and comprehensibility of the masker affect masking strength in speech recognition experiments. DESIGN: Seven different masker types were evaluated with Dutch speech materials. Amongst these maskers were the ICRA-5 fluctuating noise, the international speech test signal (ISTS), and competing talkers in Dutch and Swedish. STUDY SAMPLE: Normal-hearing and hearing-impaired subjects. RESULTS: The normal-hearing subjects benefited from both temporal gaps and temporal fine structure in the fluctuating maskers. When the competing talker was comprehensible, performance decreased. The ISTS masker appeared to introduce a large informational masking component. The stationary maskers yielded the steepest slopes of the psychometric function, followed by the modulated noises, followed by the competing talkers. Although the hearing-impaired group was heterogeneous, their data showed similar tendencies, sometimes to a lesser extent, depending on the individual's hearing impairment. CONCLUSIONS: If measurement time is of primary concern, non-modulated maskers are advised. If it is useful to assess release of masking through temporal gaps, a fluctuating noise is advised. If perception of temporal fine structure is being investigated, a foreign-language competing talker is advised.
Subject(s)
Sensorineural Hearing Loss/psychology, Noise/adverse effects, Perceptual Masking, Recognition (Psychology), Speech Perception, Speech Reception Threshold Test, Acoustic Stimulation, Adolescent, Aged, Pure-Tone Audiometry, Auditory Threshold, Case-Control Studies, Cues, Female, Humans, Language, Male, Middle Aged, Psychoacoustics, Sound Spectrography, Speech Acoustics, Speech Intelligibility, Time Factors, Young Adult
ABSTRACT
This study describes the heritability of audiometric shape parameters and the familial aggregation of different types of presbycusis in a healthy, otologically screened population between 50 and 75 years old. In total, 342 siblings from 64 families (average family size: 5.3) were recruited through population registries. Audiometric shape was quantified mathematically by objective parameters developed to measure size, slope, concavity, the percentages of frequency-dependent and frequency-independent hearing loss, and Bulge Depth. The heritability of each parameter was calculated using a variance components model. Logistic regression models were used to estimate the odds ratios (ORs). Estimates of sibling recurrence risk ratios (λs) are also provided. Heritability estimates were generally higher compared to previous studies. ORs and λs for the parameters Total Hearing Loss (size), Uniform Hearing Loss (percentage of frequency-dependent hearing loss) and Bulge Depth suggest a higher heredity for severe types of presbycusis compared to moderate or mild types. Our results suggest that separating the parameter 'Total Hearing Loss' into the two parameters 'Uniform Hearing Loss' and 'Non-uniform Hearing Loss' could lead to the discovery of different genetic subtypes of presbycusis. The parameter 'Bulge Depth', rather than 'Concavity', appeared to be an important parameter for classifying subjects as 'susceptible' or 'resistant' to societal or intensive environmental exposure.
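As a reminder of what the sibling recurrence risk ratio expresses, λs is the probability that a sibling of an affected person is also affected, divided by the population prevalence. The toy Python sketch below computes it on invented data and only illustrates the statistic, not the study's estimator:

    def sibling_recurrence_risk_ratio(affected, families):
        """lambda_s = P(sibling affected | proband affected) / population prevalence.
        `affected` maps subject id -> bool; `families` maps family id -> member ids.
        Toy illustration of the statistic, not the study's estimator."""
        prevalence = sum(affected.values()) / len(affected)
        sib_pairs = sib_affected = 0
        for members in families.values():
            for proband in members:
                if not affected[proband]:
                    continue
                for sib in members:
                    if sib == proband:
                        continue
                    sib_pairs += 1
                    sib_affected += affected[sib]
        return (sib_affected / sib_pairs) / prevalence

    # Invented example data
    families = {"A": [1, 2, 3], "B": [4, 5], "C": [6, 7, 8]}
    affected = {1: True, 2: True, 3: False, 4: False, 5: False,
                6: True, 7: True, 8: True}
    print(sibling_recurrence_risk_ratio(affected, families))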
Subject(s)
Hearing/genetics, Presbycusis/genetics, Acoustic Stimulation, Age Factors, Aged, Audiometry, Auditory Threshold, Belgium, Bone Conduction/genetics, Female, Genetic Predisposition to Disease, Heredity, Humans, Logistic Models, Male, Middle Aged, Biological Models, Odds Ratio, Pedigree, Phenotype, Presbycusis/diagnosis, Presbycusis/physiopathology, Risk Assessment, Risk Factors, Severity of Illness Index, Sex Factors, Siblings
ABSTRACT
Two forward-masking experiments were conducted with six cochlear implant listeners to test whether asymmetric pulse shapes would improve the place-specificity of stimulation compared to symmetric ones. The maskers were either cathodic-first symmetric biphasic, pseudomonophasic (i.e., with a second anodic phase longer and lower in amplitude than the first phase), or "delayed pseudomonophasic" (identical to pseudomonophasic but with an inter-phase gap) stimuli. In experiment 1, forward-masking patterns for monopolar maskers were obtained by keeping each masker fixed on a middle electrode of the array and measuring the masked thresholds of a monopolar signal presented on several other electrodes. The results were very variable, and no difference between pulse shapes was found. In experiment 2, six maskers were used in a wide bipolar (bipolar+9) configuration: the same three pulse shapes as in experiment 1, either cathodic-first relative to the most apical or relative to the most basal electrode of the bipolar channel. The pseudomonophasic masker showed a stronger excitation proximal to the electrode of the bipolar pair for which the short, high-amplitude phase was anodic. However, no difference was obtained with the symmetric and, more surprisingly, with the delayed pseudomonophasic maskers. Implications for cochlear implant design are discussed.
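The pulse shapes compared above differ only in how the charge of the two phases is distributed over time. The Python sketch below constructs charge-balanced symmetric, pseudomonophasic and delayed pseudomonophasic waveforms; the phase duration, amplitude ratio and gap are illustrative values, not those used in the experiments:

    import numpy as np

    def pseudomonophasic_pulse(fs=1_000_000, amp=1.0, phase_us=43.0,
                               ratio=8, gap_us=0.0):
        """Charge-balanced cathodic-first pulse: a short high-amplitude cathodic
        phase followed (optionally after an inter-phase gap) by an anodic phase
        `ratio` times longer and `ratio` times lower in amplitude.
        Durations and amplitudes are illustrative, not the study's values."""
        n1 = int(round(phase_us * 1e-6 * fs))            # first-phase samples
        n_gap = int(round(gap_us * 1e-6 * fs))
        n2 = n1 * ratio                                  # second-phase samples
        pulse = np.concatenate([-amp * np.ones(n1),      # cathodic phase
                                np.zeros(n_gap),         # inter-phase gap
                                (amp / ratio) * np.ones(n2)])  # long, low anodic
        assert abs(pulse.sum()) < 1e-9 * amp * n1        # net charge ~ 0
        return pulse

    biphasic   = pseudomonophasic_pulse(ratio=1)         # symmetric biphasic
    pm         = pseudomonophasic_pulse(ratio=8)         # pseudomonophasic
    delayed_pm = pseudomonophasic_pulse(ratio=8, gap_us=100.0)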
Subject(s)
Acoustics, Auditory Perception, Cochlear Implants, Perceptual Masking, Acoustic Stimulation, Adult, Aged, Analysis of Variance, Auditory Threshold, Deafness/physiopathology, Deafness/therapy, Humans, Middle Aged, Neurons/physiology, Psychoacoustics, Computer-Assisted Signal Processing, Time Factors
ABSTRACT
A new signal processing algorithm for improved pitch perception in cochlear implants is proposed. The algorithm realizes fundamental frequency (F0) coding by explicitly modulating the amplitude of the electrical stimulus at F0. The proposed processing scheme is compared with the standard advanced combination encoder strategy in psychophysical tasks related to music perception. Filter-bank and loudness cue differences between the strategies under study were minimized so that the comparison focused predominantly on temporal processing. The results demonstrate significant benefits of the new coding strategy for pitch ranking, melodic contour identification, and familiar melody identification.
Subject(s)
Algorithms, Cochlear Implants, Pitch Perception, Prosthesis Design, Acoustic Stimulation, Analysis of Variance, Auditory Perception, Electric Stimulation, Humans, Music, Psychoacoustics, Recognition (Psychology), Task Performance and Analysis, Time Factors
ABSTRACT
APEX 3 is a software test platform for auditory behavioral experiments. It provides a generic means of setting up experiments without any programming. The supported output devices include sound cards and cochlear implants from Cochlear Corporation and Advanced Bionics Corporation. Many psychophysical procedures are provided and there is an interface to add custom procedures. Plug-in interfaces are provided for data filters and external controllers. APEX 3 is supported under Linux and Windows and is available free of charge.
Subject(s)
Acoustic Stimulation/methods, Audiometry/methods, Cochlear Implants/trends, Computers/trends, Psychophysics/methods, Software/trends, Acoustic Stimulation/instrumentation, Acoustics/instrumentation, Algorithms, Audiometry/instrumentation, Auditory Perception/physiology, Humans, Neurophysiology/instrumentation, Neurophysiology/methods, Psychophysics/instrumentation, User-Computer Interface
ABSTRACT
The general magnocellular theory postulates that dyslexia is the consequence of a multimodal deficit in the processing of transient and dynamic stimuli. In the auditory modality, this deficit has been hypothesized to interfere with accurate speech perception, and subsequently to disrupt the development of phonological and, later, reading and spelling skills. In the visual modality, an analogous problem might interfere with literacy development by affecting orthographic skills. In this prospective longitudinal study, we tested dynamic auditory and visual processing, speech-in-noise perception, phonological ability and orthographic ability in 62 five-year-old preschool children. Predictive relations with first-grade reading and spelling measures were explored, and the validity of the global magnocellular model was evaluated using causal path analysis. In particular, we demonstrated that dynamic auditory processing was related to speech perception, which itself was related to phonological awareness. Similarly, dynamic visual processing was related to orthographic ability. Subsequently, phonological awareness, orthographic ability and verbal short-term memory were unique predictors of reading and spelling development.