Results 1 - 20 of 36
1.
Ear Hear; 43(3): 862-873, 2022.
Article in English | MEDLINE | ID: mdl-34812791

ABSTRACT

OBJECTIVES: Variations in loudness are a fundamental component of the music listening experience. Cochlear implant (CI) processing, including amplitude compression, and a degraded auditory system may further degrade these loudness cues and decrease the enjoyment of music listening. This study aimed to identify optimal CI sound processor compression settings to improve music sound quality for CI users. DESIGN: Fourteen adult MED-EL CI recipients participated in the study (Experiment No. 1: n = 17 ears; Experiment No. 2: n = 11 ears). A software application using a modified comparison category rating (CCR) test method allowed participants to compare and rate the sound quality of various CI compression settings while listening to 25 real-world music clips. The two compression settings studied were (1) Maplaw, which governs audibility and compression of soft-level sounds, and (2) automatic gain control (AGC), which applies compression to loud sounds. For each experiment, one compression setting (Maplaw or AGC) was held at the default, while the other was varied according to the values available in the clinical CI programming software. Experiment No. 1 compared Maplaw settings of 500, 1000 (default), and 2000. Experiment No. 2 compared AGC settings of 2.5:1, 3:1 (default), and 3.5:1. RESULTS: In Experiment No. 1, the group preferred the higher Maplaw setting of 2000 over the default setting of 1000 (p = 0.003) for music listening. There was no significant difference in music sound quality between the Maplaw setting of 500 and the default setting (p = 0.278). In Experiment No. 2, a main effect of AGC setting was found; however, no significant difference in sound quality ratings was found for pairwise comparisons between the experimental settings and the default setting (2.5:1 versus 3:1 at p = 0.546; 3.5:1 versus 3:1 at p = 0.059). CONCLUSIONS: CI users reported improvements in music sound quality with higher than default Maplaw or AGC settings. Thus, participants preferred slightly higher compression for music listening, a result with clinical implications for improving music perception in CI users.
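The AGC ratios above describe how many decibels of input growth map onto one decibel of output growth above a compression knee. A minimal sketch of such a static input-output curve, with a hypothetical knee point (MED-EL's actual Maplaw/AGC implementations are proprietary and operate dynamically; this only illustrates the ratio arithmetic):

```python
import numpy as np

def compress_level(input_db, knee_db=60.0, ratio=3.0):
    """Generic static compression curve: linear up to the knee, then
    `ratio` dB of input growth per 1 dB of output growth above it.
    The 60-dB knee is an illustrative assumption, not a MED-EL value."""
    input_db = np.asarray(input_db, dtype=float)
    over = np.maximum(input_db - knee_db, 0.0)
    return input_db - over + over / ratio

levels = np.array([50.0, 60.0, 70.0, 80.0])
print(compress_level(levels, ratio=2.5))  # gentler compression of loud sounds
print(compress_level(levels, ratio=3.5))  # stronger compression of loud sounds
```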


Subject(s)
Cochlear Implantation, Cochlear Implants, Deafness, Music, Adult, Auditory Perception, Deafness/rehabilitation, Humans, Sound
2.
Ear Hear; 41(5): 1372-1382, 2020.
Article in English | MEDLINE | ID: mdl-32149924

ABSTRACT

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population prompted us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7 to 19 years, with no cognitive or visual impairments and who communicated orally with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with an exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, with higher scores for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for scared sentences. CONCLUSIONS: In general, participants showed higher vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity and thus more exaggerated prosody than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech.
The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
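Per-emotion d' values of the kind reported above are typically derived from each participant's confusion matrix, treating one emotion as the target class. A minimal sketch of that computation, using the common log-linear correction to avoid infinite z-scores (the study's exact correction procedure is not stated in the abstract):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a 2x2 confusion table for one target
    emotion. Adds 0.5 to each cell (log-linear correction) so perfect
    hit or false-alarm rates do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical "happy" trials: 20 hits, 4 misses; 6 false alarms on 76 non-happy trials
print(d_prime(20, 4, 6, 70))
```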


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Adolescent, Adult, Child, Emotions, Female, Humans, Male, Speech, Young Adult
3.
Ear Hear; 40(5): 1197-1209, 2019.
Article in English | MEDLINE | ID: mdl-30762600

ABSTRACT

OBJECTIVE: Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN: Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music), most of which were correctly identified behaviorally. RESULTS: Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged for music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this held in both populations as well. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal-hearing listeners but not in CI listeners. CONCLUSIONS: The early portion of the ERP waveform primarily reflected the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing in CI users was found in the later portion of the ERP and extended beyond the realm of speech.


Subject(s)
Deafness/rehabilitation, Emotions, Evoked Potentials, Auditory/physiology, Music, Speech Perception, Adolescent, Adult, Aged, Cochlear Implants, Deafness/physiopathology, Deafness/psychology, Electroencephalography, Evoked Potentials, Female, Humans, Male, Middle Aged, Perception, Voice, Young Adult
4.
J Acoust Soc Am; 145(2): 847, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30823786

ABSTRACT

In cocktail-party situations, listeners can use the fundamental frequency (F0) of a voice to segregate it from competitors, but other cues in speech could help, such as co-modulation of envelopes across frequency or more complex cues related to the semantic/syntactic content of the utterances. For simplicity, this (non-pitch) form of grouping is referred to as "articulatory." By creating a new type of speech with two steady F0s, the study examined how these two forms of segregation compete: articulatory grouping would bind the partials of a double-F0 source together, whereas harmonic segregation would tend to split them into two subsets. In experiment 1, maskers were two same-male sentences. Speech reception thresholds were high in this task (in the vicinity of 0 dB), and harmonic segregation behaved as though double-F0 stimuli were two independent sources. This was not the case in experiment 2, where maskers were speech-shaped complexes (buzzes). First, double-F0 targets were immune to the masking of a single-F0 buzz matching one of the two target F0s. Second, double-F0 buzzes were particularly effective at masking a single-F0 target matching one of the two buzz F0s. In conclusion, the strength of F0-segregation appears to depend on whether the masker is speech or not.
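A steady-F0 harmonic complex of the kind used as a buzz can be synthesized by summing a partial series; summing two series gives a crude double-F0 source. A sketch under assumed duration, sampling rate, and bandwidth (the study's actual stimuli were resynthesized speech and speech-shaped buzzes, not flat-spectrum complexes):

```python
import numpy as np

def buzz(f0s, dur=1.0, fs=16000, fmax=5000):
    """Steady harmonic complex built from one or more F0s by summing
    all partials of each series up to fmax. Parameters are illustrative."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for f0 in f0s:
        for h in range(1, int(fmax // f0) + 1):
            x += np.sin(2 * np.pi * h * f0 * t)
    return x / np.max(np.abs(x))  # peak-normalize

single = buzz([100])        # one F0
double = buzz([100, 126])   # two steady F0s, roughly 4 semitones apart
```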

5.
J Acoust Soc Am; 142(4): 1739, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092612

ABSTRACT

Musicians can sometimes achieve better speech recognition in noisy backgrounds than non-musicians, a phenomenon referred to as the "musician advantage effect." In addition, musicians are known to possess a finer sense of pitch than non-musicians. The present study examined the hypothesis that the latter fact could explain the former. Four experiments measured speech reception thresholds for a target voice against speech or non-speech maskers. Although differences in fundamental frequency (ΔF0s) were shown to be beneficial even when presented to opposite ears (experiment 1), the authors' attempt to maximize their use by directing the listener's attention to the target F0 led to unexpected impairments (experiment 2), and their attempt to hinder their use by generating uncertainty about the competing F0s led to practically negligible effects (experiments 3 and 4). The benefits drawn from ΔF0s showed surprisingly little malleability for a cue that can be used in the complete absence of energetic masking. In half of the experiments, musicians obtained better thresholds than non-musicians, particularly in speech-on-speech conditions, but they did not reliably obtain larger ΔF0 benefits. Thus, the data do not support the hypothesis that the musician advantage effect is based on a greater ability to exploit ΔF0s.


Subject(s)
Cues, Music, Occupations, Pitch Perception, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Adult, Attention, Female, Humans, Male, Noise/adverse effects, Perceptual Masking, Pitch Discrimination, Recognition (Psychology), Speech Reception Threshold Test, Young Adult
6.
J Acoust Soc Am; 136(5): 2726-36, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373972

ABSTRACT

When phase relationships between partials of a complex masker produce highly modulated temporal envelopes on the basilar membrane, listeners may detect speech information in temporal dips in the within-channel masker envelopes. This source of masking release (MR) is, however, located in regions of unresolved masker partials, and it is unclear how much of the speech information in these regions is really needed for intelligibility. Also, other sources of MR, such as glimpsing in between resolved masker partials, may provide sufficient information from regions that disregard phase relationships. This study simplified the problem of speech recognition to a masked detection task. Target bands of speech-shaped noise were restricted to frequency regions containing either only resolved or only unresolved masker partials, as a function of masker phase relationships (sine or random), masker fundamental frequency (F0) (50, 100, or 200 Hz), and masker spectral profile (flat-spectrum or speech-shaped). Although masker phase effects could be observed in unresolved regions at F0s of 50 and 100 Hz, it was only at the 50-Hz F0 that detection thresholds were ever lower in unresolved than in resolved regions, suggesting little role of envelope modulations for harmonic complexes with F0s in the human voice range and at moderate levels.
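The phase manipulation described above can be reproduced by building the same flat-spectrum harmonic complex with either all-sine or random starting phases: the former yields a highly modulated (peaky) within-channel envelope, the latter a flatter one. A sketch with assumed duration, sampling rate, and bandwidth:

```python
import numpy as np

def harmonic_complex(f0, phase="sine", dur=0.5, fs=16000, fmax=5000, seed=0):
    """Flat-spectrum harmonic complex with sine or random phase.
    Stimulus parameters are illustrative, not the study's exact values."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for h in range(1, int(fmax // f0) + 1):
        phi = 0.0 if phase == "sine" else rng.uniform(0, 2 * np.pi)
        x += np.sin(2 * np.pi * h * f0 * t + phi)
    return x / np.max(np.abs(x))

sine_masker = harmonic_complex(50, phase="sine")    # peaky temporal envelope
rand_masker = harmonic_complex(50, phase="random")  # flatter temporal envelope
```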


Subject(s)
Noise, Perceptual Masking/physiology, Phonetics, Speech Intelligibility, Adult, Differential Threshold, Humans, Models, Theoretical, Pattern Recognition, Physiological, Periodicity, Sound, Speech Acoustics, Young Adult
7.
J Acoust Soc Am; 136(3): 1225, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190396

ABSTRACT

Intelligibility of a target voice improves when its fundamental frequency (F0) differs from that of a masking voice, but it remains unclear how this masking release (MR) depends on the two relative F0s. Three experiments measured speech reception thresholds (SRTs) for a target voice against different maskers. Experiment 1 evaluated the influence of target F0 itself. SRTs against white noise were elevated by at least 2 dB for a monotonized target voice compared with the unprocessed voice, but SRTs differed little for F0s between 50 and 150 Hz. In experiments 2 and 3, a MR occurred when there was a steady difference in F0 between the target voice and a stationary speech-shaped harmonic complex or a babble. However, this MR was considerably larger when the F0 of the masker was 11 semitones above the target F0 than when it was 11 semitones below. In contrast, for a fixed masker F0, the MR was similar whether the target F0 was above or below. The dependency of these MRs on the masker F0 suggests that a spectral mechanism such as glimpsing in between resolved masker partials may account for an important part of this phenomenon.
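The F0 separations above are expressed in semitones, where a shift of n semitones multiplies a frequency by 2^(n/12). A worked example of the 11-semitone conditions, assuming a 100-Hz target F0 for illustration:

```python
def shift_semitones(f0_hz: float, semitones: float) -> float:
    """Shift a fundamental frequency by a signed number of semitones."""
    return f0_hz * 2 ** (semitones / 12)

target_f0 = 100.0  # assumed target F0, for illustration only
print(shift_semitones(target_f0, +11))  # masker ~188.8 Hz, above the target
print(shift_semitones(target_f0, -11))  # masker ~53.0 Hz, below the target
```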


Subject(s)
Noise/adverse effects, Perceptual Masking, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Auditory Threshold, Humans, Middle Aged, Speech Reception Threshold Test, Time Factors, Young Adult
8.
J Acoust Soc Am; 135(5): 2873-84, 2014 May.
Article in English | MEDLINE | ID: mdl-24815268

ABSTRACT

Speech recognition in a complex masker usually benefits from masker harmonicity, but there are several factors at work. The present study focused on two of them: glimpsing spectrally in between masker partials and periodicity within individual frequency channels. Using both a theoretical and an experimental approach, it is demonstrated that when inharmonic complexes are generated by jittering partials from their harmonic positions, there are better opportunities for spectral glimpsing in inharmonic than in harmonic maskers, and this difference is enhanced as fundamental frequency (F0) increases. As a result, measurements of the masking level difference between the two maskers can be reduced, particularly at higher F0s. Using inharmonic maskers that offer glimpsing opportunities similar to those of harmonic maskers, it was found that the masking level difference between the two maskers varied little with F0, was influenced by the periodicity of the first four partials, and could occur in low-, mid-, or high-frequency regions. Overall, the present results suggest that both spectral glimpsing and periodicity contribute to speech recognition under masking by harmonic complexes, and these effects seem independent of one another.
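Inharmonic maskers of this kind are commonly built by displacing each partial from its harmonic position by a random fraction of the F0 spacing. A sketch of one plausible jitter rule (the study's exact rule and constraints are not given in the abstract):

```python
import numpy as np

def jittered_partials(f0, n_partials=40, jitter=0.5, seed=0):
    """Frequencies of an inharmonic complex made by jittering each partial
    by up to +/- jitter * f0/2 around its harmonic position, which breaks
    harmonicity while preserving average spacing. Illustrative rule only."""
    rng = np.random.default_rng(seed)
    harmonic = f0 * np.arange(1, n_partials + 1)
    return harmonic + rng.uniform(-jitter * f0 / 2, jitter * f0 / 2, n_partials)

print(jittered_partials(100)[:5])  # first five jittered partial frequencies
```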


Subject(s)
Perceptual Masking/physiology, Speech Perception/physiology, Adult, Auditory Threshold, Humans, Middle Aged, Periodicity, Phonetics, Sound Spectrography, Speech Intelligibility, Young Adult
9.
Brain Commun; 6(3): fcae175, 2024.
Article in English | MEDLINE | ID: mdl-38846536

ABSTRACT

Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, it may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7 to 18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a weaker visual cortex response and less synchronized or less inhibitory activity of auditory association areas in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.

10.
J Acoust Soc Am; 134(5): EL465-70, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24181992

ABSTRACT

Speech reception thresholds were measured for a voice against two different maskers: either two concurrent voices with the same fundamental frequency (F0) or a harmonic complex with the same long-term excitation pattern and broadband temporal envelope as the masking sentences (speech-modulated buzz). All sources had steady F0s. A difference in F0 of 2 or 8 semitones provided a 5-dB benefit for buzz maskers, whereas it provided a 3- and 8-dB benefit, respectively, for masking sentences. Whether the intelligibility of a voice increases abruptly with small ΔF0s or gradually toward larger ΔF0s thus seems to depend on the nature of the masker.


Subject(s)
Noise/adverse effects, Pattern Recognition, Physiological, Perceptual Masking, Pitch Perception, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Auditory Threshold, Cues, Humans, Male, Recognition (Psychology), Speech Reception Threshold Test, Young Adult
11.
J Voice; 37(3): 466.e1-466.e15, 2023 May.
Article in English | MEDLINE | ID: mdl-33745802

ABSTRACT

OBJECTIVE: Using the voice to speak or to sing is made possible by remarkably complex sensorimotor processes. Like any other sensorimotor system, the speech motor controller guides its actions with maximum performance at minimum cost, using available sources of information, among which auditory feedback plays a major role. Manipulation of this feedback forces the speech monitoring system to refine its expectations for further actions. The present study hypothesizes that the duration of this refinement and the weight applied to different feedback loops depend on the intended sounds to be produced, namely reading aloud versus singing. MATERIAL AND METHODS: We asked participants to sing "Happy Birthday" and read a paragraph of Harry Potter before and after experiencing pitch-shifted feedback. A detailed fundamental frequency (F0) analysis was conducted for each note in the song and each segment in the paragraph (at the level of a sentence, a word, or a vowel) to determine whether some aspects of F0 production changed in response to the pitch perturbations experienced during the adaptation paradigm. RESULTS: Our results showed that the change in the degree of F0 drift across the song or the paragraph was the metric most consistent with a carry-over effect of adaptation, and in this regard, reading new material was more influenced by recent remapping than singing. CONCLUSION: The motor commands used by (normally hearing) speakers are malleable via altered-feedback paradigms, perhaps more so when reading aloud than when singing. But these effects are not revealed through simple indicators such as an overall change in mean F0 or F0 range; rather, they emerge through subtle metrics, such as a drift of the voice pitch across the recordings.
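F0 drift across a recording can be quantified as the slope of a line fit to F0, expressed in cents relative to the first measurement, over time. A minimal sketch of such a metric; the study's exact computation is not specified in the abstract:

```python
import numpy as np

def f0_drift(times_s, f0_hz):
    """F0 drift as the slope (cents per second) of a line fit to F0
    expressed in cents relative to the first measurement."""
    cents = 1200 * np.log2(np.asarray(f0_hz) / f0_hz[0])
    slope, _ = np.polyfit(times_s, cents, 1)
    return slope

t = np.linspace(0, 10, 50)       # 10 s of F0 estimates
f0 = 200 * 2 ** (0.002 * t)      # synthetic slow upward drift
print(f0_drift(t, f0))           # ~2.4 cents/s
```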


Subject(s)
Singing, Voice, Humans, Feedback, Voice/physiology, Speech/physiology, Feedback, Sensory/physiology, Pitch Perception/physiology
12.
Front Psychol; 14: 1046672, 2023.
Article in English | MEDLINE | ID: mdl-37205083

ABSTRACT

Introduction: A singer's or speaker's Fach (voice type) should be appraised based on the acoustic cues characterizing their voice. In practice, however, it is often influenced by the individual's physical appearance. This is especially distressing for transgender people, who may be excluded from formal singing because of a perceived mismatch between their voice and appearance. To eventually break down these visual biases, we need a better understanding of the conditions under which they occur. Specifically, we hypothesized that trans listeners (not actors) would be better able to resist such biases than cis listeners, precisely because they would be more aware of appearance-voice dissociations. Methods: In an online study, 85 cisgender and 81 transgender participants were presented with 18 different actors singing or speaking short sentences. These actors covered six voice categories from high/bright (traditionally feminine) to low/dark (traditionally masculine): namely soprano, mezzo-soprano (henceforth mezzo), contralto (henceforth alto), tenor, baritone, and bass. Every participant provided voice type ratings for (1) audio-only (A) stimuli, to obtain an unbiased estimate of a given actor's voice type; (2) video-only (V) stimuli, to estimate the strength of the bias itself; and (3) combined audio-visual (AV) stimuli, to see how much visual cues would affect the evaluation of the audio. Results: Results demonstrated that visual biases are not subtle and hold across the entire scale, shifting voice appraisal by about a third of the distance between adjacent voice types (for example, a third of the bass-to-baritone distance). This shift was 30% smaller for trans than for cis listeners, confirming our main hypothesis. This pattern was largely similar whether actors sang or spoke, though singing overall led to more feminine/high/bright ratings. Conclusion: This study is one of the first demonstrations that transgender listeners are in fact better judges of a singer's or speaker's voice type because they are better able to separate the actors' voices from their appearance, a finding that opens exciting avenues for fighting more generally against implicit (or sometimes explicit) biases in voice appraisal.

13.
Trends Hear; 27: 23312165231181757, 2023.
Article in English | MEDLINE | ID: mdl-37338981

ABSTRACT

Auditory memory is an important everyday skill evaluated more and more frequently in clinical settings, as there is growing recognition of the cost of hearing loss to cognitive systems. Testing often involves reading a list of unrelated items aloud, but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) for a novel protocol characterizing the effects of suprasegmental properties in speech, namely pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to eventually work with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (over a matter of one minute) compared with the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.


Subject(s)
Cochlear Implants, Hearing Loss, Speech Perception, Humans, Hearing, Cues, Hearing Loss/diagnosis
14.
Front Neurosci; 17: 1141886, 2023.
Article in English | MEDLINE | ID: mdl-37409105

ABSTRACT

Background: Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods: High-density electroencephalography (EEG) data were obtained under a resting-state condition from 75 children: 50 with CIs, having either good (HL) or poor (LL) language skills, and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity by computing time-frequency causality based on temporal partial directed coherence (TPDC), comparing the two CI groups to a cohort of age- and gender-matched NH children. Findings: Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta, and gamma) for the CI groups compared with the normal-hearing children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict language and reading scores with high accuracy. Interpretation: The increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled than in the NH group. Moreover, the different sources and their connectivity patterns, and their association with language and reading skills in both groups, suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences between the two groups of CI children may reflect potential biomarkers for predicting outcome success in CI children.
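The prediction step described above pairs connectivity-derived features with a support vector machine. A schematic sketch of that analysis style using scikit-learn with randomly generated placeholder features and scores (the study's actual feature set, kernel, and validation scheme are not given in the abstract):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder data: one row per child, columns standing in for source
# coherence and TPDC connectivity strengths pooled over alpha/beta/gamma.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))               # hypothetical features
y = rng.normal(loc=100, scale=15, size=50)  # hypothetical language scores

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
predicted = cross_val_predict(model, X, y, cv=5)  # out-of-sample predictions
# Correlation between observed and predicted scores (near zero here,
# since these features are random noise):
print(np.corrcoef(y, predicted)[0, 1])
```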

15.
Clin Neurophysiol; 149: 133-145, 2023 May.
Article in English | MEDLINE | ID: mdl-36965466

ABSTRACT

OBJECTIVE: Although children with cochlear implants (CIs) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS: High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS: While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs and age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS: Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE: This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
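MMN and P3 are conventionally measured on the deviant-minus-standard difference waveform. A minimal sketch of that extraction; the analysis windows used here are common conventions, not necessarily the study's:

```python
import numpy as np

def oddball_metrics(std_epochs, dev_epochs, times):
    """Difference wave (deviant - standard) with MMN and P3 amplitudes.

    std_epochs, dev_epochs: (n_trials, n_samples) baseline-corrected EEG
    epochs from one channel (e.g., Fz for MMN, Pz for P3), in microvolts.
    times: (n_samples,) vector in seconds. Windows are illustrative."""
    diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
    mmn_win = (times >= 0.10) & (times <= 0.25)
    p3_win = (times >= 0.25) & (times <= 0.60)
    mmn_amp = diff[mmn_win].min()  # MMN is a negativity
    p3_amp = diff[p3_win].max()    # P3 is a positivity
    return diff, mmn_amp, p3_amp
```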


Subject(s)
Cochlear Implantation, Cochlear Implants, Child, Humans, Acoustic Stimulation, Evoked Potentials, Auditory/physiology, Auditory Perception/physiology, Electroencephalography
16.
J Acoust Soc Am; 131(4): 2938-47, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22501071

ABSTRACT

Two experiments investigated the ability of 17 school-aged children to process purely temporal and spectro-temporal cues that signal changes in pitch. Percentage correct was measured for the discrimination of the sinusoidal amplitude modulation rate (AMR) of broadband noise in experiment 1 and for the discrimination of the fundamental frequency (F0) of broadband sine-phase harmonic complexes in experiment 2. The reference AMR was 100 Hz, as was the reference F0. A child-friendly interface helped listeners remain attentive to the task. Data were fitted using a maximum-likelihood technique that extracted threshold, slope, and lapse rate. All thresholds were subsequently standardized to a common d' value equal to 0.77. There were relatively large individual differences across listeners: eight had relatively adult-like thresholds in both tasks and nine had higher thresholds. However, these individual differences did not vary systematically with age over the span of 6 to 16 years. Thresholds were correlated across the two tasks and were about nine times finer for F0 discrimination than for AMR discrimination, as has previously been observed in adults.
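A maximum-likelihood psychometric fit of this kind maximizes the binomial likelihood of the percent-correct data over threshold, slope, and lapse parameters. A sketch assuming a two-alternative forced-choice design with a Weibull-shaped function (the study's exact task design and parameterization are not stated in the abstract); note that d' = 0.77 corresponds to roughly 70.7% correct in 2AFC:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(x, n_correct, n_trials):
    """ML fit of a 2AFC Weibull psychometric function with threshold,
    slope, and lapse rate (guess rate fixed at 0.5). Illustrative
    parameterization, not necessarily the study's."""
    x = np.asarray(x, dtype=float)

    def pc(params):  # predicted proportion correct
        thresh, slope, lapse = params
        return 0.5 + (0.5 - lapse) * (1 - np.exp(-(x / thresh) ** slope))

    def nll(params):  # negative binomial log-likelihood
        p = np.clip(pc(params), 1e-6, 1 - 1e-6)
        return -np.sum(n_correct * np.log(p)
                       + (n_trials - n_correct) * np.log(1 - p))

    res = minimize(nll, x0=[np.median(x), 2.0, 0.02],
                   bounds=[(1e-3, None), (0.1, 20.0), (0.0, 0.1)])
    return res.x  # threshold, slope, lapse

deltas = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # hypothetical Hz steps from 100 Hz
correct = np.array([28, 32, 41, 47, 49])         # hypothetical correct counts
print(fit_psychometric(deltas, correct, np.full(5, 50)))
print(np.sqrt(2) * norm.ppf(0.707))              # ~0.77, i.e., d' at 70.7% in 2AFC
```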


Subject(s)
Cues, Pitch Discrimination/physiology, Adolescent, Auditory Threshold, Child, Emotions/physiology, Humans, Noise, Perceptual Masking/physiology, Speech Acoustics
17.
PLoS One; 17(12): e0278506, 2022.
Article in English | MEDLINE | ID: mdl-36459511

ABSTRACT

There is increasing interest in the fields of audiology and speech communication in measuring the effort that it takes to listen in noisy environments, with obvious implications for populations suffering from hearing loss. Pupillometry offers one avenue to make progress in this enterprise, but important methodological questions remain to be addressed before such tools can serve practical applications. Typically, cocktail-party situations may occur in less-than-ideal lighting conditions, e.g., a pub or a restaurant, and it is unclear how robust pupil dynamics are to luminance changes. In this study, we first used a well-known paradigm in which sentences were presented at different signal-to-noise ratios (SNRs), all conducive to good intelligibility. This enabled us to replicate findings, e.g., a larger and later peak pupil dilation (PPD) at adverse SNRs, or when the sentences were misunderstood, and to investigate the dependency of the PPD on sentence duration. A second experiment reiterated two of the SNR levels, 0 and +14 dB, but measured at 0, 75, and 220 lux. The results showed that the impact of luminance on the SNR effect was non-monotonic (sub-optimal in darkness or in bright light); as such, there is no trivial way to derive pupillary metrics that are robust to differences in background light, posing considerable constraints for applications of pupillometry in daily life. Our findings raise an under-examined but crucial issue in designing and understanding listening effort studies using pupillometry, and offer important insights for future clinical applications of pupillometry across sites.
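The PPD is typically computed from a baseline-corrected pupil trace. A minimal sketch with assumed baseline and analysis windows (the study's exact windows are not given in the abstract):

```python
import numpy as np

def peak_pupil_dilation(pupil, times, baseline=(-0.5, 0.0), window=(0.0, 3.0)):
    """Baseline-corrected peak pupil dilation (PPD) and its latency.

    Subtracts the mean pupil size over a pre-stimulus baseline, then takes
    the maximum within the analysis window. Window boundaries in seconds
    are illustrative assumptions."""
    pupil, times = np.asarray(pupil), np.asarray(times)
    base = pupil[(times >= baseline[0]) & (times < baseline[1])].mean()
    win = (times >= window[0]) & (times <= window[1])
    corrected = pupil[win] - base
    i = np.argmax(corrected)
    return corrected[i], times[win][i]  # PPD, peak latency
```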


Subject(s)
Pupil, Speech, Cognition, Auditory Perception, Signal-To-Noise Ratio
18.
Trends Hear; 26: 23312165221120017, 2022.
Article in English | MEDLINE | ID: mdl-35983700

ABSTRACT

Cochlear implant (CI) users commonly report degraded musical sound quality. To improve CI-mediated music perception and enjoyment, we must understand the factors that affect sound quality. In the present study, we utilize frequency response manipulation (FRM), a process that adjusts the energies of frequency bands within an audio signal, to determine its impact on CI users' sound quality assessments of musical stimuli. Thirty-three adult CI users completed an online study and listened to FRM-altered clips derived from the top songs in Billboard magazine. Participants assessed sound quality using the MUltiple Stimulus with Hidden Reference and Anchor for CI users (CI-MUSHRA) rating scale. FRM affected sound quality ratings (SQRs). Specifically, increasing the gain for low and mid-range frequencies led to higher quality ratings than reducing it. In contrast, manipulating the gain for high frequencies (those above 2 kHz) had no impact. Participants with musical training were more sensitive to FRM than non-musically trained participants and demonstrated a preference for gain increases over reductions. These findings suggest that, even among CI users, past musical training leaves listeners sensitive to subtleties in musical appraisal, even though their hearing is now mediated electrically and bears little resemblance to their musical experience prior to implantation. Increased gain below 2 kHz may lead to higher sound quality than equivalent reductions, perhaps because it offers greater access to lyrics in songs or because it provides more salient beat sensations.
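At its simplest, FRM amounts to scaling the energy of a frequency band by a gain in dB. A crude sketch that does this by scaling FFT bins (a real stimulus pipeline would use proper filters; band edges and gains here are assumptions for illustration):

```python
import numpy as np

def apply_band_gain(x, fs, band, gain_db):
    """Scale one frequency band of a signal by gain_db (crude FRM).
    band is (low_hz, high_hz); scaling raw FFT bins is a simplification."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    X[mask] *= 10 ** (gain_db / 20)
    return np.fft.irfft(X, n=len(x))

fs = 16000
clip = np.random.default_rng(0).normal(size=fs)       # stand-in for a song clip
boosted = apply_band_gain(clip, fs, (0, 2000), +6)    # boost low/mid frequencies
reduced = apply_band_gain(clip, fs, (2000, 8000), -6) # cut frequencies above 2 kHz
```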


Subject(s)
Cochlear Implantation, Cochlear Implants, Music, Adult, Auditory Perception/physiology, Humans, Sound
19.
Front Neurosci; 16: 879583, 2022.
Article in English | MEDLINE | ID: mdl-35692416

ABSTRACT

Individuals with misophonia, a disorder involving extreme sound sensitivity, report significant anger, disgust, and anxiety in response to select but usually common sounds. While estimates of prevalence within certain populations, such as college students, have approached 20%, it is currently unknown what percentage of people experience misophonic responses to such "trigger" sounds. Furthermore, there is little understanding of the fundamental processes involved. In this study, we aimed to characterize the distribution of misophonic symptoms in a general population, as well as to clarify whether the aversive emotional responses to trigger sounds are partly caused by the acoustic salience of the sound itself or by recognition of the sound. Using multi-talker babble as masking noise to decrease participants' ability to identify sounds, we assessed how identification of common trigger sounds related to subjective emotional responses in 300 adults who participated in an online study. Participants were asked to listen to and identify neutral, unpleasant, and trigger sounds embedded in different levels of the masking noise (signal-to-noise ratios: -30, -20, -10, 0, +10 dB), and then to evaluate their subjective judgment of the sounds (pleasantness) and emotional reactions to them (anxiety, anger, and disgust). Using participants' scores on a scale quantifying misophonia sensitivity, we selected the top and bottom 20% of scorers from the distribution to form a Most-Misophonic subgroup (N = 66) and a Least-Misophonic subgroup (N = 68). Both groups were better at identifying trigger than unpleasant sounds, which themselves were identified better than neutral sounds. Both groups also recognized the aversiveness of the unpleasant and trigger sounds, yet for the Most-Misophonic group, there was a greater increase in subjective ratings of negative emotions once the sounds became identifiable, especially for trigger sounds. These results highlight the heightened salience of trigger sounds, and further suggest that learning and higher-order evaluation of sounds play an important role in misophonia.
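Embedding a sound in babble at a fixed SNR amounts to scaling the masker relative to the target's RMS level. A minimal sketch with noise stand-ins for the actual recordings:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Mix a target sound with masking noise at a given SNR in dB,
    defined as 20*log10(RMS(signal)/RMS(noise))."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise[:len(signal)] * (rms(signal) / rms(noise[:len(signal)]))
    noise *= 10 ** (-snr_db / 20)  # attenuate/boost noise to hit the SNR
    return signal + noise

fs = 16000
rng = np.random.default_rng(0)
trigger = rng.normal(size=fs)  # stand-in for a trigger-sound recording
babble = rng.normal(size=fs)   # stand-in for multi-talker babble
mixture = mix_at_snr(trigger, babble, snr_db=-10)  # one of the study's levels
```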

20.
Laryngoscope Investig Otolaryngol; 7(1): 250-258, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35155805

ABSTRACT

OBJECTIVES: To explore the effects of obligatory lexical tone learning on speech emotion recognition and the cross-cultural differences between the United States and Taiwan in speech emotion understanding in children with cochlear implants. METHODS: This cohort study enrolled 60 cochlear-implanted (cCI), Mandarin-speaking, school-aged children who underwent cochlear implantation before 5 years of age and 53 normal-hearing children (cNH) in Taiwan. Emotion recognition and sensitivity to fundamental frequency (F0) changes were examined in these school-aged cNH and cCI (6-17 years old) at a tertiary referral center. RESULTS: The mean emotion recognition score of the cNH group was significantly better than that of the cCI group. Vocal emotions produced by female speakers were more easily recognized than those produced by male speakers. There was a significant effect of age at test on vocal emotion recognition performance. The average score of cCI with full-spectrum speech was close to the average score of cNH with eight-channel narrowband vocoder speech. The average vocal emotion recognition performance across speakers for cCI could be predicted by their sensitivity to changes in F0. CONCLUSIONS: Better pitch discrimination ability comes with better vocal emotion recognition for Mandarin-speaking cCI. Beyond F0 cues, cCI are likely to adapt their vocal emotion recognition by relying more on secondary cues such as intensity and duration. Although cross-cultural differences exist in the acoustic features of vocal emotion, both Mandarin-speaking cCI and their English-speaking cCI peers showed a positive effect of age at test on emotion recognition, suggesting learning effects and brain plasticity. Therefore, further device/processor development to improve the presentation of pitch information and more rehabilitative efforts are needed to improve the transmission and perception of vocal emotion in Mandarin. LEVEL OF EVIDENCE: 3.
