Results 1 - 20 of 45
1.
Neurobiol Learn Mem ; 207: 107869, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38042330

ABSTRACT

The orbitofrontal cortex (OFC) is often proposed to function as a value integrator; however, alternative accounts focus on its role in representing associative structures that specify the probability and sensory identity of future outcomes. These two accounts make different predictions about how this area should respond to conditioned inhibitors of reward, since in the former, neural activity should reflect the negative value of the inhibitor, whereas in the latter, it should track the estimated probability of a future reward based on all cues present. Here, we assessed these predictions by recording from small groups of neurons in the lateral OFC of rats during training in a conditioned inhibition design. Rats showed negative summation when the inhibitor was compounded with a novel excitor, suggesting that they learned to respond to the conditioned inhibitor appropriately. Against this backdrop, we found unit and population responses that scaled with expected reward value on excitor + inhibitor compound trials. However, the responses of these neurons did not differentiate between the conditioned inhibitor and a neutral cue when both were presented in isolation. Further, when the ensemble patterns were analyzed, activity to the conditioned inhibitor did not classify according to putative negative value. Instead, it classified with a same-modality neutral cue when presented alone and as a unique item when presented in compound with a novel excitor. This pattern of results supports the notion that OFC encodes a model of the causal structure of the environment rather than either the modality or the value of cues.


Subjects
Classical Conditioning , Neurons , Rats , Animals , Neurons/physiology , Classical Conditioning/physiology , Prefrontal Cortex/physiology , Learning , Reward , Cues (Psychology)
2.
Ear Hear ; 43(3): 862-873, 2022.
Article in English | MEDLINE | ID: mdl-34812791

ABSTRACT

OBJECTIVES: Variations in loudness are a fundamental component of the music listening experience. Cochlear implant (CI) processing, including amplitude compression, and a degraded auditory system may further degrade these loudness cues and decrease the enjoyment of music listening. This study aimed to identify optimal CI sound processor compression settings to improve music sound quality for CI users. DESIGN: Fourteen adult MED-EL CI recipients participated in the study (Experiment No. 1: n = 17 ears; Experiment No. 2: n = 11 ears). A software application using a modified comparison category rating (CCR) test method allowed participants to compare and rate the sound quality of various CI compression settings while listening to 25 real-world music clips. The two compression settings studied were (1) Maplaw, which governs audibility and compression of soft sounds, and (2) automatic gain control (AGC), which applies compression to loud sounds. For each experiment, one compression setting (Maplaw or AGC) was held at the default while the other was varied according to the values available in the clinical CI programming software. Experiment No. 1 compared Maplaw settings of 500, 1000 (default), and 2000. Experiment No. 2 compared AGC settings of 2.5:1, 3:1 (default), and 3.5:1. RESULTS: In Experiment No. 1, the group preferred the higher Maplaw setting of 2000 over the default setting of 1000 (p = 0.003) for music listening. There was no significant difference in music sound quality between the Maplaw setting of 500 and the default (p = 0.278). In Experiment No. 2, a main effect of AGC setting was found; however, no significant differences in sound quality ratings were found in pairwise comparisons between the experimental settings and the default (2.5:1 versus 3:1, p = 0.546; 3.5:1 versus 3:1, p = 0.059). CONCLUSIONS: CI users reported improvements in music sound quality with higher than default Maplaw or AGC settings. Thus, participants preferred slightly higher compression for music listening, with results having clinical implications for improving music perception in CI users.
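The AGC ratios compared here (2.5:1, 3:1, 3.5:1) describe how many decibels of input-level growth map onto one decibel of output growth above a compression knee point. A minimal sketch of such a static compression curve, purely for illustration (this is not the actual MED-EL AGC implementation, and the 60-dB knee point is a hypothetical value):

```python
def compressed_output_db(input_db, knee_db=60.0, ratio=3.0):
    """Static input/output curve for a simple compressor.

    Below the knee point, gain is linear (1:1); above it, every
    `ratio` dB of input growth yields 1 dB of output growth.
    Illustration only -- not the actual MED-EL AGC processing.
    """
    if input_db <= knee_db:
        return input_db
    return knee_db + (input_db - knee_db) / ratio

# A 75-dB input under the three AGC ratios tested (hypothetical 60-dB knee):
for ratio in (2.5, 3.0, 3.5):
    print(ratio, round(compressed_output_db(75.0, ratio=ratio), 2))
    # -> 2.5 66.0 / 3.0 65.0 / 3.5 64.29
```

Under this sketch, a steeper ratio compresses the same 15-dB supra-knee input range into a narrower output range, which is the loudness trade-off the experiment probes.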


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Music , Adult , Auditory Perception , Deafness/rehabilitation , Humans , Sound
3.
Ear Hear ; 41(5): 1372-1382, 2020.
Article in English | MEDLINE | ID: mdl-32149924

ABSTRACT

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population incited us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7-19 years old, with no cognitive or visual impairments and who communicated through oral communication with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. 
Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, with higher scores for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS: In general, participants had higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity and thus more exaggerated prosody, than in the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly for adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
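The d' sensitivity scores reported in the confusion matrix analyses come from standard signal-detection theory: d' is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the rates below are made-up examples, not data from this study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Standard signal-detection formula; rates must lie strictly
    between 0 and 1 (apply a correction to rates of 0 or 1 first).
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g. 90% hits on "happy" trials vs. 20% false alarms (illustrative values):
print(round(d_prime(0.90, 0.20), 2))  # prints 2.12
```

Higher d' means the emotion was reliably distinguished from the alternatives; d' = 0 corresponds to chance-level discrimination.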


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Adolescent , Adult , Child , Emotions , Female , Humans , Male , Speech , Young Adult
4.
Ear Hear ; 40(5): 1197-1209, 2019.
Article in English | MEDLINE | ID: mdl-30762600

ABSTRACT

OBJECTIVE: Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN: Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally. RESULTS: Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal hearing but not in CI listeners. CONCLUSIONS: The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing (by CI users) could be found in the later portion of the ERP and extended beyond the realm of speech.


Subjects
Deafness/rehabilitation , Emotions , Auditory Evoked Potentials/physiology , Music , Speech Perception , Adolescent , Adult , Aged , Cochlear Implants , Deafness/physiopathology , Deafness/psychology , Electroencephalography , Evoked Potentials , Female , Humans , Male , Middle Aged , Perception , Voice , Young Adult
5.
J Acoust Soc Am ; 145(2): 847, 2019 02.
Article in English | MEDLINE | ID: mdl-30823786

ABSTRACT

In cocktail-party situations, listeners can use the fundamental frequency (F0) of a voice to segregate it from competitors, but other cues in speech could also help, such as co-modulation of envelopes across frequency or more complex cues related to the semantic/syntactic content of the utterances. For simplicity, this (non-pitch) form of grouping is referred to as "articulatory." By creating a new type of speech with two steady F0s, we examined how these two forms of segregation compete: articulatory grouping would bind the partials of a double-F0 source together, whereas harmonic segregation would tend to split them into two subsets. In experiment 1, maskers were two same-male sentences. Speech reception thresholds were high in this task (in the vicinity of 0 dB), and harmonic segregation behaved as though double-F0 stimuli were two independent sources. This was not the case in experiment 2, where maskers were speech-shaped complexes (buzzes). First, double-F0 targets were immune to the masking of a single-F0 buzz matching one of the two target F0s. Second, double-F0 buzzes were particularly effective at masking a single-F0 target matching one of the two buzz F0s. In conclusion, the strength of F0 segregation appears to depend on whether the masker is speech or not.

6.
J Acoust Soc Am ; 142(4): 1739, 2017 10.
Article in English | MEDLINE | ID: mdl-29092612

ABSTRACT

Musicians can sometimes achieve better speech recognition in noisy backgrounds than non-musicians, a phenomenon referred to as the "musician advantage effect." In addition, musicians are known to possess a finer sense of pitch than non-musicians. The present study examined the hypothesis that the latter fact could explain the former. Four experiments measured speech reception threshold for a target voice against speech or non-speech maskers. Although differences in fundamental frequency (ΔF0s) were shown to be beneficial even when presented to opposite ears (experiment 1), the authors' attempt to maximize their use by directing the listener's attention to the target F0 led to unexpected impairments (experiment 2) and the authors' attempt to hinder their use by generating uncertainty about the competing F0s led to practically negligible effects (experiments 3 and 4). The benefits drawn from ΔF0s showed surprisingly little malleability for a cue that can be used in the complete absence of energetic masking. In half of the experiments, musicians obtained better thresholds than non-musicians, particularly in speech-on-speech conditions, but they did not reliably obtain larger ΔF0 benefits. Thus, the data do not support the hypothesis that the musician advantage effect is based on greater ability to exploit ΔF0s.


Subjects
Cues (Psychology) , Music , Occupations , Pitch Perception , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Adult , Attention , Female , Humans , Male , Noise/adverse effects , Perceptual Masking , Pitch Discrimination , Recognition (Psychology) , Speech Reception Threshold Test , Young Adult
7.
J Acoust Soc Am ; 136(5): 2726-36, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25373972

ABSTRACT

When phase relationships between partials of a complex masker produce highly modulated temporal envelopes on the basilar membrane, listeners may detect speech information from temporal dips in the within-channel masker envelopes. This source of masking release (MR) is, however, located in regions of unresolved masker partials, and it is unclear how much of the speech information in these regions is really needed for intelligibility. Also, other sources of MR, such as glimpsing in between resolved masker partials, may provide sufficient information from regions that disregard phase relationships. This study simplified the problem of speech recognition to a masked detection task. Target bands of speech-shaped noise were restricted to frequency regions containing either only resolved or only unresolved masker partials, as a function of masker phase relationships (sine or random), masker fundamental frequency (F0) (50, 100, or 200 Hz), and masker spectral profile (flat-spectrum or speech-shaped). Although masker phase effects could be observed in unresolved regions at F0s of 50 and 100 Hz, it was only at the 50-Hz F0 that detection thresholds were ever lower in unresolved than in resolved regions, suggesting little role for envelope modulations with harmonic complexes whose F0s lie in the human voice range, at moderate levels.


Subjects
Noise , Perceptual Masking/physiology , Phonetics , Speech Intelligibility , Adult , Differential Threshold , Humans , Theoretical Models , Physiological Pattern Recognition , Periodicity , Sound , Speech Acoustics , Young Adult
8.
J Acoust Soc Am ; 136(3): 1225, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190396

ABSTRACT

Intelligibility of a target voice improves when its fundamental frequency (F0) differs from that of a masking voice, but it remains unclear how this masking release (MR) depends on the two relative F0s. Three experiments measured speech reception thresholds (SRTs) for a target voice against different maskers. Experiment 1 evaluated the influence of target F0 itself. SRTs against white noise were elevated by at least 2 dB for a monotonized target voice compared with the unprocessed voice, but SRTs differed little for F0s between 50 and 150 Hz. In experiments 2 and 3, a MR occurred when there was a steady difference in F0 between the target voice and a stationary speech-shaped harmonic complex or a babble. However, this MR was considerably larger when the F0 of the masker was 11 semitones above the target F0 than when it was 11 semitones below. In contrast, for a fixed masker F0, the MR was similar whether the target F0 was above or below. The dependency of these MRs on the masker F0 suggests that a spectral mechanism such as glimpsing in between resolved masker partials may account for an important part of this phenomenon.
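The 11-semitone F0 separations used here translate to frequency ratios via the equal-tempered relation f = F0 · 2^(n/12). A small sketch of that conversion (the 100-Hz target F0 is an illustrative value, not a stimulus parameter taken from the study):

```python
def shift_semitones(f0_hz, semitones):
    """Frequency obtained by shifting f0 by a number of semitones
    (12 semitones = one octave, i.e. a factor of 2)."""
    return f0_hz * 2 ** (semitones / 12)

# Masker F0s 11 semitones above and below a 100-Hz target voice:
print(round(shift_semitones(100, 11), 1))   # ~188.8 Hz
print(round(shift_semitones(100, -11), 1))  # ~53.0 Hz
```

The asymmetry reported in the abstract means the MR was considerably larger with the masker near 188.8 Hz (above the target) than near 53.0 Hz (below it), in this illustrative case.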


Subjects
Noise/adverse effects , Perceptual Masking , Speech Acoustics , Speech Intelligibility , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Auditory Threshold , Humans , Middle Aged , Speech Reception Threshold Test , Time Factors , Young Adult
9.
J Acoust Soc Am ; 135(5): 2873-84, 2014 May.
Article in English | MEDLINE | ID: mdl-24815268

ABSTRACT

Speech recognition in a complex masker usually benefits from masker harmonicity, but there are several factors at work. The present study focused on two of them: glimpsing spectrally in between masker partials, and periodicity within individual frequency channels. Using both a theoretical and an experimental approach, it is demonstrated that when inharmonic complexes are generated by jittering partials away from their harmonic positions, there are better opportunities for spectral glimpsing in inharmonic than in harmonic maskers, and this difference grows as fundamental frequency (F0) increases. As a result, measurements of the masking level difference between the two maskers can be reduced, particularly at higher F0s. Using inharmonic maskers that offer glimpsing opportunities similar to those of harmonic maskers, it was found that the masking level difference between the two maskers varied little with F0, was influenced by the periodicity of the first four partials, and could occur in low-, mid-, or high-frequency regions. Overall, the present results suggest that both spectral glimpsing and periodicity contribute to speech recognition under masking by harmonic complexes, and these effects seem independent of one another.


Subjects
Perceptual Masking/physiology , Speech Perception/physiology , Adult , Auditory Threshold , Humans , Middle Aged , Periodicity , Phonetics , Sound Spectrography , Speech Intelligibility , Young Adult
10.
Percept Mot Skills ; 131(1): 74-105, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977135

ABSTRACT

Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response related to their language development. Participants were 75 school-aged children, including 50 with CIs (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that either vibrated or did not vibrate with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CIs who had good language skills compared to those with CIs who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS for examining cognitive functions related to language in children with CIs.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Child , Humans , Near-Infrared Spectroscopy/methods , Cochlear Implantation/methods , Deafness/surgery , Hemoglobins
11.
Brain Commun ; 6(3): fcae175, 2024.
Article in English | MEDLINE | ID: mdl-38846536

ABSTRACT

Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, it may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7-18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a 'weaker visual cortex response' and 'less synchronized or less inhibitory activity of auditory association areas' in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.

12.
J Acoust Soc Am ; 134(5): EL465-70, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24181992

ABSTRACT

Speech reception thresholds were measured for a voice against two different maskers: Either two concurrent voices with the same fundamental frequency (F0) or a harmonic complex with the same long-term excitation pattern and broadband temporal envelope as the masking sentences (speech-modulated buzz). All sources had steady F0s. A difference in F0 of 2 or 8 semitones provided a 5-dB benefit for buzz maskers, whereas it provided a 3- and 8-dB benefit, respectively, for masking sentences. Whether intelligibility of a voice increases abruptly with small ΔF0s or gradually toward larger ΔF0s seems to depend on the nature of the masker.


Subjects
Noise/adverse effects , Physiological Pattern Recognition , Perceptual Masking , Pitch Perception , Speech Acoustics , Speech Intelligibility , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Auditory Threshold , Cues (Psychology) , Humans , Male , Recognition (Psychology) , Speech Reception Threshold Test , Young Adult
13.
J Voice ; 37(3): 466.e1-466.e15, 2023 May.
Article in English | MEDLINE | ID: mdl-33745802

ABSTRACT

OBJECTIVE: Using the voice to speak or to sing is made possible by remarkably complex sensorimotor processes. Like any other sensorimotor system, the speech motor controller guides its actions for maximum performance at minimum cost, using available sources of information, among which auditory feedback plays a major role. Manipulation of this feedback forces the speech monitoring system to refine its expectations for further actions. The present study hypothesizes that the duration of this refinement and the weight applied to different feedback loops depend on the intended sounds to be produced, namely reading aloud versus singing. MATERIAL AND METHODS: We asked participants to sing "Happy Birthday" and read a paragraph of Harry Potter before and after experiencing pitch-shifted feedback. A detailed fundamental frequency (F0) analysis was conducted for each note in the song and each segment in the paragraph (at the level of a sentence, a word, or a vowel) to determine whether some aspects of F0 production changed in response to the pitch perturbations experienced during the adaptation paradigm. RESULTS: Our results showed that the change in the degree of F0 drift across the song or the paragraph was the metric most consistent with a carry-over effect of adaptation, and in this regard, reading new material was more influenced by recent remapping than singing. CONCLUSION: The motor commands used by (normally hearing) speakers are malleable via altered-feedback paradigms, perhaps more so when reading aloud than when singing. However, these effects are not revealed through simple indicators such as an overall change in mean F0 or F0 range, but rather through subtler metrics, such as a drift of the voice pitch across the recordings.


Subjects
Singing , Voice , Humans , Feedback , Voice/physiology , Speech/physiology , Sensory Feedback/physiology , Pitch Perception/physiology
14.
Front Psychol ; 14: 1046672, 2023.
Article in English | MEDLINE | ID: mdl-37205083

ABSTRACT

Introduction: A singer's or speaker's Fach (voice type) should be appraised based on acoustic cues characterizing their voice. In practice, however, it is often influenced by the individual's physical appearance. This is especially distressing for transgender people, who may be excluded from formal singing because of a perceived mismatch between their voice and appearance. To eventually break down these visual biases, we need a better understanding of the conditions under which they occur. Specifically, we hypothesized that trans listeners (not actors) would be better able to resist such biases, relative to cis listeners, precisely because they would be more aware of appearance-voice dissociations. Methods: In an online study, 85 cisgender and 81 transgender participants were presented with 18 different actors singing or speaking short sentences. These actors covered six voice categories from high/bright (traditionally feminine) to low/dark (traditionally masculine) voices: namely soprano, mezzo-soprano (henceforth mezzo), contralto (henceforth alto), tenor, baritone, and bass. Every participant provided voice type ratings for (1) Audio-only (A) stimuli, to get an unbiased estimate of a given actor's voice type; (2) Video-only (V) stimuli, to get an estimate of the strength of the bias itself; and (3) combined Audio-Visual (AV) stimuli, to see how much visual cues would affect the evaluation of the audio. Results: Results demonstrated that visual biases are not subtle and hold across the entire scale, shifting voice appraisal by about a third of the distance between adjacent voice types (for example, a third of the bass-to-baritone distance). This shift was 30% smaller for trans than for cis listeners, confirming our main hypothesis. This pattern was largely similar whether actors sang or spoke, though singing overall led to more feminine/high/bright ratings.
Conclusion: This study is one of the first demonstrations that transgender listeners are in fact better judges of a singer's or speaker's voice type because they are better able to separate the actors' voice from their appearance, a finding that opens exciting avenues to fight more generally against implicit (or sometimes explicit) biases in voice appraisal.

15.
Trends Hear ; 27: 23312165231181757, 2023.
Article in English | MEDLINE | ID: mdl-37338981

ABSTRACT

Auditory memory is an important everyday skill, evaluated more and more frequently in clinical settings as the cost of hearing loss to cognitive systems has gained recognition. Testing often involves reading a list of unrelated items aloud, but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) for a novel protocol characterizing the effects of suprasegmental properties in speech, namely investigating pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to eventually work with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (over a matter of one minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.


Subjects
Cochlear Implants , Hearing Loss , Speech Perception , Humans , Hearing , Cues (Psychology) , Hearing Loss/diagnosis
16.
Front Neurosci ; 17: 1141886, 2023.
Article in English | MEDLINE | ID: mdl-37409105

ABSTRACT

Background: Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods: High-density electroencephalography (EEG) data were obtained under a resting-state condition from 75 children: 50 with CIs having good (HL) or poor (LL) language skills, and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and their effective connectivity, computing time-frequency causality estimates based on temporal partial directed coherence (TPDC), in the two CI groups compared to a cohort of age- and gender-matched NH children. Findings: Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta, and gamma) for the CI groups compared to the NH children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict language and reading scores with high accuracy. Interpretation: Increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled than in the NH group. Moreover, the different sources, their connectivity patterns, and their association with language and reading skill in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences in the two groups of CI children may reflect potential biomarkers for predicting outcome success in CI children.

17.
Brain Res Bull ; 205: 110817, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37989460

ABSTRACT

Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions, in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition to exhibit good language and literacy.


Subjects
Cochlear Implants, Deafness, Speech Perception, Child, Humans, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Electroencephalography
18.
Clin Neurophysiol ; 149: 133-145, 2023 05.
Article in English | MEDLINE | ID: mdl-36965466

ABSTRACT

OBJECTIVE: Although children with cochlear implants (CIs) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS: High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (the N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS: While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs and age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS: Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE: This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
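The MMN and P3 metrics described above are conventionally read off the deviant-minus-standard difference waveform. As a rough illustration (not the authors' pipeline; the sampling rate, search window, and single-channel handling are assumptions made here for the sketch), the MMN peak could be extracted like this:

```python
# Illustrative sketch: deriving MMN amplitude and latency from averaged ERPs.
# Sampling rate and search window are assumed values, not those of the study.

FS = 500  # sampling rate in Hz (assumed)

def difference_wave(standard, deviant):
    """Deviant-minus-standard difference waveform, sample by sample."""
    return [d - s for d, s in zip(deviant, standard)]

def mmn_peak(diff, fs=FS, t_min=0.10, t_max=0.25):
    """Most negative deflection of the difference wave in a 100-250 ms window.

    Returns (amplitude, latency_in_seconds) relative to stimulus onset.
    """
    i0, i1 = int(t_min * fs), int(t_max * fs)
    window = diff[i0:i1]
    amp = min(window)
    latency = (i0 + window.index(amp)) / fs
    return amp, latency
```

In practice the same peak-picking on the positive side of a later window (around Pz) would give the P3 metric; here only the MMN case is shown.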


Subjects
Cochlear Implantation, Cochlear Implants, Child, Humans, Acoustic Stimulation, Auditory Evoked Potentials/physiology, Auditory Perception/physiology, Electroencephalography
19.
J Acoust Soc Am ; 131(4): 2938-47, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22501071

ABSTRACT

Two experiments investigated the ability of 17 school-aged children to process purely temporal and spectro-temporal cues that signal changes in pitch. Percentage correct was measured for the discrimination of sinusoidal amplitude modulation rate (AMR) of broadband noise in experiment 1 and for the discrimination of fundamental frequency (F0) of broadband sine-phase harmonic complexes in experiment 2. The reference AMR was 100 Hz, as was the reference F0. A child-friendly interface helped listeners remain attentive to the task. Data were fitted using a maximum-likelihood technique that extracted threshold, slope, and lapse rate. All thresholds were subsequently standardized to a common d' value of 0.77. There were relatively large individual differences across listeners: eight had relatively adult-like thresholds in both tasks and nine had higher thresholds. However, these individual differences did not vary systematically with age over the span of 6-16 yr. Thresholds were correlated across the two tasks and were about nine times finer for F0 discrimination than for AMR discrimination, as has been previously observed in adults.
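The standardization to a common d' of 0.77 mentioned in the abstract corresponds, for a two-alternative forced-choice (2AFC) task, to the classic ~70.7%-correct point via d' = sqrt(2) * z(Pc). A minimal sketch of that conversion, assuming a 2AFC design (the full maximum-likelihood fit of threshold, slope, and lapse rate is not reproduced here):

```python
from statistics import NormalDist

_Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_2afc(pc):
    """Convert proportion correct in a 2AFC task to d': d' = sqrt(2) * z(Pc)."""
    return 2 ** 0.5 * _Z(pc)

def pc_for_dprime(d):
    """Proportion correct corresponding to a given d' in 2AFC."""
    return NormalDist().cdf(d / 2 ** 0.5)
```

For example, pc_for_dprime(0.77) evaluates to about 0.707, which is why this criterion is often described as the 70.7%-correct threshold point.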


Subjects
Cues (Psychology), Pitch Discrimination/physiology, Adolescent, Auditory Threshold, Child, Emotions/physiology, Humans, Noise, Perceptual Masking/physiology, Speech Acoustics
20.
PLoS One ; 17(12): e0278506, 2022.
Article in English | MEDLINE | ID: mdl-36459511

ABSTRACT

There is increasing interest in the fields of audiology and speech communication in measuring the effort that it takes to listen in noisy environments, with obvious implications for populations suffering from hearing loss. Pupillometry offers one avenue for progress in this enterprise, but important methodological questions remain to be addressed before such tools can serve practical applications. Typically, cocktail-party situations may occur in less-than-ideal lighting conditions, e.g. a pub or a restaurant, and it is unclear how robust pupil dynamics are to luminance changes. In this study, we first used a well-known paradigm in which sentences were presented at different signal-to-noise ratios (SNRs), all conducive to good intelligibility. This enabled us to replicate earlier findings, e.g. a larger and later peak pupil dilation (PPD) at adverse SNRs, or when the sentences were misunderstood, and to investigate the dependency of the PPD on sentence duration. A second experiment reiterated two of the SNR levels, 0 and +14 dB, but measured at 0, 75, and 220 lux. The results showed that the impact of luminance on the SNR effect was non-monotonic (sub-optimal in darkness or in bright light); as such, there is no trivial way to derive pupillary metrics that are robust to differences in background light, posing considerable constraints for applications of pupillometry in daily life. Our findings raise an under-examined but crucial issue for designing and understanding listening-effort studies using pupillometry, and offer important insights for future clinical application of pupillometry across sites.
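The peak pupil dilation (PPD) referred to above is typically computed per trial by correcting the pupil trace against a pre-stimulus baseline and then taking the maximum of the corrected trace. A minimal sketch under assumed parameters (60 Hz sampling and a 1 s baseline window, neither taken from the study):

```python
# Illustrative sketch of peak pupil dilation (PPD): baseline-correct a trial
# against a pre-stimulus window, then take the maximum dilation and its
# latency. Sampling rate and window length are assumptions for this sketch.

FS = 60  # pupillometer sampling rate in Hz (assumed)

def peak_pupil_dilation(trace, fs=FS, baseline_s=1.0):
    """Return (ppd, latency_s) relative to the mean of the baseline window.

    `trace` is a list of pupil-size samples; the first `baseline_s` seconds
    are treated as the pre-stimulus baseline.
    """
    n_base = int(baseline_s * fs)
    baseline = sum(trace[:n_base]) / n_base
    corrected = [x - baseline for x in trace[n_base:]]
    ppd = max(corrected)
    latency = corrected.index(ppd) / fs  # time from stimulus onset
    return ppd, latency
```

A "larger and later PPD" at adverse SNR, as replicated in the study, would show up here as a bigger `ppd` and a longer `latency`.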


Subjects
Pupil, Speech, Cognition, Auditory Perception, Signal-To-Noise Ratio