Results 1 - 20 of 2,104
1.
Digit Health ; 10: 20552076241271834, 2024.
Article in English | MEDLINE | ID: mdl-39139187

ABSTRACT

Objective: This study investigated the effectiveness of remote administration of speech audiometry, an essential tool for diagnosing hearing loss and determining its severity. Utilizing two software tools for remote testing, the research aimed to compare these digital methods with traditional, in-person speech audiometry to evaluate their feasibility and accuracy. Design: Participants underwent the Cantonese Hearing in Noise Test (CHINT) under three listening conditions (quiet, noise from the front, and noise from the right side) using three different administration methods: the conventional in-person approach, video conferencing software, and remote access software. Study Sample: Fifty-six Cantonese-speaking adults residing in Hong Kong participated in this study. Results: Analysis revealed no significant differences in CHINT scores among the three administration methods, indicating the potential for remote administration to yield results comparable to those of conventional methods. Conclusions: The findings supported the feasibility of remote speech audiometry using the investigated digital tools. This study paved the way for the wider adoption of tele-audiology practices, particularly in situations where in-person assessments are not possible.

2.
Cognition ; 251: 105909, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39111075

ABSTRACT

Vowelless words are typologically exceptionally rare, though they are found in some languages, such as Tashlhiyt (e.g., fkt 'give it'). The current study tests whether lexicons containing tri-segmental (CCC) vowelless words are more difficult for adult English speakers to acquire from brief auditory exposure than lexicons without vowelless words. The role of acoustic-phonetic form in learning these typologically rare word forms is also explored: in Experiment 1, participants were trained on words produced in either only Clear speech or only Casual speech; Experiment 2 trained participants on lexical items produced in both speech styles. Listeners learned vowelless and voweled lexicons equally well when speaking style was consistent, but learning was lower for vowelless lexicons when training consisted of variable acoustic-phonetic forms. In both experiments, responses to a post-training wordlikeness ratings task containing novel items revealed that exposure to a vowelless lexicon leads participants to accept new vowelless words as acceptable lexical forms. These results demonstrate that one of the typologically rarest types of lexical forms, words without vowels, can be rapidly acquired by naive adult listeners. Yet acoustic-phonetic variation modulates learning.


Subjects
Learning, Speech Perception, Humans, Adult, Speech Perception/physiology, Male, Female, Learning/physiology, Young Adult, Phonetics, Language
3.
Lang Speech ; : 238309241270741, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39189455

ABSTRACT

We sought to examine the contribution of visual cues, such as lipreading, to the identification of familiar stimuli (words) and unfamiliar stimuli (phonemes) in terms of percent accuracy. For that purpose, in this retrospective study, we presented lists of words and phonemes (recorded by a healthy adult female voice) in auditory (A) and audiovisual (AV) modalities to 65 Spanish normal-hearing male and female listeners classified into four age groups. Our results showed a remarkable benefit of AV information for word and phoneme recognition. Regarding gender, women exhibited better performance than men in both A and AV modalities, although we only found significant differences for words, not for phonemes. Concerning age, significant differences were detected in word recognition in the A modality only between the youngest (18-29 years old) and oldest (⩾50 years old) groups. We conclude that visual information enhances word and phoneme recognition and that women are more influenced by visual signals than men in AV speech perception. In contrast, it seems that, overall, age is not a limiting factor for word recognition, with no significant differences observed in the AV modality.

4.
Braz J Otorhinolaryngol ; 90(6): 101487, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39205366

ABSTRACT

OBJECTIVE: To analyze the Benefit of Modulated Masking (BMM) on hearing in young, adult and elderly normal-hearing individuals. METHODS: The sample included 60 normal-hearing individuals aged 18-75 years who underwent behavioral assessment (sentence recognition test in the presence of steady and modulated noise) and electrophysiological assessment (cortical Auditory Evoked Potential) to investigate BMM. The results were analyzed comparatively using the paired t-test and repeated-measures ANOVA, followed by the Bonferroni post-hoc test (p-value < 0.05). RESULTS: A decrease in latencies and an increase in amplitudes of the cortical components (P1-N1-P2) was observed with noise modulation in all age groups. Modulated noise generated better auditory threshold responses (electrophysiological and behavioral) than steady noise. The elderly presented higher thresholds in both hearing domains than the other participants, as well as a lower BMM magnitude. CONCLUSION: Modulated noise generated less interference in the magnitude of the neural response (larger amplitudes) and in neural processing time (shorter latencies) for the speech stimulus in all participants. The higher auditory thresholds (electrophysiological and behavioral) and the lower BMM magnitude observed in the elderly group, even with noise modulation, suggest lower temporal auditory performance in this population and may indicate a deficit in temporal resolution capacity associated with aging.
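The comparative analysis described above (a paired t-test with Bonferroni-corrected comparisons) can be sketched on synthetic data; all values below are hypothetical and for illustration only, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical within-subject data: sentence recognition thresholds (dB SNR)
# for 20 listeners under steady vs. modulated noise; lower is better.
steady = rng.normal(-6.0, 1.5, size=20)
modulated = steady - rng.normal(3.0, 1.0, size=20)   # simulated masking release

# Paired t-test on the two noise conditions
t, p = stats.ttest_rel(steady, modulated)

# Bonferroni adjustment for k planned pairwise comparisons
k = 3
p_adj = min(p * k, 1.0)
print(f"t = {t:.2f}, Bonferroni-adjusted p = {p_adj:.4f}")
```

The BMM itself would then simply be the threshold difference between the steady and modulated conditions for each listener.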

5.
Int J Audiol ; : 1-9, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39109478

ABSTRACT

OBJECTIVE: Developments in smartphone technology and the COVID-19 pandemic have highlighted the feasibility of, and need for, remote but reliable hearing tests. Previous studies used remote testing but did not directly compare results in the same listeners with standard lab or clinic testing. This study investigated the validity and reliability of remote, self-administered digits-in-noise (remote-DIN) testing compared with lab-based, supervised (lab-DIN) testing. Predictive validity was further examined in relation to a commonly used self-report measure, the Speech, Spatial, and Qualities of Hearing scale (SSQ-12), and lab-based pure-tone audiometry. DESIGN: DIN speech reception thresholds (SRTs) of adults (18-64 y/o) with normal hearing (NH, N = 16) and hearing loss (HL, N = 18) were measured using English-language digits (0-9), presented binaurally as triplets in one of four speech-shaped noise maskers (broadband, or low-pass filtered at 2, 4, or 8 kHz) and two phases (diotic, antiphasic). RESULTS: High, significant intraclass correlation coefficients indicated strong internal consistency of remote-DIN SRTs, which also correlated significantly with lab-DIN SRTs. There was no significant mean difference between remote- and lab-DIN on any test. NH listeners had significantly higher SSQ scores and remote- and lab-DIN SRTs than listeners with HL. All versions of remote-DIN SRTs correlated significantly with the pure-tone average (PTA), with the 2-kHz filtered test being the best predictor, explaining 50% of the variance in PTA. SSQ total score also significantly and independently predicted PTA (17% of variance) and all test versions of the remote-DIN, except the antiphasic BB test. CONCLUSIONS: This study underscores the effectiveness of the remote DIN test and SSQ-12 in assessing auditory function. These findings suggest the potential for wider access to reliable hearing assessment, particularly in remote or underserved communities.
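The reliability analysis above rests on intraclass correlation coefficients. A minimal sketch of a two-way random, single-measures ICC(2,1) on hypothetical remote vs. lab SRT pairs follows (the abstract does not specify which ICC form the authors used, so this is illustrative only):

```python
import numpy as np

def icc_2_1(x):
    """Two-way random, single-measures ICC(2,1) for an (n subjects x k sessions) array."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject variation
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-session variation
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical remote vs. lab SRTs (dB SNR) for 8 listeners
srt = np.array([[-9.1, -8.8], [-7.4, -7.9], [-10.2, -9.8], [-6.5, -6.9],
                [-8.0, -8.3], [-11.1, -10.7], [-5.9, -6.2], [-9.6, -9.4]])
print(f"ICC(2,1) = {icc_2_1(srt):.3f}")
```

Here the small within-pair differences relative to the between-subject spread yield a high ICC, the pattern the study reports for remote vs. lab testing.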

6.
Indian J Otolaryngol Head Neck Surg ; 76(4): 3283-3288, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39130235

ABSTRACT

Introduction: Central auditory processing disorder (CAPD) refers to difficulties in processing audible signals that are not attributable to impaired hearing sensitivity or mental impairment. The demographic characteristics of pediatric CAPD and its prevalence are still debated. Owing to varied definitions and differences in diagnostic criteria for CAPD, approximate prevalence estimates range from 0.5 to 7% of the population. Thus, a retrospective study on the prevalence of CAPD was conducted. Method: A total of 3537 cases aged 6-18 years with ear-related problems were reported to the Audiology OPD at the All India Institute of Speech and Hearing from June 2017 to July 2019. Of these, 32 cases were diagnosed with CAPD, and their data were available for review. Results: The prevalence of CAPD reported in this period was 0.7%. The results also revealed that prevalence was higher among males and individuals of lower socio-economic status. Their significant symptoms were poor academic performance and difficulty following commands or instructions. The data also revealed that speech perception in noise was the most affected process, followed by binaural integration. Conclusion: The study provides insight into the populations most vulnerable to CAPD (e.g., children, males, and people from lower socio-economic backgrounds).

7.
Indian J Otolaryngol Head Neck Surg ; 76(4): 3031-3036, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39130326

ABSTRACT

Purpose: Among the most evident functional effects of aging on the cognitive and processing aspects of spatial hearing are problems with sound localization and impaired speech perception in noise. The purpose of the present study was to investigate dynamic spatial auditory processing performance in the elderly. Methods: This descriptive and analytical study was conducted on 60 young participants aged 18 to 25 years and 60 elderly participants aged 60 to 75 years, using the Speech, Spatial and Qualities of Hearing Scale (SSQ) questionnaire, the binaural masking level difference (BMLD) test, and the dynamic quick speech-in-noise (DS-QSIN) test. Results: Comparing the average scores of the tests and the questionnaire using the independent t-test showed a significant difference between the two groups (p < 0.001). Gender had no effect on the results (p > 0.05). Conclusions: Aging is accompanied by structural and functional changes in the central auditory nervous system, which lead to reduced speech perception in challenging listening environments and diminished sound localization abilities, owing to the loss of temporal and spectral information. This affects identification of the sound source and spatial cognition in the elderly and disturbs awareness of the auditory environment. Auditory rehabilitation programs may therefore improve spatial auditory processing performance and speech perception in noise in the elderly.

9.
Neurobiol Lang (Camb) ; 5(3): 757-773, 2024.
Article in English | MEDLINE | ID: mdl-39175786

ABSTRACT

Over the past few decades, research into the function of the cerebellum has expanded far beyond the motor domain. A growing number of studies are probing the role of specific cerebellar subregions, such as Crus I and Crus II, in higher-order cognitive functions, including receptive language processing. In the current fMRI study, we show evidence for the cerebellum's sensitivity to variation in two well-studied psycholinguistic properties of words (lexical frequency and phonological neighborhood density) during passive, continuous listening to a podcast. To determine whether, and how, activity in the cerebellum correlates with these lexical properties, we modeled each word separately using an amplitude-modulated regressor, time-locked to the onset of each word. At the group level, significant effects of both lexical properties landed in the expected cerebellar subregions: Crus I and Crus II. The BOLD signal correlated with variation in each lexical property, consistent with both language-specific and domain-general mechanisms. Activation patterns at the individual level also showed that effects of phonological neighborhood density and lexical frequency landed in Crus I and Crus II as the most probable sites, though activation was also seen in other lobules (especially for frequency). Although the exact cerebellar mechanisms engaged during speech and language processing are not yet evident, these findings highlight the cerebellum's role in word-level processing during continuous listening.
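The amplitude-modulated regressor approach, a design-matrix column of word-onset impulses scaled by a lexical property and convolved with a hemodynamic response function, can be sketched as follows; the onset times, frequency values, and double-gamma HRF are illustrative assumptions, not the study's specification:

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(7)

# Canonical double-gamma HRF sampled at the scan rate (an assumption;
# the study's exact basis function is not given in the abstract)
tr = 0.5
t = np.arange(0, 32, tr)
hrf = stats.gamma.pdf(t, 6) - stats.gamma.pdf(t, 16) / 6.0

# Hypothetical word onsets (s) and demeaned log lexical frequencies
n_scans = 600
onsets = np.sort(rng.uniform(0, n_scans * tr - 1, 200))
lexfreq = rng.normal(0.0, 1.0, onsets.size)

# Unmodulated (stick) and amplitude-modulated regressors, time-locked
# to word onsets; the AM column carries per-word frequency variation
stick = np.zeros(n_scans)
am = np.zeros(n_scans)
idx = (onsets / tr).astype(int)
np.add.at(stick, idx, 1.0)       # np.add.at handles repeated indices correctly
np.add.at(am, idx, lexfreq)

X = np.column_stack([
    signal.fftconvolve(stick, hrf)[:n_scans],
    signal.fftconvolve(am, hrf)[:n_scans],
])
print("design matrix shape:", X.shape)
```

Fitting this design to voxel time series then yields, for each voxel, a weight on the AM column that indexes sensitivity to the lexical property over and above word onsets per se.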

10.
Trends Hear ; 28: 23312165241266316, 2024.
Article in English | MEDLINE | ID: mdl-39183533

ABSTRACT

During continuous speech perception, endogenous neural activity becomes time-locked to acoustic stimulus features, such as the speech amplitude envelope. This speech-brain coupling can be decoded using non-invasive brain imaging techniques, including electroencephalography (EEG). Neural decoding may have clinical use as an objective measure of stimulus encoding by the brain, for example during cochlear implant listening, wherein the speech signal is severely spectrally degraded. Yet interplay between acoustic and linguistic factors may lead to top-down modulation of perception, complicating audiological applications. To address this ambiguity, we assessed neural decoding of the speech envelope under spectral degradation with EEG in acoustically hearing listeners (n = 38; 18-35 years old) using vocoded speech. We dissociated sensory encoding from higher-order processing by employing intelligible (English) and non-intelligible (Dutch) stimuli, with auditory attention sustained using a repeated-phrase detection task. Subject-specific and group decoders were trained to reconstruct the speech envelope from held-out EEG data, with decoder significance determined via random permutation testing. Whereas speech envelope reconstruction did not vary by spectral resolution, intelligible speech was associated with better decoding accuracy in general. Results were similar across subject-specific and group analyses, with less consistent effects of spectral degradation in group decoding. Permutation tests revealed possible differences in decoder statistical significance by experimental condition. In general, while robust neural decoding was observed at the individual and group level, variability within participants would most likely prevent the clinical use of such a measure to differentiate levels of spectral degradation and intelligibility on an individual basis.


Subjects
Acoustic Stimulation, Electroencephalography, Speech Intelligibility, Speech Perception, Humans, Speech Perception/physiology, Female, Male, Adolescent, Adult, Young Adult, Speech Acoustics, Brain/physiology
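Backward-model envelope reconstruction of the kind described above is typically a regularised linear mapping from EEG channels to the stimulus envelope. A toy sketch with simulated signals follows (the study's actual decoder, lags, and regularisation are not specified in the abstract; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for EEG decoding: a slow "envelope" is embedded,
# with channel-specific weights, in 16 noisy channels.
n_samples, n_channels = 2000, 16
t = np.arange(n_samples)
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * t / 250)
mixing = rng.normal(0.0, 1.0, n_channels)
eeg = np.outer(envelope, mixing) + rng.normal(0.0, 1.0, (n_samples, n_channels))

# Ridge-regularised linear backward model (decoder): train on the first
# half of the data, reconstruct the envelope from the held-out half.
lam = 1e2
train, test = slice(0, 1000), slice(1000, 2000)
X, y = eeg[train], envelope[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)
reconstruction = eeg[test] @ w

# Decoding accuracy: correlation between actual and reconstructed envelope
r = np.corrcoef(envelope[test], reconstruction)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```

Permutation testing, as in the study, would repeat this with the envelope shifted or shuffled relative to the EEG to build a null distribution for r.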
11.
Proc Natl Acad Sci U S A ; 121(34): e2411167121, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39136991

ABSTRACT

Evidence is accumulating that the cerebellum's role in the brain is not restricted to motor functions. Rather, cerebellar activity seems to be crucial for a variety of tasks that rely on precise event timing and prediction. Due to its complex structure and importance in communication, human speech requires particularly precise and predictive coordination of neural processes to be successfully comprehended. Recent studies have proposed that the cerebellum is indeed a major contributor to speech processing, but how this contribution is achieved mechanistically remains poorly understood. The current study aimed to reveal a mechanism underlying cortico-cerebellar coordination and to demonstrate its speech-specificity. In a reanalysis of magnetoencephalography data, we found that activity in the cerebellum aligned with rhythmic sequences of noise-vocoded speech, irrespective of its intelligibility. We then tested whether these "entrained" responses persist, and how they interact with other brain regions, when a rhythmic stimulus stopped and temporal predictions had to be updated. We found that only intelligible speech produced sustained rhythmic responses in the cerebellum. During this "entrainment echo," but not during rhythmic speech itself, cerebellar activity was coupled with that in the left inferior frontal gyrus, specifically at rates corresponding to the preceding stimulus rhythm. This finding represents evidence for specific cerebellum-driven temporal predictions in speech processing and their relay to cortical regions.


Subjects
Cerebellum, Magnetoencephalography, Humans, Cerebellum/physiology, Male, Female, Adult, Speech Perception/physiology, Young Adult, Speech/physiology, Speech Intelligibility/physiology
12.
J Clin Med ; 13(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39200929

ABSTRACT

Background/Objectives: Autism spectrum disorder (ASD) is a lifelong neurodevelopmental condition characterised by impairments in social communication, sensory abnormalities, and attentional deficits. Children with ASD often face significant challenges with speech perception and auditory attention, particularly in noisy environments. This study aimed to assess the effectiveness of noise cancelling Bluetooth earbuds (Nuheara IQbuds Boost) in improving speech perception and auditory attention in children with ASD. Methods: Thirteen children aged 6-13 years diagnosed with ASD participated. Pure tone audiometry confirmed normal hearing levels. Speech perception in noise was measured using the Consonant-Nucleus-Consonant-Word test, and auditory/visual attention was evaluated via the Integrated Visual and Auditory Continuous Performance Task. Participants completed these assessments both with and without the IQbuds in situ. A two-week device trial evaluated classroom listening and communication improvements using the Listening Inventory for Education-Revised (teacher version) questionnaire. Results: Speech perception in noise was significantly poorer for the ASD group compared to typically developing peers and did not change with the IQbuds. Auditory attention, however, significantly improved when the children were using the earbuds. Additionally, classroom listening and communication improved significantly after the two-week device trial. Conclusions: While the noise cancelling earbuds did not enhance speech perception in noise for children with ASD, they significantly improved auditory attention and classroom listening behaviours. These findings suggest that Bluetooth earbuds could be a viable alternative to remote microphone systems for enhancing auditory attention in children with ASD, offering benefits in classroom settings and potentially minimising the stigma associated with traditional assistive listening devices.

13.
Psychon Bull Rev ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112905

ABSTRACT

Adults struggle to learn non-native speech categories in many experimental settings (Goto, Neuropsychologia, 9(3), 317-323, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim & Holt, Cognitive Science, 35(7), 1390-1405, 2011). Behavioral and neural evidence from this and other paradigms points toward the involvement of reinforcement learning mechanisms in speech category learning (Harmon, Idemaru, & Kapatsinski, Cognition, 189, 76-88, 2019; Lim, Fiez, & Holt, Proceedings of the National Academy of Sciences, 116, 201811992, 2019). We formalize this hypothesis computationally and implement a deep reinforcement learning network to map between environmental input and actions. Comparing against a supervised model of learning, we show that the reinforcement network closely matches aspects of human behavior in two experiments: learning of synthesized auditory noise tokens and improvement in speech sound discrimination. Both models perform comparably, and the similarity in the output of each model leads us to believe that there is little inherent computational benefit to a reward-based learning mechanism. We suggest that the specific neural circuitry engaged by the paradigm, and links between the striatum and superior temporal areas, play a critical role in effective learning.
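The reward-driven view of category learning can be illustrated with a minimal tabular sketch, a bandit-style update rather than the paper's deep network; the stimuli, rewards, and parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy reward-based category learning: two "sound categories" (states 0/1)
# and two responses; the correct response equals the state. Reward-driven
# updates stand in for the functional significance of sounds in the
# video-game paradigm.
n_states, n_actions = 2, 2
q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.2, 0.1

for trial in range(500):
    s = rng.integers(n_states)               # random stimulus category
    if rng.random() < epsilon:               # epsilon-greedy exploration
        a = rng.integers(n_actions)
    else:
        a = int(np.argmax(q[s]))
    r = 1.0 if a == s else 0.0               # reward for correct categorisation
    q[s, a] += alpha * (r - q[s, a])         # one-step (bandit-style) value update

policy = q.argmax(axis=1)
print("learned policy:", policy)
```

A supervised comparison model, as in the paper, would instead receive the correct label on every trial rather than only a scalar reward for the chosen action.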

14.
Sci Rep ; 14(1): 17524, 2024 07 30.
Article in English | MEDLINE | ID: mdl-39080361

ABSTRACT

This study aims to analyse volumetric changes in brain MRI after cochlear implantation (CI), focusing on speech perception in postlingually deaf adults. We conducted a prospective cohort study with 16 patients who had bilateral hearing loss and received unilateral CI. Based on the surgical side, patients were categorized into left and right CI groups. Volumetric T1-weighted brain MRI scans were obtained before and one year after surgery. To overcome the artifact caused by the internal device in the post-CI scan, a new image reconstruction method was devised and applied, using the contralateral hemisphere of the pre-CI MRI data, to run FreeSurfer. We conducted within-subject template estimation for unbiased longitudinal image analysis, based on linear mixed-effects models. When analyzing the contralateral cerebral hemisphere before and after CI, a substantial increase in superior frontal gyrus and superior temporal gyrus (STG) volumes was observed in the left CI group. A positive correlation was observed between STG volume and the post-CI word recognition score in both groups. As far as we know, this is the first study attempting longitudinal brain volumetry based on post-CI MRI scans. We demonstrate that better auditory performance after CI is associated with structural restoration in central auditory structures.


Subjects
Cochlear Implantation, Deafness, Magnetic Resonance Imaging, Speech Perception, Humans, Male, Female, Cochlear Implantation/methods, Speech Perception/physiology, Magnetic Resonance Imaging/methods, Deafness/physiopathology, Deafness/surgery, Deafness/diagnostic imaging, Adult, Middle Aged, Prospective Studies, Aged, Cochlear Implants
15.
Clin Linguist Phon ; : 1-19, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965823

ABSTRACT

This study explores the influence of lexicality on gradient judgments of Swedish sibilant fricatives by contrasting ratings of initial fricatives in words and word fragments (initial CV-syllables). Visual-Analogue Scale (VAS) judgments were elicited from experienced listeners (speech-language pathologists; SLPs) and inexperienced listeners, and compared with respect to the effects of lexicality using Bayesian mixed-effects beta regression. Overall, SLPs had higher intra- and interrater reliability than inexperienced listeners. SLPs as a group also rated fricatives as more target-like, with higher precision, than inexperienced listeners did. An effect of lexicality was observed for all individual listeners, though the magnitude of the effect varied. Although SLPs' ratings of Swedish children's initial voiceless fricatives were less influenced by lexicality, our results indicate that previous findings concerning VAS ratings of non-lexical CV-syllables cannot be directly transferred to the clinical context without consideration of possible lexical bias.
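Beta regression models ratings bounded in (0, 1) with a logit-linked mean, which suits VAS data. A maximum-likelihood sketch on simulated ratings follows; the study used a Bayesian mixed-effects version, so this simplified fixed-effects variant, with hypothetical data and coefficients, is for illustration only:

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(3)

# Hypothetical VAS ratings rescaled to (0, 1) and a binary lexicality predictor
x = rng.integers(0, 2, 200)                  # 0 = syllable fragment, 1 = word
mu_true = special.expit(0.2 + 0.8 * x)       # assumed lexicality effect on the mean
phi_true = 20.0                              # assumed precision
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def nll(params):
    """Negative log-likelihood of a logit-link beta regression."""
    b0, b1, log_phi = params
    mu = special.expit(b0 + b1 * x)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum((a - 1) * np.log(y) + (b - 1) * np.log1p(-y)
                   - special.betaln(a, b))

res = optimize.minimize(nll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, log_phi = res.x
print(f"estimated lexicality effect (logit scale): {b1:.2f}")
```

A mixed-effects version, as in the study, would additionally give each listener their own intercept and lexicality slope.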

16.
Infant Behav Dev ; 76: 101977, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39002494

ABSTRACT

Language development during the first year of life is characterized by perceptual attunement: following initial language-general perception, a decline in the perception of non-native phonemes and a parallel increase in, or maintenance of, the perception of native phonemes. While this general pattern is well established, there are still many gaps in the literature. First, most evidence documenting these patterns comes from "Minority world countries," with only a limited number of studies from "Majority world countries," limiting the range of languages and contrasts assessed. Second, few studies test the developmental patterns of both native and non-native speech perception in the same group of infants, making it hard to draw conclusions about a simultaneous decline in non-native and increase in native speech perception. Such limitations are partly due to the effort involved in testing developing speech sound perception, where usually only discrimination of one contrast per infant can be tested at a time. The present study thus set out to assess the feasibility of assessing a given infant's discrimination of two speech sound contrasts during the same lab visit. It leveraged documented patterns of improvement in native and decline in non-native phoneme discrimination abilities in Japanese, assessing native and non-native speech perception in Japanese infants from 6 to 12 months of age. Results demonstrated that 76% of infants contributed discrimination data for both contrasts. We found a decline in non-native speech perception, evident in discrimination of the non-native /ɹ/-/l/ consonant contrast at 9-11, but not at 11-13, months of age. Additionally, a parallel increase in native speech perception was demonstrated, evident in an absence of native phonemic vowel-length discrimination at 6-7 and 9-11 months and discrimination of this contrast at 11-13 months of age. These results, based on a simultaneous assessment of native and non-native speech perception in Japanese-learning infants, demonstrate the feasibility of assessing discrimination of two contrasts in one testing session and corroborate theoretical proposals on two hallmarks of perceptual attunement: a decrease in non-native and a facilitation of native speech perception during the first year of life.

17.
Int Arch Otorhinolaryngol ; 28(3): e492-e501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974629

ABSTRACT

Introduction Limited access to temporal fine structure (TFS) cues is one reason for reduced speech-in-noise recognition in cochlear implant (CI) users. CI signal processing schemes like electroacoustic stimulation (EAS) and fine structure processing (FSP) encode TFS in the low frequencies, whereas theoretical strategies such as the frequency amplitude modulation encoder (FAME) encode TFS in all bands. Objective The present study compared the effect of simulated CI signal processing schemes that encode no TFS, TFS in all bands, or TFS only in low-frequency bands on concurrent vowel identification (CVI) and Zebra speech perception (ZSP). Methods Temporal fine structure information was systematically manipulated using a 30-band sine-wave (SV) vocoder. The TFS was either absent (SV), presented in all bands as frequency modulations simulating the FAME algorithm, or presented only in bands below 525 Hz to simulate EAS. Concurrent vowel identification and ZSP were measured under each condition in 15 adults with normal hearing. Results The CVI scores did not differ between the three schemes (F(2, 28) = 0.62, p = 0.55, ηp² = 0.04). An effect of encoding TFS was observed for ZSP (F(2, 28) = 5.73, p = 0.008, ηp² = 0.29). Perception of Zebra speech was significantly better with EAS and FAME than with SV. There was no significant difference in ZSP scores obtained with EAS and FAME (p = 1.00). Conclusion For ZSP, the TFS cues from FAME and EAS resulted in equivalent improvements in performance compared to the SV scheme. The presence or absence of TFS did not affect the CVI scores.
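A channel vocoder of the kind described, with band envelopes re-imposed on sine carriers and TFS discarded, can be sketched as follows; the band edges, filter order, and test signal are illustrative assumptions rather than the study's 30-band configuration:

```python
import numpy as np
from scipy import signal

def tone_vocode(x, fs, edges):
    """Minimal sine-carrier (tone) vocoder sketch: per-band envelope
    extraction, then re-synthesis on fixed sine carriers at band centres.
    TFS is discarded (cf. the SV condition); FAME/EAS-style schemes would
    additionally frequency-modulate the carriers."""
    out = np.zeros_like(x)
    t = np.arange(len(x)) / fs
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfiltfilt(sos, x)
        env = np.abs(signal.hilbert(band))                   # band envelope
        carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)   # geometric band centre
        out += env * carrier
    return out

# Toy input: a 300-Hz tone with 4-Hz amplitude modulation
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = tone_vocode(x, fs, edges=[100, 500, 1000, 2000, 4000])
print("vocoded signal:", y.shape)
```

In this sketch the slow amplitude modulation survives in the output while the 300-Hz fine structure is replaced by the fixed carriers, which is precisely the information loss the SV condition imposes.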

18.
Disabil Rehabil Assist Technol ; : 1-7, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976231

ABSTRACT

Purpose: The study examined the benefits of transparent versus non-transparent surgical masks for the speech intelligibility in quiet of adult cochlear implant (CI) users, in conjunction with patient preferences and the acoustic effects of the different masks on the speech signal. Methods: Speech tracking test (STT) scores and acoustical characteristics were measured in quiet for live speech in three conditions: without a mask, with a non-transparent surgical mask, and with a transparent surgical mask. Patients were asked about their experience with the face masks. The study sample consisted of 30 patients using a cochlear implant. Results: We found a significant difference in speech perception among all conditions, with the speech tracking scores revealing a significant advantage when switching from the non-transparent surgical mask to the transparent one. The transparent surgical mask, although it does not transmit high frequencies effectively, seems to have minimal effect on speech comprehension in practice when lip movements are visible. This substantial benefit is further emphasized in the questionnaire, where 82% of the patients expressed a preference for the transparent surgical mask. Conclusion: The study highlights significant benefits for patients in speech intelligibility in quiet with the use of medically safe transparent face masks. Transitioning from standard surgical masks to transparent masks demonstrates high effectiveness and patient satisfaction for patients with hearing loss. This research strongly advocates for the implementation of transparent masks in broader hospital and perioperative settings.


In scenarios mandating mask usage, it is advisable for caregivers to opt for transparent surgical masks. Specifically within perioperative settings, where patients might not be able to utilise their hearing aids or cochlear implants, it becomes imperative for all caregivers to consistently wear transparent surgical masks to prevent communication impediments. When utilising a transparent surgical mask, caregivers must recognise that sound may be altered and that maintaining a clear view of the face and lips is crucial for effective communication.

19.
Dev Sci ; : e13551, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39036879

ABSTRACT

Test-retest reliability, establishing that measurements remain consistent across multiple testing sessions, is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and the reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring participating infants in for a second appointment, retesting the infants on their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = 0.09, 95% CI [-0.06, 0.25]). While increasing the number of trials that infants needed to contribute for inclusion in the analysis revealed a numeric growth in test-retest reliability, it also considerably reduced the study's effective sample size. Therefore, future research on infant development should take into account that not all experimental measures may be appropriate for assessing individual differences between infants. RESEARCH HIGHLIGHTS: We assessed the test-retest reliability of infants' preference for infant-directed over adult-directed speech in a large pre-registered sample (N = 158). There was no consistent evidence of test-retest reliability in measures of infants' speech preference. Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size. Developmental research relying on stable individual differences should consider the underlying reliability of its measures.
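The reported correlation and interval can be sanity-checked with a Fisher z confidence interval, which yields values close to the CI quoted in the abstract (the authors' exact interval method may differ):

```python
import numpy as np

def pearson_ci(r, n, z=1.96):
    """Approximate 95% CI for a Pearson correlation via the Fisher z-transform."""
    zr = np.arctanh(r)              # r -> z scale
    se = 1.0 / np.sqrt(n - 3)       # standard error on the z scale
    lo = np.tanh(zr - z * se)       # back-transform interval endpoints
    hi = np.tanh(zr + z * se)
    return lo, hi

# Reported values: r = 0.09 with N = 158
lo, hi = pearson_ci(0.09, 158)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
```

An interval spanning zero, as here, is what underlies the conclusion that there is no consistent evidence of test-retest reliability.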

20.
Article in English | MEDLINE | ID: mdl-39060407

ABSTRACT

PURPOSE: Tinnitus is a condition that causes people to hear sounds without an external source. One significant issue arising from this condition is difficulty communicating, especially against noisy backgrounds. Understanding speech in challenging situations requires both cognitive and auditory abilities. Since tinnitus presents unique challenges, it is important to investigate how it affects speech perception in noise. METHOD: In this review, 32 articles were examined to determine the effect of tinnitus on speech-in-noise perception performance. A meta-analysis was performed using a random-effects model, and meta-regression was used to explore the moderating effects of age and hearing acuity. RESULTS: The meta-analysis of the 32 reviewed studies revealed that tinnitus significantly impairs speech-in-noise perception performance. Additionally, the regression analysis revealed that age and hearing acuity are not significant predictors of speech-in-noise perception. CONCLUSION: Our findings suggest that tinnitus affects speech perception in noisy environments due to cognitive impairments and central auditory processing deficits. Hearing loss and aging also contribute to reduced speech-in-noise performance. Interventions and further research are necessary to address the individual challenges associated with continuous subjective tinnitus.
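Random-effects pooling of this kind is commonly done with the DerSimonian-Laird estimator; a sketch on hypothetical effect sizes (not the review's data) illustrates the computation:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect via the DerSimonian-Laird estimator."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()        # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                 # between-study variance estimate
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, se, tau2

# Hypothetical standardised mean differences (tinnitus vs. control) and variances
effects = [0.45, 0.30, 0.62, 0.20, 0.51]
variances = [0.02, 0.03, 0.05, 0.04, 0.02]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```

Meta-regression, as used in the review for age and hearing acuity, extends this by regressing the study effects on moderators with the same inverse-variance weighting.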
