Results 1 - 20 of 28,376
1.
J Speech Lang Hear Res ; 67(4): 1020-1041, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38557114

ABSTRACT

PURPOSE: The purpose of this study was to identify commonalities and differences between content components in stuttering treatment programs for preschool-age children. METHOD: In this document analysis, a thematic analysis was conducted of the content of handbooks and manuals describing Early Childhood Stuttering Therapy, the Lidcombe Program, Mini-KIDS, Palin Parent-Child Interaction Therapy, the RESTART Demands and Capacities Model Method, and the Westmead Program. First, a theoretical framework defining a content component in treatment was developed. Second, we coded and categorized the data following the procedure of reflexive thematic analysis. In addition, the first authors of the treatment documents reviewed the findings of this study, and their feedback was analyzed and taken into consideration. RESULTS: Sixty-one content components within seven themes (interaction, coping, reactions, everyday life, information, language, and speech) were identified across the treatment programs. The content component SLP providing information about the child's stuttering was identified across all treatment programs. All programs are multithematic, and no treatment program focuses solely on speech, language, or parent-child interaction. A comparison of programs with equal treatment goals highlighted more commonalities in content components across the programs. The differences between the treatment programs were evident both in the number of content components, which ranged from seven to 39, and in the content included in each program. CONCLUSIONS: Only one common content component was identified across programs, and the number and types of components vary widely. The role that the common content component plays in treatment effects is discussed, alongside implications for research and clinical practice. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25457929.


Subjects
Stuttering, Humans, Preschool Child, Stuttering/therapy, Speech Therapy/methods, Document Analysis, Treatment Outcome, Speech
2.
J Speech Lang Hear Res ; 67(4): 1143-1164, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38568053

ABSTRACT

PURPOSE: Connected speech analysis has been effectively utilized for the diagnosis and disease monitoring of individuals with Alzheimer's disease (AD). Existing research has been conducted mostly in monolingual English speakers with a noticeable lack of evidence from bilinguals and non-English speakers, particularly in non-European languages. Using a case study approach, we characterized connected speech profiles of two Bengali-English bilingual speakers with AD to determine the universal features of language impairments in both languages, identify language-specific differences between the languages, and explore language impairment characteristics of the participants with AD in relation to their bilingual language experience. METHOD: Participants included two Bengali-English bilingual speakers with AD and a group of age-, gender-, education-, and language-matched neurologically healthy controls. Connected speech samples were collected in first language (L1; Bengali) and second language (L2; English) using a novel storytelling task (i.e., Frog, Where Are You?). These samples were analyzed using an augmented quantitative production analysis and correct information unit analyses for productivity, fluency, syntactic and morphosyntactic features, and lexical and semantic characteristics. RESULTS: Irrespective of the language, AD impacted speech productivity (speech rate and fluency) and semantic characteristics in both languages. Unique language-specific differences were noted on syntactic measures (reduced sentence length in Bengali), lexical distribution (fewer pronouns and absence of reduplication in Bengali), and inflectional properties (no difficulties with noun or verb inflections in Bengali). Among the two participants with AD, the individual who showed lower proficiency and usage in L2 (English) demonstrated reduced syntactic complexity and morphosyntactic richness in English. 
CONCLUSIONS: Evidence from these case studies suggests that language impairment features in AD are not universal across languages, particularly in comparison to impairments typically associated with language breakdowns in English. This study underscores the importance of establishing connected speech profiles in AD for non-English-speaking populations, especially for structurally different languages. This would in turn lead to the development of language-specific markers that can facilitate early detection of language deterioration and aid in improving diagnosis of AD in individuals belonging to underserved linguistically diverse populations. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25412458.


Subjects
Alzheimer Disease, Language Development Disorders, Multilingualism, Humans, Speech, Language
3.
JASA Express Lett ; 4(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38573045

ABSTRACT

The present study examined English vowel recognition in multi-talker babbles (MTBs) in 20 normal-hearing, native-English-speaking adult listeners. Twelve vowels, embedded in the h-V-d structure, were presented in MTBs consisting of 1, 2, 4, 6, 8, 10, and 12 talkers (numbers of talkers [N]) and a speech-shaped noise at signal-to-noise ratios of -12, -6, and 0 dB. Results showed that vowel recognition performance was a non-monotonic function of N when signal-to-noise ratios were less favorable. The masking effects of MTBs on vowel recognition were most similar to consonant recognition but less so to word and sentence recognition reported in previous studies.
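As context for the masking conditions above, fixing a signal-to-noise ratio amounts to rescaling the masker relative to the target before mixing. The sketch below is a minimal, illustrative implementation; the random signals are stand-ins for an h-V-d token and a babble, not the study's stimuli:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return (mixture, scaled_noise)."""
    ps = np.mean(speech ** 2)
    pn = np.mean(noise ** 2)
    scaled = noise * np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return speech + scaled, scaled

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a target vowel token
babble = rng.standard_normal(16000)   # stand-in for a multi-talker babble
mixture, scaled = mix_at_snr(speech, babble, -6.0)
```

The same routine covers any masker (1- to 12-talker babble or speech-shaped noise); only the `noise` waveform changes.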


Subjects
Language, Speech, Adult, Humans, Recognition (Psychology), Signal-to-Noise Ratio
4.
PLoS One ; 19(4): e0301514, 2024.
Article in English | MEDLINE | ID: mdl-38564597

ABSTRACT

Evoked potential studies have shown that speech planning modulates auditory cortical responses. The phenomenon's functional relevance is unknown. We tested whether, during this time window of cortical auditory modulation, there is an effect on speakers' perceptual sensitivity for vowel formant discrimination. Participants made same/different judgments for pairs of stimuli consisting of a pre-recorded, self-produced vowel and a formant-shifted version of the same production. Stimuli were presented prior to a "go" signal for speaking, prior to passive listening, and during silent reading. The formant discrimination stimulus /uh/ was tested with a congruent productions list (words with /uh/) and an incongruent productions list (words without /uh/). Logistic curves were fitted to participants' responses, and the just-noticeable difference (JND) served as a measure of discrimination sensitivity. We found a statistically significant effect of condition (worst discrimination before speaking) without congruency effect. Post-hoc pairwise comparisons revealed that JND was significantly greater before speaking than during silent reading. Thus, formant discrimination sensitivity was reduced during speech planning regardless of the congruence between discrimination stimulus and predicted acoustic consequences of the planned speech movements. This finding may inform ongoing efforts to determine the functional relevance of the previously reported modulation of auditory processing during speech planning.
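A just-noticeable difference of the kind described above can be read off a fitted logistic psychometric curve. The sketch below fits such a curve by grid search; the response proportions and the 0.5 criterion are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical same/different data: formant shift (Hz) vs. proportion
# of "different" responses; values are made up for illustration.
shifts = np.array([0.0, 10, 20, 30, 40, 50, 60])
p_diff = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98])

def fit_jnd(x, p):
    """Grid-search a logistic psychometric curve and return the shift
    at which p("different") crosses 0.5 (one common JND criterion)."""
    best_err, best_t = np.inf, None
    for t in np.linspace(x.min(), x.max(), 241):   # threshold grid
        for s in np.linspace(0.01, 0.5, 50):       # slope grid
            pred = 1.0 / (1.0 + np.exp(-s * (x - t)))
            err = np.sum((pred - p) ** 2)
            if err < best_err:
                best_err, best_t = err, t
    return best_t

jnd = fit_jnd(shifts, p_diff)
```

A larger fitted threshold means poorer discrimination sensitivity, which is the direction of the effect reported before speaking.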


Subjects
Auditory Cortex, Speech Perception, Humans, Speech/physiology, Speech Perception/physiology, Acoustics, Movement, Phonetics, Speech Acoustics
5.
Elife ; 12, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38577982

ABSTRACT

A core aspect of human speech comprehension is the ability to incrementally integrate consecutive words into a structured and coherent interpretation, aligning with the speaker's intended meaning. This rapid process is subject to multidimensional probabilistic constraints, including both linguistic knowledge and non-linguistic information within specific contexts, and it is their interpretative coherence that drives successful comprehension. To study the neural substrates of this process, we extracted word-by-word measures of sentential structure from BERT, a deep language model, which effectively approximates the coherent outcomes of the dynamic interplay among various types of constraints. Using representational similarity analysis, we tested BERT parse depths and relevant corpus-based measures against the spatiotemporally resolved brain activity recorded by electro-/magnetoencephalography when participants were listening to the same sentences. Our results provide a detailed picture of the neurobiological processes involved in the incremental construction of structured interpretations. These findings show when and where coherent interpretations emerge through the evaluation and integration of multifaceted constraints in the brain, which engages bilateral brain regions extending beyond the classical fronto-temporal language system. Furthermore, this study provides empirical evidence supporting the use of artificial neural networks as computational models for revealing the neural dynamics underpinning complex cognitive processes in the brain.
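Representational similarity analysis of the sort described above compares the geometry of a model measure with the geometry of recorded activity patterns. A minimal sketch with synthetic data; the "parse depths" and "sensor" patterns below are invented for illustration, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical word-by-word model measure (e.g., parse depth) for 8 words,
# plus simulated sensor patterns whose geometry partly tracks it.
depth = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 3.0, 4.0, 2.0])
template = rng.standard_normal(32)
brain = depth[:, None] * template + 0.1 * rng.standard_normal((8, 32))

def model_rdm(measure):
    """Dissimilarity between conditions for a scalar model measure."""
    return np.abs(measure[:, None] - measure[None, :])

def pattern_rdm(patterns):
    """Pairwise Euclidean distance between condition patterns."""
    diff = patterns[:, None, :] - patterns[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def spearman(a, b):
    """Spearman rank correlation (no tie correction) of two vectors."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

i, j = np.triu_indices(len(depth), k=1)   # unique condition pairs
rsa = spearman(model_rdm(depth)[i, j], pattern_rdm(brain)[i, j])
```

In practice this comparison is repeated per sensor/source and per time window, yielding the spatiotemporal maps the abstract describes.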


Subjects
Comprehension, Speech, Humans, Brain, Magnetoencephalography/methods, Language
6.
JASA Express Lett ; 4(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38568027

ABSTRACT

This study investigates speech production under various room acoustic conditions in virtual environments, by comparing vocal behavior and the subjective experience of speaking in four real rooms and their audio-visual virtual replicas. Sex differences were explored. Males and females (N = 13) adjusted their voice levels similarly to room acoustic changes in the real rooms, but only males did so in the virtual rooms. Females, however, rated the visual virtual environment as more realistic compared to males. This suggests a discrepancy between sexes regarding the experience of realism in a virtual environment and changes in objective behavioral measures such as voice level.


Subjects
Sex Characteristics, Speech, Female, Male, Humans, Acoustics
7.
Sci Rep ; 14(1): 8181, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589483

ABSTRACT

Temporal envelope modulations (TEMs) are one of the most important features that cochlear implant (CI) users rely on to understand speech. Electroencephalographic assessment of TEM encoding could help clinicians to predict speech recognition more objectively, even in patients unable to provide active feedback. The acoustic change complex (ACC) and the auditory steady-state response (ASSR) evoked by low-frequency amplitude-modulated pulse trains can be used to assess TEM encoding with electrical stimulation of individual CI electrodes. In this study, we focused on amplitude modulation detection (AMD) and amplitude modulation frequency discrimination (AMFD) with stimulation of a basal versus an apical electrode. In twelve adult CI users, we (a) assessed behavioral AMFD thresholds and (b) recorded cortical auditory evoked potentials (CAEPs), AMD-ACC, AMFD-ACC, and ASSR in a combined 3-stimulus paradigm. We found that the electrophysiological responses were significantly higher for apical than for basal stimulation. Peak amplitudes of AMFD-ACC were small and (therefore) did not correlate with speech-in-noise recognition. We found significant correlations between speech-in-noise recognition and (a) behavioral AMFD thresholds and (b) AMD-ACC peak amplitudes. AMD and AMFD hold potential to develop a clinically applicable tool for assessing TEM encoding to predict speech recognition in CI users.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Psychoacoustics, Speech Perception/physiology, Speech, Acoustic Stimulation, Auditory Evoked Potentials/physiology
8.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566511

ABSTRACT

This study investigates neural processes in infant speech processing, with a focus on left frontal brain regions and hemispheric lateralization in Mandarin-speaking infants' acquisition of native tonal categories. We tested 2- to 6-month-old Mandarin learners to explore age-related improvements in tone discrimination, the role of inferior frontal regions in abstract speech category representation, and left hemisphere lateralization during tone processing. Using a block design, we presented four Mandarin tones via the syllable [ta] and measured oxygenated hemoglobin concentration with functional near-infrared spectroscopy. Results showed age-related improvements in tone discrimination, greater involvement of frontal regions in older infants (indicating the development of abstract tonal representations), and increased bilateral activation mirroring that of native adult Mandarin speakers. These findings contribute to our broader understanding of the relationship between native speech acquisition and infant brain development during the critical period of early language learning.


Subjects
Speech Perception, Speech, Adult, Infant, Humans, Aged, Speech Perception/physiology, Pitch Perception/physiology, Language Development, Brain/diagnostic imaging, Brain/physiology
9.
Sci Rep ; 14(1): 7697, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565624

ABSTRACT

The rapid increase in biomedical publications necessitates efficient systems to automatically handle Biomedical Named Entity Recognition (BioNER) tasks in unstructured text. However, accurately detecting biomedical entities is quite challenging due to the complexity of their names and the frequent use of abbreviations. In this paper, we propose BioBBC, a deep learning (DL) model that utilizes multi-feature embeddings and is built on a BERT-BiLSTM-CRF architecture to address the BioNER task. BioBBC consists of three main layers: an embedding layer, a bidirectional Long Short-Term Memory (BiLSTM) layer, and a Conditional Random Fields (CRF) layer. BioBBC takes sentences from the biomedical domain as input and identifies the biomedical entities mentioned within the text. The embedding layer generates enriched contextual representation vectors of the input by learning the text through four types of embeddings: part-of-speech (POS) tag embeddings, character-level embeddings, BERT embeddings, and data-specific embeddings. The BiLSTM layer produces additional syntactic and semantic feature representations. Finally, the CRF layer identifies the best possible tag sequence for the input sentence. Our model is well-constructed and well-optimized for detecting different types of biomedical entities. Based on experimental results, our model outperformed state-of-the-art (SOTA) models, with significant improvements on six benchmark BioNER datasets.
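The CRF layer's contribution can be illustrated with Viterbi decoding: transition scores let the model reject tag sequences that are illegal in the BIO scheme even when per-token scores prefer them. A minimal numpy sketch with toy scores, not BioBBC's actual parameters:

```python
import numpy as np

# Toy BIO tag set; the point is that decoding picks the best *sequence*
# of tags, not the best tag per token independently.
TAGS = ["O", "B-Gene", "I-Gene"]

def viterbi(emissions, transitions):
    """emissions: (T, K) per-token tag scores (e.g., from a BiLSTM);
    transitions: (K, K) score of moving from tag i to tag j."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Token 2 slightly prefers "O" on emissions alone, but the transitions
# (O -> I-Gene is illegal in BIO) steer decoding to B-Gene, I-Gene.
emissions = np.array([[0.0, 2.0, 0.0],
                      [1.0, 0.0, 0.5]])
transitions = np.array([[0.0, 0.0, -10.0],
                        [0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0]])
best = [TAGS[k] for k in viterbi(emissions, transitions)]
```

Greedy per-token argmax would tag the second token "O"; sequence-level decoding recovers the two-token entity.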


Subjects
Language, Semantics, Natural Language Processing, Benchmarking, Speech
10.
Ann Plast Surg ; 92(4S Suppl 2): S101-S104, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38556656

ABSTRACT

BACKGROUND: Pharyngeal flap (PF) surgery is effective at improving velopharyngeal sufficiency, but historical literature shows a concerning prevalence rate of obstructive sleep apnea (OSA), reported as high as 20%. Our institution has developed a protocol to minimize risk of postoperative obstructive complications and increase safety of PF surgery. We hypothesize that (1) preoperative staged removal of significant adenotonsillar tissue along with (2) multiview videofluoroscopy to guide patient-specific surgical approach via appropriately sized PFs can result in excellent speech outcomes while limiting occurrence of OSA. METHODS: This was a retrospective chart review of all patients with velopharyngeal insufficiency (VPI) (aged 2-20 years) seen at the University of Rochester from 2015 to 2022 undergoing PF surgery to correct VPI. Nasopharyngoscopy was used for surgical planning and airway evaluation. Patients with tonsillar and adenoid hypertrophy underwent staged adenotonsillectomy at least 2 months before PF. Multiview videofluoroscopy was used to identify anatomic causes of VPI and to determine PF width. Patients underwent polysomnography and speech evaluation before and at least 6 months after PF surgery. RESULTS: Forty-one children aged 8.5 ± 4.1 years (range, 4 to 18 years) who underwent posterior PF surgery for VPI were identified. This included 10 patients with 22q11.2 deletion and 4 patients with Pierre Robin sequence. Thirty-nine patients had both pre- and postoperative speech data and underwent both a pre- and postoperative sleep study. Polysomnography showed no significant difference in obstructive apnea-hypopnea index after posterior PF surgery (obstructive apnea-hypopnea index preop, 1.3 ± 1.2 events per hour; postop, 1.7 ± 2.1 events per hour; P = 0.111). Significant improvements in speech outcome were seen in patients who underwent PF (modified Pittsburgh score preop, 11.52 ± 1.37; postop, 1.09 ± 2.35; P < 0.05). 
CONCLUSIONS: Use of preoperative staged adenotonsillectomy as well as patient-specific PF dimensions results in effective resolution of VPI and a low risk of OSA.


Subjects
Obstructive Sleep Apnea, Velopharyngeal Insufficiency, Child, Humans, Speech, Retrospective Studies, Critical Pathways, Pharynx/surgery, Velopharyngeal Insufficiency/surgery, Velopharyngeal Insufficiency/complications, Obstructive Sleep Apnea/etiology, Postoperative Complications/epidemiology, Treatment Outcome
11.
Sci Rep ; 14(1): 5515, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38448417

ABSTRACT

Heterogeneity in speech under stress has been a recurring issue in stress research, potentially due to varied stress induction paradigms. This study investigated speech features in semi-guided speech following two distinct psychosocial stress paradigms (Cyberball and MIST) and their respective control conditions. Only negative affect increased during Cyberball, while self-reported stress, skin conductance response rate, and negative affect increased during MIST. Fundamental frequency (F0), speech rate, and jitter significantly changed during MIST, but not Cyberball; HNR and shimmer showed no expected changes. The results indicate that the observed speech features are robust in semi-guided speech and sensitive to stressors that elicit additional physiological stress responses, not solely changes in negative affect. These differences between stressors may explain the heterogeneity in the literature. Our findings support the potential of speech as a stress level biomarker, especially when stress elicits physiological reactions, similar to other biomarkers. This highlights its promise as a tool for measuring stress in everyday settings, considering its affordability, non-intrusiveness, and ease of collection. Future research should test the robustness and specificity of these results in naturalistic settings, such as freely spoken speech and noisy environments, while exploring and validating a broader range of informative speech features in the context of stress.
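Two of the features named above, fundamental frequency and local jitter, can be computed directly from glottal cycle durations. A minimal sketch with illustrative period values; real pipelines first extract the periods from the waveform:

```python
import numpy as np

def mean_f0(periods):
    """Mean fundamental frequency (Hz) from glottal period durations (s)."""
    return 1.0 / np.mean(periods)

def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    periods, divided by the mean period (often reported in percent)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# Hypothetical cycle lengths around 10 ms (a ~100 Hz voice)
periods = [0.0100, 0.0102, 0.0098, 0.0100]
```

Rising jitter reflects increased cycle-to-cycle instability of vocal-fold vibration, one of the stress-sensitive changes the study reports.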


Subjects
Acoustics, Speech, Humans, Physiological Stress, Self Report
12.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474965

ABSTRACT

Deep learning has driven breakthroughs in emotion recognition in many fields, especially speech emotion recognition (SER). As an important part of SER, extraction of the most relevant acoustic features has always attracted the attention of researchers. To address the problem that emotional information in speech signals is dispersed and that local and global information is not comprehensively integrated, this paper presents a network model based on a gated recurrent unit (GRU) and multi-head attention. We evaluate the proposed model on the IEMOCAP and Emo-DB corpora. The experimental results show that the network model based on Bi-GRU and multi-head attention is significantly better than traditional network models on multiple evaluation metrics. We also apply the model to a speech sentiment analysis task; on the CH-SIMS and MOSI datasets, the model shows excellent generalization performance.
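The multi-head attention component mentioned above can be sketched in a few lines of numpy: each head computes scaled dot-product attention over all frames, which is what lets the model pool local and global context. The dimensions and random weights below are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, n_heads):
    """x: (T, d) frame-level acoustic features; each of the n_heads
    heads attends over all T frames with its own projections."""
    T, d = x.shape
    dh = d // n_heads
    def split(w):  # (T, d) -> (heads, T, dh)
        return (x @ w).reshape(T, n_heads, dh).transpose(1, 0, 2)
    q, k, v = split(wq), split(wk), split(wv)
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    out = (att @ v).transpose(1, 0, 2).reshape(T, d)
    return out @ wo, att

rng = np.random.default_rng(0)
T, d, H = 6, 8, 2
x = rng.standard_normal((T, d))
wq, wk, wv, wo = (rng.standard_normal((d, d)) for _ in range(4))
y, att = multi_head_self_attention(x, wq, wk, wv, wo, H)
```

In an SER model these attention outputs would typically sit on top of Bi-GRU states rather than raw features.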


Subjects
Perception, Speech, Acoustics, Emotions, Recognition (Psychology)
13.
Sensors (Basel) ; 24(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38475034

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative disorder characterized by a range of motor and non-motor symptoms. One of the notable non-motor symptoms of PD is the presence of vocal disorders, attributed to the underlying pathophysiological changes in the neural control of the laryngeal and vocal tract musculature. From this perspective, the integration of machine learning (ML) techniques in the analysis of speech signals has significantly contributed to the detection and diagnosis of PD. In particular, Mel-Frequency Cepstral Coefficients (MFCCs) and Gammatone Frequency Cepstral Coefficients (GTCCs) are feature extraction techniques commonly used in speech and audio signal processing that could exhibit great potential for vocal disorder identification. This study presents a novel approach to the early detection of PD through ML applied to speech analysis, leveraging both MFCCs and GTCCs. The recordings contained in the Mobile Device Voice Recordings at King's College London (MDVR-KCL) dataset were used. These recordings were collected from healthy individuals and PD patients while they read a passage and during a spontaneous conversation on the phone. The speech data from the spontaneous dialogue task were processed through speaker diarization, a technique that partitions an audio stream into homogeneous segments according to speaker identity. ML applied to MFCCs and GTCCs allowed us to classify PD patients with a test accuracy of 92.3%. This research further demonstrates the potential of mobile phones as a non-invasive, cost-effective tool for the early detection of PD, significantly improving patient prognosis and quality of life.


Subjects
Parkinson Disease, Speech, Humans, Parkinson Disease/diagnosis, Quality of Life, Machine Learning, Laryngeal Muscles
14.
Sensors (Basel) ; 24(5)2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38475158

ABSTRACT

Since the advent of modern computing, researchers have striven to make the human-computer interface (HCI) as seamless as possible. Progress has been made on various fronts, e.g., the desktop metaphor (interface design) and natural language processing (input). One area receiving attention recently is voice activation and its corollary, computer-generated speech. Despite decades of research and development, most computer-generated voices remain easily identifiable as non-human. Prosody in speech has two primary components, intonation and rhythm, both often lacking in computer-generated voices. This research aims to enhance computer-generated text-to-speech algorithms by incorporating melodic and prosodic elements of human speech. This study explores a novel approach to adding prosody by using machine learning, specifically an LSTM neural network, to add paralinguistic elements to a recorded or generated voice. The aims are to increase the realism of computer-generated text-to-speech algorithms, to enhance electronic reading applications, and to improve artificial voices for those who need artificial assistance to speak. A computer able to convey meaning in a spoken audible announcement will also improve human-computer interactions. Applications of such an algorithm may include improving high-definition audio codecs for telephony, restoring old recordings, and lowering barriers to the utilization of computing. This research deployed a prototype modular platform for digital speech improvement, analyzing and generalizing algorithms into a modular system through laboratory experiments to optimize combinations and performance in edge cases. The results were encouraging, with the LSTM-based encoder able to produce realistic speech. Further work will involve optimizing the algorithm and comparing its performance against other approaches.


Subjects
Speech Perception, Speech, Speech/physiology, Speech Perception/physiology, Computers, Machine Learning
15.
Cell Mol Life Sci ; 81(1): 129, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38472514

ABSTRACT

Recent work putatively linked a rare genetic variant of the chaperone Resistant to Inhibitors of acetylcholinesterase (RIC3) (NM_024557.4:c.262G>A, NP_078833.3:p.G88R) to a unique ability to speak backwards, a language skill that is associated with exceptional working memory capacity. RIC3 is important for the folding, maturation, and functional expression of α7 nicotinic acetylcholine receptors (nAChRs). We compared and contrasted the effects of RIC3 G88R on assembly, cell surface expression, and function of human α7 receptors, using fluorescent protein-tagged α7 nAChRs and Förster resonance energy transfer (FRET) microscopy imaging in combination with functional assays and 125I-α-bungarotoxin binding. As expected, the wild-type RIC3 protein was found to increase both cell surface and functional expression of α7 receptors. In contrast, the variant form of RIC3 decreased both. FRET analysis showed that RIC3 G88R increased the interactions between RIC3 and α7 protein in the endoplasmic reticulum. These results provide interesting and novel data showing that a RIC3 variant alters the interaction of RIC3 and α7, which translates to decreased cell surface and functional expression of α7 nAChRs.


Subjects
Nicotinic Receptors, Humans, Acetylcholinesterase/metabolism, alpha7 Nicotinic Acetylcholine Receptor/metabolism, Cell Membrane/metabolism, Intracellular Signaling Peptides and Proteins/metabolism, Nicotinic Receptors/genetics, Speech
16.
J Int Adv Otol ; 20(1): 62-68, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38454291

ABSTRACT

BACKGROUND: Neuroanatomical evidence suggests that behavioral speech-in-noise (SiN) perception and the underlying cortical structural network are altered by aging, and these aging-induced changes could be initiated during middle age. However, the mechanism behind the relationship between auditory performance and neural substrates of speech perception in middle-aged individuals remains unclear. In this study, we measured the structural volumes of selected neuroanatomical regions involved in speech and hearing processing to establish their association with speech perception ability in middle-aged adults. METHODS: Sentence perception in quiet and noisy conditions was behaviorally measured in 2 different age groups: young (20-39 years old) and middle-aged (40-59 years old) adults. Anatomical magnetic resonance images were taken to assess the gray matter volume of specific parcellated brain areas associated with speech perception. The relationships between these and behavioral auditory performance with age were determined. RESULTS: The middle-aged adults showed poorer speech perception in both quiet and noisy conditions than the young adults. Neuroanatomical data revealed that the normalized gray matter volume in the left superior temporal gyrus, which is closely related to acoustic and phonological processing, is associated with behavioral SiN perception in the middle-aged group. In addition, the normalized gray matter volumes in multiple cortical areas seem to decrease with age. CONCLUSION: The results indicate that SiN perception in middle-aged adults is closely related to the brain region responsible for lower-level speech processing, which involves the detection and phonemic representation of speech. Nonetheless, the higher-order cortex may also contribute to age-induced changes in auditory performance.


Subjects
Gray Matter, Speech Perception, Middle Aged, Young Adult, Humans, Adult, Gray Matter/diagnostic imaging, Speech, Noise, Hearing, Temporal Lobe/diagnostic imaging
17.
BMC Public Health ; 24(1): 732, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454406

ABSTRACT

BACKGROUND: This study examined the relationship between speech-in-noise recognition and incident/recurrent falls due to balance problems ten years later (RQ-1); the 10-year change in speech-in-noise recognition and falls (RQ-2a), as well as the role of dizziness in this relationship (RQ-2b). The association between hearing aid use and falls was also examined (RQ-3). METHODS: Data were collected from the Netherlands Longitudinal Study on Hearing between 2006 and December 2022. Participants completed an online survey and digits-in-noise test every five years. For this study, data were divided into two 10-year follow-up time intervals: T0 (baseline) to T2 (10-year follow-up), and T1 (5 years) to T3 (15 years). For all RQs, participants aged ≥ 40 years at baseline, without congenital hearing loss, and not using a CI were eligible (n = 592). Additionally, for RQ-3, participants with a speech reception threshold in noise (SRTn) ≥ -5.5 dB signal-to-noise ratio were included (n = 422). Analyses used survey variables on hearing, dizziness, falls due to balance problems, chronic health conditions, and psychosocial health. Logistic regressions using Generalized Estimating Equations were conducted to assess all RQs. RESULTS: Among individuals with obesity, those with poor baseline SRTn had higher odds of incident falls ten years later (odds ratio (OR): 14.7, 95% confidence interval (CI) [2.12, 103]). A 10-year worsening of SRTn was significantly associated with higher odds of recurrent (OR: 2.20, 95% CI [1.03, 4.71]) but not incident falls. No interaction was found between dizziness and change in SRTn. Hearing aid use (no use/< 2 years of use vs. ≥ 2 years) was not significantly associated with either incident or recurrent falls. Although there was a significant interaction with sex for this association, the effect of hearing aid use on incident/recurrent falls was not statistically significant among either males or females.
CONCLUSIONS: A longitudinal association between the deterioration in SRTn and recurrent falls due to balance problems after 10 years was confirmed in this study. This result stresses the importance of identifying declines in hearing earlier and justifies including hearing ability assessments within fall risk prevention programs. Mixed results of hearing aid use on fall risk warrant further investigation into the temporality of this association and possible differences between men and women.


Subjects
Dizziness, Speech Perception, Male, Humans, Female, Longitudinal Studies, Dizziness/epidemiology, Dizziness/etiology, Speech, Cohort Studies
18.
J Acoust Soc Am ; 155(3): 1895-1908, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38456732

ABSTRACT

Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have helped over a million people restore access to auditory feedback, which significantly improves speech production. However, there is substantial variability in outcomes. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and make adjustments to their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested can detect small, within-channel differences in their self-produced vowels, and they can utilize this auditory feedback towards speech adaptation. An acoustic hearing control group showed better sensitivity to the shifts in vowels, even in CI-simulated speech, and elicited more robust speech adaptation behavior than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Auditory Perception, Speech
19.
Sci Rep ; 14(1): 5108, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429404

ABSTRACT

Self-agency is the awareness of being the agent of one's own thoughts and actions. Self-agency is essential for interacting with the outside world (reality-monitoring). The medial prefrontal cortex (mPFC) is thought to be one neural correlate of self-agency. We investigated whether mPFC activity can causally modulate self-agency on two different tasks of speech-monitoring and reality-monitoring. The experience of self-agency is thought to result from making reliable predictions about the expected outcomes of one's own actions. This self-prediction ability is necessary for the encoding and memory retrieval of one's own thoughts during reality-monitoring to enable accurate judgments of self-agency. This self-prediction ability is also necessary for speech-monitoring, where speakers consistently compare auditory feedback (what they hear themselves say) with what they expect to hear while speaking. In this study, 30 healthy participants were assigned to either 10 Hz repetitive transcranial magnetic stimulation (rTMS) to enhance mPFC excitability (N = 15) or 10 Hz rTMS targeting a distal temporoparietal site (N = 15). High-frequency rTMS to mPFC enhanced self-predictions during speech-monitoring that predicted improved self-agency judgments during reality-monitoring. This is the first study to provide robust evidence that the mPFC plays a causal role in self-agency, which results from the fundamental ability to improve self-predictions across two different tasks.


Subjects
Memory, Speech, Humans, Memory/physiology, Transcranial Magnetic Stimulation/methods, Prefrontal Cortex/physiology, Judgment
20.
Otol Neurotol ; 45(4): 386-391, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38437818

ABSTRACT

OBJECTIVE: To report speech recognition outcomes and processor use based on timing of cochlear implant (CI) activation. STUDY DESIGN: Retrospective cohort. SETTING: Tertiary referral center. PATIENTS: A total of 604 adult CI recipients from October 2011 to March 2022, stratified by timing of CI activation (group 1: ≤10 d, n = 47; group 2: >10 d, n = 557). MAIN OUTCOME MEASURES: Average daily processor use; Consonant-Nucleus-Consonant (CNC) and Arizona Biomedical (AzBio) in quiet at 1-, 3-, 6-, and 12-month visits; time to peak performance. RESULTS: The groups did not differ in sex ( p = 0.887), age at CI ( p = 0.109), preoperative CNC ( p = 0.070), or preoperative AzBio in quiet ( p = 0.113). Group 1 had higher median daily processor use than group 2 at the 1-month visit (12.3 versus 10.7 h/d, p = 0.017), with no significant differences at 3, 6, and 12 months. The early activation group had superior median CNC performance at 3 months (56% versus 46%, p = 0.007) and 12 months (60% versus 52%, p = 0.044). Similarly, the early activation group had superior median AzBio in quiet performance at 3 months (72% versus 59%, p = 0.008) and 12 months (75% versus 68%, p = 0.049). Both groups were equivalent in time to peak performance for CNC and AzBio. Earlier CI activation was significantly correlated with higher average daily processor use at all follow-up intervals. CONCLUSION: CI activation within 10 days of surgery is associated with increased early device usage and superior speech recognition at both early and late follow-up visits. Timing of activation and device usage are modifiable factors that can help optimize postoperative outcomes in the CI population.
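The group comparisons above (median CNC and AzBio scores with p-values) are typical of nonparametric rank tests on non-normally distributed speech scores. As a minimal sketch, using hypothetical score data (the abstract reports only summary statistics, so these values are invented for illustration), the Mann-Whitney U statistic behind such a median comparison can be computed as:

```python
from statistics import median

def mann_whitney_u(x, y):
    """U statistic for sample x vs. y, with average ranks assigned to ties."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of rank positions i+1 .. j
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Hypothetical CNC word scores (%) at 3 months -- NOT the study's raw data
early    = [56, 60, 58, 62, 54]   # activation within 10 days
standard = [46, 52, 50, 44, 48]   # activation after 10 days

print(median(early), median(standard))  # -> 58 48
print(mann_whitney_u(early, standard))  # -> 25.0 (complete group separation)
```

In practice a significance level would be obtained from the U distribution (or a library routine); this sketch shows only the rank-based statistic that underlies the reported median contrasts.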


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Humans, Retrospective Studies, Speech Perception/physiology, Speech, Treatment Outcome