Results 1 - 20 of 15,442
1.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1535343

ABSTRACT

Introduction: Amyotrophic lateral sclerosis (ALS) is the most common form of degenerative motor neuron disease in adulthood and is considered a terminal disease. For this reason, the speech-language pathologist's practice must respect basic bioethical principles to guarantee adequate care. Objective: To identify the bioethical considerations related to the management and study of people with ALS and, from these, to outline an approach to speech-language pathology practice. Method: A bibliographic search was carried out in the PubMed, Scopus, and SciELO databases. Articles published from 2000 to June 2023 were filtered, and those that addressed a bioethical component in the ALS population were selected. Results: Aspects related to the use of informed consent and to shared decision-making stood out as essential elements for supporting people's autonomy. Conclusion: Proper communication and shared decision-making are key to respecting people's autonomy. In turn, the standardization of procedures through clinical research will contribute to compliance with the bioethical principles of beneficence and non-maleficence, which are essential for professional practice.

2.
Article in English | LILACS-Express | LILACS | ID: biblio-1535347

ABSTRACT

In a context where different protocols for recommended practices in clinical voice assessment exist, while there are gaps in the literature regarding the evidence base supporting assessment procedures and measures, clinicians from regions that lack a strong community with expertise in clinical and scientific voice practice can struggle to confidently develop their voice assessment practices. In an effort to improve voice assessment practices and strengthen professional identity among speech-language pathologists in Quebec, Canada, a community of practice (CoP) was established with the aim of promoting knowledge sharing, implementing change in clinical practice, and improving professional identity. Thirty-nine participants took part in the CoP activities conducted over a four-month period, including virtual meetings and in-person workshops. Participants had a high rate of attendance (> 74% participation rate in virtual meetings), were highly satisfied with their participation, and intended to remain involved after the project's end. Statistically significant changes in voice assessment practices were observed post-CoP regarding the probability of performing assessments (p < .001) and the perceived importance of assessment for evaluative purposes (p < .001), as well as improvements in assessment-specific confidence, specifically for the procedure of auditory-perceptual assessment (p < .001) and the purpose of aerodynamic assessment (p = .05). Moreover, there was an increase in professional identity post-CoP (p < .001), and participants felt they had learned a great deal. The present study highlights the need to involve SLPs in future research to identify assessments that are relevant to the specific evaluative objectives of SLPs working with voice, and suggests that CoPs are an efficient tool for that purpose.

3.
Front Psychol ; 15: 1305134, 2024.
Article in English | MEDLINE | ID: mdl-38721314

ABSTRACT

The article reports the results of a study on the perception of reduced forms by non-native users of English. It tests three hypotheses: (i) reduced forms with context are recognized more accurately and faster than reduced forms without context; (ii) gradient reduction is perceived less robustly than the categorical one; and (iii) subjects with musical background perceive reduced forms better than those without. An E-Prime study on 102 Polish learners of English was implemented, comparing participants' accuracy and reaction times with a control group of 14 native speakers. The study was corpus-based and used 287 reduced forms from a corpus of Lancashire. The results indicate that (i) lexical context and phone density significantly affect perception, (ii) the category of reduction process (gradient or categorical) is irrelevant, and (iii) musical background only partially impacts non-native perception.

4.
J Commun Disord ; 109: 106428, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38744198

ABSTRACT

PURPOSE: This study examines whether there are differences between speakers with dysarthria, speakers with apraxia of speech and healthy speakers in spectral acoustic measures during production of the central-peninsular Spanish alveolar sibilant fricative /s/. METHOD: To this end, production of the sibilant was analyzed in 20 subjects with dysarthria, 8 with apraxia of speech and 28 healthy speakers. Participants produced 12 sV(C) words. The variables compared across groups were the fricative's spectral amplitude difference (AmpD) and its spectral moments at the temporal midpoint of the fricative. RESULTS: The results indicate that individuals with dysarthria can be distinguished from healthy speakers in terms of AmpD, standard deviation (SD), center of gravity (CoG) and skewness, the last two in the unrounded-vowel context, while no differences in kurtosis were detected. The apraxia of speech (AoS) group differed significantly from the healthy-speaker group in AmpD, SD, CoG and kurtosis, the first in the unrounded-vowel context and the latter two in rounded-vowel contexts. In addition, the AoS group showed significant differences with respect to the dysarthria group in AmpD, CoG and skewness. CONCLUSIONS: The differences found between the groups in the measures studied as a function of vowel context could provide insights into the distinctive manifestations of motor speech disorders, contributing to the differential diagnosis between apraxia and dysarthria in motor control processes.
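
For orientation, the spectral moments compared in this study (CoG, SD, skewness and kurtosis) can be computed from the normalized power spectrum of a windowed slice taken at the fricative midpoint. The sketch below is a minimal Python illustration under assumed choices (a mono recording, a 20 ms Hann window); the file name and windowing are placeholders, not the authors' exact procedure.

```python
# Sketch: spectral moments of a fricative segment (assumed settings, not the authors' pipeline).
import numpy as np
from scipy.io import wavfile

def spectral_moments(segment, sr):
    """Centre of gravity, SD, skewness and excess kurtosis of a power spectrum."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment)))) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sr)
    p = spectrum / spectrum.sum()                          # normalise to a distribution
    cog = np.sum(freqs * p)                                # 1st moment: centre of gravity
    sd = np.sqrt(np.sum(((freqs - cog) ** 2) * p))         # 2nd moment: spread
    skew = np.sum(((freqs - cog) ** 3) * p) / sd ** 3      # 3rd moment: asymmetry
    kurt = np.sum(((freqs - cog) ** 4) * p) / sd ** 4 - 3  # 4th moment: peakedness
    return cog, sd, skew, kurt

sr, audio = wavfile.read("s_token.wav")                    # placeholder mono recording of an sV(C) word
mid, win = len(audio) // 2, int(0.02 * sr)                 # 20 ms window at the temporal midpoint
segment = audio[mid - win // 2 : mid + win // 2].astype(float)
print(spectral_moments(segment, sr))
```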

5.
Diagnostics (Basel) ; 14(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732310

ABSTRACT

This study introduces a specialized Automatic Speech Recognition (ASR) system, leveraging the Whisper Large-v2 model, specifically adapted for radiological applications in the French language. The methodology focused on adapting the model to accurately transcribe medical terminology and diverse accents within the French language context, achieving a notable Word Error Rate (WER) of 17.121%. This research involved extensive data collection and preprocessing, utilizing a wide range of French medical audio content. The results demonstrate the system's effectiveness in transcribing complex radiological data, underscoring its potential to enhance medical documentation efficiency in French-speaking clinical settings. The discussion extends to the broader implications of this technology in healthcare, including its potential integration with electronic health records (EHRs) and its utility in medical education. This study also explores future research directions, such as tailoring ASR systems to specific medical specialties and languages. Overall, this research contributes significantly to the field of medical ASR systems, presenting a robust tool for radiological transcription in the French language and paving the way for advanced technology-enhanced healthcare solutions.
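
The abstract does not give implementation details, but the underlying workflow (transcribing French audio with Whisper Large-v2 and scoring a word error rate) can be sketched as follows. The audio file and reference transcript are placeholders, and the paper's domain-adaptation steps are not reproduced.

```python
# Sketch: French transcription with the openly available Whisper Large-v2 model plus WER scoring.
# File name and reference text are placeholders; the study's adaptation pipeline is not shown.
import whisper  # openai-whisper package
import jiwer

model = whisper.load_model("large-v2")
result = model.transcribe("radiology_dictation.wav", language="fr")  # placeholder audio file
hypothesis = result["text"]

reference = "Le scanner thoracique montre une opacité du lobe supérieur droit."  # placeholder reference
print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")
```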

6.
Arch Plast Surg ; 51(3): 275-283, 2024 May.
Article in English | MEDLINE | ID: mdl-38737847

ABSTRACT

Background: A shortage of speech and language therapists results in a lack of speech services. The aims of this study were to determine the effectiveness of a combined speech therapy model, using Level IV providers (general speech and language pathologists, GSLPs) and Level V providers (specific speech and language pathologists, SSLPs), in reducing the number of articulation errors and promoting the quality of life (QoL) of children with cleft palate with or without cleft lip (CP ± L). Methods: Fifteen children with CP ± L, aged 4 years 1 month to 10 years 9 months (median = 76 months; minimum:maximum = 49:129 months), were enrolled in this study. Pre- and post-assessment included an oral peripheral examination; articulation testing via the Articulation Screening Test and the Thai Universal Parameters of Speech Outcomes for People with Cleft Palate; a hearing evaluation; and the World Health Organization Quality of Life Brief Thai (WHOQOL-BRIEF-THAI) questionnaire for QoL. Speech therapy comprised a 3-day intensive speech camp run by an SSLP, five 30-minute speech therapy sessions delivered by a GSLP, and five 1-day follow-up speech camps run by an SSLP that provided four 45-minute speech therapy sessions for each child. Results: Post-treatment articulation testing revealed a statistically significant reduction in the number of articulation errors at the word, sentence, and screening levels (median difference [MD] = 3, 95% confidence interval [CI] = 2-5; MD = 6, 95% CI = 4.5-8; and MD = 2.25, 95% CI = 1.5-3, respectively) and an improvement in QoL. Conclusion: A speech task force combining a Level IV GSLP and a Level V SSLP could significantly reduce the number of articulation errors and promote QoL.

7.
J Thorac Dis ; 16(4): 2654-2667, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38738242

ABSTRACT

Background and Objective: Obstructive sleep apnea (OSA) is a common chronic disorder characterized by repeated breathing pauses during sleep caused by upper airway narrowing or collapse. The gold standard for OSA diagnosis is the polysomnography test, which is time consuming, expensive, and invasive. In recent years, more cost-effective approaches for OSA detection based on the predictive value of speech and snoring have emerged. In this paper, we offer a comprehensive summary of current research progress on the use of speech or snoring sounds for the automatic detection of OSA and discuss the key challenges that need to be overcome for future research into this novel approach. Methods: The PubMed, IEEE Xplore, and Web of Science databases were searched with related keywords. Literature published between 1989 and 2022 examining the potential of using speech or snoring sounds for automated OSA detection was reviewed. Key Content and Findings: Speech and snoring sounds contain a large amount of information about OSA, and they have been extensively studied for the automatic screening of OSA. By feeding features extracted from speech and snoring sounds into artificial intelligence models, clinicians can automatically screen for OSA. Features such as formants, linear prediction cepstral coefficients, and mel-frequency cepstral coefficients, together with artificial intelligence algorithms including support vector machines, Gaussian mixture models, and hidden Markov models, have been extensively studied for the detection of OSA. Conclusions: Given the significant advantages of noninvasive, low-cost, and contactless data collection, an automatic approach based on speech or snoring sounds seems to be a promising tool for the detection of OSA.
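
As a toy illustration of the screening pipeline summarised here (acoustic features fed to a machine-learning classifier), the sketch below extracts MFCC summaries from snoring clips and cross-validates a support vector machine. File names, labels and the feature set are placeholders, far simpler than those used in the reviewed studies.

```python
# Sketch: MFCC summaries from snoring clips fed to an SVM screener (placeholder data).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # simple clip-level summary

clips = ["snore_001.wav", "snore_002.wav", "snore_003.wav", "snore_004.wav"]  # placeholder files
labels = [1, 0, 1, 0]                                    # 1 = OSA, 0 = non-OSA (placeholder labels)
X = np.vstack([mfcc_features(c) for c in clips])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=2).mean())
```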

8.
Front Robot AI ; 11: 1362463, 2024.
Article in English | MEDLINE | ID: mdl-38726067

ABSTRACT

The condition for artificial agents to possess perceivable intentions can be regarded as their having resolved a form of the symbol grounding problem. Here, symbol grounding is considered achieved when the language used by the agent is endowed with some quantitative meaning extracted from the physical world. To achieve this type of symbol grounding, we adopt a method for characterizing robot gestures with quantitative meaning calculated from word-distributed representations constructed from a large corpus of text. In this method, a "size image" of a word is generated by defining an axis (index) that discriminates the "size" of the word in the word-distributed vector space. The generated size images are converted into gestures performed by a physical artificial agent (robot). The robot's gesture can be set to reflect the size of the word either in the amount of movement or in its posture. To examine the perception of communicative intention in the robot performing the gestures generated as described above, the authors examine human ratings of "naturalness" obtained through an online survey, yielding results that partially validate the proposed method. Based on the results, the authors argue for the possibility of developing advanced artifacts that achieve human-like symbol grounding.
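
The "size axis" idea can be illustrated with any pre-trained word embedding: define a direction that separates large from small referents and project a word's vector onto it. The anchor words, embedding model and scoring below are assumptions for illustration, not the authors' exact construction.

```python
# Sketch: a "size" index for a word as its projection onto a large-vs-small direction
# in an embedding space. Anchor words and the embedding model are illustrative assumptions.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # any pre-trained word embedding would do

def size_index(word):
    axis = vectors["huge"] - vectors["tiny"]    # direction intended to discriminate "size"
    axis /= np.linalg.norm(axis)
    return float(np.dot(vectors[word], axis))   # signed projection = crude "size image"

for w in ["elephant", "mountain", "ant", "pebble"]:
    print(w, round(size_index(w), 3))           # larger values should mean "bigger" words
```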

9.
Psychol Sci ; : 9567976241243004, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38728320

ABSTRACT

It is commonly assumed that inner speech (the experience of thought as occurring in a natural language) is a human universal. Recent evidence, however, suggests that the experience of inner speech in adults varies from near constant to nonexistent. We propose a name for a lack of the experience of inner speech, anendophasia, and report four studies examining some of its behavioral consequences. We found that adults who reported low levels of inner speech (N = 46) had lower performance on a verbal working memory task and more difficulty performing rhyme judgments compared with adults who reported high levels of inner speech (N = 47). Task-switching performance, previously linked to endogenous verbal cueing, and categorical effects on perceptual judgments were unrelated to differences in inner speech.

10.
Hear Res ; 447: 109023, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38733710

ABSTRACT

Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study which systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. When compared to the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response at the N1 latency range (90-150 ms) that was characterized by a decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed a stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex were significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually as compared to auditorily attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
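
For context, the theta-band (4-8 Hz) power reported here can be estimated from an EEG epoch with Welch's method; the sampling rate, epoch and channel selection below are placeholders rather than the study's analysis pipeline.

```python
# Sketch: theta-band (4-8 Hz) power of a single EEG epoch via Welch's method (placeholder data).
import numpy as np
from scipy.signal import welch

sr = 500                                        # Hz, assumed sampling rate
epoch = np.random.randn(sr)                     # placeholder 1 s epoch from an auditory-cortex channel
freqs, psd = welch(epoch, fs=sr, nperseg=sr // 2)
band = (freqs >= 4) & (freqs <= 8)
theta_power = np.trapz(psd[band], freqs[band])  # integrate the PSD over the theta band
print(f"Theta power: {theta_power:.4f}")
```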

11.
Brain Commun ; 6(3): fcae129, 2024.
Article in English | MEDLINE | ID: mdl-38707712

ABSTRACT

Stroke is the leading cause of long-term disability worldwide. Incurred brain damage can disrupt cognition, often with persisting deficits in language and executive capacities. Yet, despite their clinical relevance, the commonalities and differences between language versus executive control impairments remain under-specified. To fill this gap, we tailored a Bayesian hierarchical modelling solution in a largest-of-its-kind cohort (1080 patients with stroke) to deconvolve language and executive control with respect to the stroke topology. Cognitive function was assessed with a rich neuropsychological test battery including global cognitive function (tested with the Mini-Mental State Exam), language (assessed with a picture naming task), executive speech function (tested with verbal fluency tasks), executive control functions (Trail Making Test and Digit Symbol Coding Task), visuospatial functioning (Rey Complex Figure), as well as verbal learning and memory function (Soul Verbal Learning). Bayesian modelling predicted interindividual differences in eight cognitive outcome scores three months after stroke based on specific tissue lesion topologies. A multivariate factor analysis extracted four distinct cognitive factors that distinguish left- and right-hemispheric contributions to ischaemic tissue lesions. These factors were labelled according to the neuropsychological tests that had the strongest factor loadings: One factor delineated language and general cognitive performance and was mainly associated with damage to left-hemispheric brain regions in the frontal and temporal cortex. A factor for executive control summarized mental flexibility, task switching and visual-constructional abilities. This factor was strongly related to right-hemispheric brain damage of posterior regions in the occipital cortex. The interplay of language and executive control was reflected in two distinct factors that were labelled as executive speech functions and verbal memory. Impairments on both factors were mainly linked to left-hemispheric lesions. These findings shed light onto the causal implications of hemispheric specialization for cognition; and make steps towards subgroup-specific treatment protocols after stroke.
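
The factor-analytic step (reducing eight cognitive outcome scores to four latent factors) can be sketched with a standard factor analysis; the random placeholder matrix below stands in for the cohort's scores, and the authors' Bayesian hierarchical lesion model is not reproduced.

```python
# Sketch: extracting four latent factors from eight cognitive outcome scores.
# Random placeholder data stand in for the 1080-patient cohort; this is not the Bayesian model.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scores = rng.normal(size=(1080, 8))        # rows: patients, columns: eight cognitive outcomes
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(scores)         # per-patient factor scores
print(fa.components_.round(2))             # (4 x 8) loadings used to label each factor
```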

12.
Cogn Res Princ Implic ; 9(1): 29, 2024 05 12.
Article in English | MEDLINE | ID: mdl-38735013

ABSTRACT

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments to test the attention-capturing potential of the listener's own name in normal speech and time-compressed speech. In Experiment 1, 39 participants were tested with a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Subjects
Attention, Names, Speech Perception, Humans, Attention/physiology, Female, Male, Speech Perception/physiology, Adult, Young Adult, Speech/physiology, Reaction Time/physiology, Acoustic Stimulation
13.
Am J Primatol ; : e23637, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38741274

ABSTRACT

The phonetic potential of nonhuman primate vocal tracts has been the subject of considerable contention in recent literature. Here, the work of Philip Lieberman (1934-2022) is considered at length, and two research papers (both purported challenges to Lieberman's theoretical work) and a review of Lieberman's scientific legacy are critically examined. I argue that various aspects of Lieberman's research have been consistently misinterpreted in the literature. A paper by Fitch et al. overestimates the would-be "speech-ready" capacities of a rhesus macaque, and the data presented nonetheless support Lieberman's principal position: that nonhuman primates cannot articulate the full extent of human speech sounds. The suggestion that no vocal anatomical evolution was necessary for the evolution of human speech (as spoken by all normally developing humans) is not supported by phonetic or anatomical data. The second challenge, by Boë et al., attributes vowel-like qualities of baboon calls to articulatory capacities based on audio data; I argue that such "protovocalic" properties likely result from disparate articulatory maneuvers compared to human speakers. A review of Lieberman's scientific legacy by Boë et al. ascribes to Lieberman a view of speech evolution (which the authors term "laryngeal descent theory") that contradicts his writings. The present article documents a pattern of incorrect interpretations of Lieberman's theoretical work in recent literature. Finally, the apparent trend of vowel-like formant dispersions in the great ape vocalization literature is discussed with regard to Lieberman's theoretical work. The review concludes that the "Lieberman account" of primate vocal tract phonetic capacities remains supported by research: the ready articulation of fully human speech reflects species-unique anatomy.

14.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subjects
Cochlea, Speech Perception, Tinnitus, Humans, Cochlea/physiopathology, Tinnitus/physiopathology, Tinnitus/diagnosis, Animals, Speech Perception/physiology, Hyperacusis/physiopathology, Noise/adverse effects, Auditory Perception/physiology, Synapses/physiology, Noise-Induced Hearing Loss/physiopathology, Noise-Induced Hearing Loss/diagnosis, Loudness Perception
15.
Trends Hear ; 28: 23312165241246596, 2024.
Article in English | MEDLINE | ID: mdl-38738341

ABSTRACT

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
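
A temporal response function of the kind described here is, at its core, a regularized regression of a lagged, non-linearly pre-processed stimulus onto the EEG. The sketch below uses half-wave rectification and random placeholder signals; the auditory-nerve-model front end and the study's preprocessing are not reproduced.

```python
# Sketch: a subcortical TRF via ridge regression of a half-wave-rectified speech waveform
# onto EEG. Signals are random placeholders; the auditory-nerve-model variant is not shown.
import numpy as np

def trf_ridge(stimulus, eeg, n_lags, lam=1e2):
    """Estimate TRF weights over n_lags samples with ridge regularisation."""
    X = np.stack([np.roll(stimulus, k) for k in range(n_lags)], axis=1)
    X[:n_lags, :] = 0                                    # discard wrapped-around samples
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

fs = 4096                                                # Hz, assumed EEG sampling rate
speech = np.random.randn(10 * fs)                        # placeholder 10 s speech waveform
eeg = np.random.randn(10 * fs)                           # placeholder subcortical EEG channel
rectified = np.maximum(speech, 0)                        # simple stimulus rectification
trf = trf_ridge(rectified, eeg, n_lags=int(0.02 * fs))   # lags 0-20 ms cover wave V
print(trf.shape)                                         # one weight per lag
```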


Subjects
Acoustic Stimulation, Electroencephalography, Auditory Brainstem Evoked Potentials, Speech Perception, Humans, Auditory Brainstem Evoked Potentials/physiology, Male, Female, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Young Adult, Auditory Threshold/physiology, Time Factors, Cochlear Nerve/physiology, Healthy Volunteers
16.
Cogn Emot ; : 1-10, 2024 May 19.
Article in English | MEDLINE | ID: mdl-38764186

ABSTRACT

Older adults process emotional speech differently than young adults, relying less on prosody (tone) relative to semantics (words). This study aimed to elucidate the mechanisms underlying these age-related differences via an emotional speech-in-noise test. A sample of 51 young and 47 older adults rated spoken sentences with emotional content on both prosody and semantics, presented against a background of wideband speech-spectrum noise (sensory interference) or of multi-talker babble (sensory/cognitive interference). The presence of wideband noise eliminated age-related differences in semantics but not in prosody when processing emotional speech. Conversely, the presence of babble eliminated age-related differences across all measures. The results suggest that both sensory and cognitive-linguistic factors contribute to age-related changes in emotional speech processing. Because real-world conditions typically involve noisy backgrounds, our results highlight the importance of testing under such conditions.

17.
Int J Audiol ; : 1-9, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38767343

ABSTRACT

OBJECTIVE: To investigate the benefit of remote-microphone (RM) systems for adults with sensory hearing loss. DESIGN: Speech recognition in quiet and in background noise was assessed. Participants with hearing loss underwent testing in two device conditions: hearing aids (HAs) alone and HAs with a RM. Normal hearing participants completed testing in the unaided condition. Predictive speech intelligibility modelling using the Hearing-Aid Speech Perception Index (HASPI) was also performed on recordings of HA processed test material. STUDY SAMPLE: Twenty adults with sensory hearing loss and 10 adults with normal hearing participated. RESULTS: Speech recognition for participants with hearing loss improved significantly when using the RM compared to HAs alone fit to Phonak's proprietary prescription. Largest benefits were observed in the most challenging conditions. At the lowest signal-to-noise ratio, participants with hearing loss using a RM outperformed normal hearing listeners. Predicted intelligibility scores produced by HASPI were strongly correlated to behavioural results. CONCLUSIONS: Adults using HAs who have significant difficulties understanding speech in noise will experience considerable benefits with the addition of a RM. Improvements in speech recognition were observed for all participants using RM systems, including those with relatively mild hearing loss. HASPI modelling reliably predicted the speech perception difficulties experienced.

18.
Article in English | MEDLINE | ID: mdl-38767398

ABSTRACT

This study had two research objectives. The first was to examine age-related differences in the fluency of speech output, as prior research contains conflicting findings concerning whether older adults produce more disfluency than younger adults. The second was to examine cognitive individual differences and their relationship with the production of disfluency. One hundred and fifty-four adults completed a story re-telling task and a battery of cognitive measures. Results showed that younger adults produced more um's and fewer repetitions. For individual differences, results showed that inhibition and set shifting were related to the production of repetitions, and that inhibition and working memory were related to uh production. Our results help clarify the mixed findings with respect to age and disfluency production. The individual-differences findings speak to theoretical arguments about disfluent speech in aging (e.g., the Inhibition Deficit Hypothesis) and also shed light on the role of executive functions in models of language production.

19.
Int J Audiol ; : 1-8, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38767554

ABSTRACT

OBJECTIVE: To investigate speech recognition in school-age children with early-childhood otitis media (OM) in conditions with noise or speech maskers with or without interaural differences. To also investigate the effects of three otologic history factors. DESIGN: Using headphone presentation, speech recognition thresholds (SRTs) were measured with simple sentences. As maskers, stationary speech-shaped noise (SSN) or two-talker running speech (TTS) were used. The stimuli were presented in a monaural and binaural condition (SSN) or a co-located and spatially separated condition (TTS). Based on the available medical records, overall OM duration, OM onset age, and time since the last OM episode were estimated. STUDY SAMPLE: 6-13-year-olds with a history of recurrent OM (N = 42) or without any ear diseases (N = 20) with normal tympanograms and audiograms at the time of testing. RESULTS: Mixed-model regression analyses that controlled for age showed poorer SRTs for the OM group (Δ-value = 0.84 dB, p = 0.009). These appeared driven by the spatially separated, binaural, and monaural conditions. The OM group showed large inter-individual differences, which were unrelated to the otologic history factors. CONCLUSIONS: Early-childhood OM can affect speech recognition in different acoustic conditions. The effects of the otologic history warrant further investigation.
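
The age-controlled group comparison described here corresponds to a mixed-effects regression with a random intercept per child; the sketch below uses placeholder data and column names, not the study's dataset or exact model specification.

```python
# Sketch: mixed-model regression of SRT on group, condition and age with a random
# intercept per child. Data and column names are placeholders, not the study's.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "child_id":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":     ["OM"] * 6 + ["control"] * 6,
    "condition": ["colocated", "separated"] * 6,
    "age_years": [8, 8, 10, 10, 12, 12, 9, 9, 11, 11, 13, 13],
    "srt_db":    [-2.8, -0.9, -2.1, -0.4, -3.0, -1.1, -3.6, -1.8, -3.2, -1.5, -3.9, -2.2],
})
model = smf.mixedlm("srt_db ~ group + condition + age_years", data, groups=data["child_id"])
print(model.fit().summary())
```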

20.
Article in English | MEDLINE | ID: mdl-38738912

ABSTRACT

OBJECTIVE: To examine the clinical characteristics and auditory performance of patients with CHARGE syndrome following cochlear implantation (CI), as well as the prognostic factors affecting auditory outcomes. STUDY DESIGN: Retrospective cohort. SETTING: Tertiary academic center. METHODS: A retrospective chart review was performed of patients with CHARGE syndrome who underwent CI from 2007 to 2022. The category of auditory performance (CAP) score was used to assess CI outcomes, and factors that may affect speech outcomes were also evaluated. RESULTS: In 14 children with CHARGE syndrome, 22 CIs were performed, 6 unilaterally and 8 bilaterally. The mean age at CI was 25.9 months (range: 10-62). All patients had ear abnormalities and developmental delays, and cochlear nerve deficiency (CND) was present in all ears. At the last follow-up (mean: 49.6 months), the mean CAP score had improved significantly compared with the preoperative measure (from 0.36 ± 0.81 to 3.21 ± 1.70, P = .001), with 6 patients (42.9%) achieving a CAP score of 4 points or higher. However, the final CAP score and the change in CAP score were similar between the unilateral and bilateral CI groups. Factors including age, coloboma, and CND did not significantly affect speech outcomes (all P > .05). CONCLUSION: Even though CHARGE syndrome features challenging anomalies, CI can be performed safely and can contribute effectively to significant speech improvement. Patients with CHARGE syndrome should be given the opportunity to undergo CI to maximize their audiological progress.
