Results 1 - 20 of 812
1.
J Neurosci ; 44(28)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38839302

ABSTRACT

Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we show that the human brain achieves this function through a mechanism termed entrainment. Through entrainment, electrophysiological brain activity maintains and continues contextual rhythms beyond their offset. Our experiment combined exposure to repetitive prosodic contours with the subsequent presentation of visual sentences that either matched or mismatched the duration of the preceding contour. During exposure to prosodic contours, we observed MEG coherence with the contours, source-localized to right-hemispheric auditory areas. During the processing of the visual targets, activity at the frequency of the preceding contour was still detectable in the MEG, yet its sources shifted to the (left) frontal cortex, in line with a functional inheritance of the rhythmic acoustic context for prediction. Strikingly, when the target sentence was shorter than expected from the preceding contour, an omission response appeared in the evoked response. We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. More generally, acoustic rhythms appear to allow language to recruit the brain's electrophysiological mechanisms of temporal prediction.
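Coherence between MEG activity and a stimulus contour, as used in the abstract above, quantifies how consistently brain activity aligns in phase with an external rhythm. The sketch below illustrates the underlying idea with a simplified phase-locking measure on made-up phase series; it is not the study's MEG pipeline, and all signal parameters are invented.

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """Mean resultant length of the phase difference between two signals.
    Values near 1 mean the signals keep a consistent phase relationship
    (entrainment-like coupling); values near 0 mean no consistent relation."""
    diffs = [cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

# Toy phases: a response that tracks a 1 Hz stimulus rhythm with a fixed
# lag is highly phase-locked; a response whose lag drifts is not.
n, fs = 200, 100.0
stim = [2 * math.pi * 1.0 * t / fs for t in range(n)]  # 1 Hz phase ramp
locked = [p + 0.5 for p in stim]                       # constant phase lag
drift = [p + 0.02 * t for t, p in enumerate(stim)]     # drifting phase lag

plv_locked = phase_locking_value(stim, locked)  # close to 1.0
plv_drift = phase_locking_value(stim, drift)    # substantially lower
```

A full coherence analysis additionally weights by amplitude and is computed per frequency bin; this phase-locking value keeps only the phase-consistency part of that computation.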


Subjects
Magnetoencephalography, Speech Perception, Humans, Male, Female, Adult, Speech Perception/physiology, Young Adult, Language, Comprehension/physiology, Acoustic Stimulation/methods, Speech/physiology, Photic Stimulation/methods
2.
J Neurosci ; 43(48): 8189-8200, 2023 11 29.
Article in English | MEDLINE | ID: mdl-37793909

ABSTRACT

Spontaneous speech is produced in chunks called intonation units (IUs). IUs are defined by a set of prosodic cues and presumably occur in all human languages. Recent work has shown that across different grammatical and sociocultural conditions IUs form rhythms of ∼1 unit per second. Linguistic theory suggests that IUs pace the flow of information in the discourse. As a result, IUs provide a promising and hitherto unexplored theoretical framework for studying the neural mechanisms of communication. In this article, we identify a neural response unique to the boundary defined by the IU. We measured the EEG of human participants (of either sex), who listened to different speakers recounting an emotional life event. We analyzed the speech stimuli linguistically and modeled the EEG response at word offset using a GLM approach. We find that the EEG response to IU-final words differs from the response to IU-nonfinal words even when equating acoustic boundary strength. Finally, we relate our findings to the body of research on rhythmic brain mechanisms in speech processing. We study the unique contribution of IUs and acoustic boundary strength in predicting delta-band EEG. This analysis suggests that IU-related neural activity, which is tightly linked to the classic Closure Positive Shift (CPS), could be a time-locked component that captures the previously characterized delta-band neural speech tracking. SIGNIFICANCE STATEMENT: Linguistic communication is central to human experience, and its neural underpinnings have been a topic of much research in recent years. Neuroscientific research has benefited from studying human behavior in naturalistic settings, an endeavor that requires explicit models of complex behavior. Usage-based linguistic theory suggests that spoken language is prosodically structured in intonation units. We reveal that the neural system is attuned to intonation units by explicitly modeling their impact on the EEG response beyond mere acoustics. To our knowledge, this is the first time this has been demonstrated in spontaneous speech under naturalistic conditions and within a theoretical framework that connects the prosodic chunking of speech, on the one hand, with the flow of information during communication, on the other.
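The GLM approach described above amounts to regressing the EEG response at each word offset on linguistic and acoustic predictors, so that the IU-final effect can be estimated while holding acoustic boundary strength constant. The sketch below shows the bare logic with ordinary least squares on invented data; it is not the authors' pipeline, and the predictor values and amplitudes are made up.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination. Each row of X is one word offset:
    [intercept, predictor1, predictor2, ...]."""
    k = len(X[0])
    # Build the normal equations
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Invented word-offset data: [intercept, is_IU_final, acoustic_boundary_strength]
X = [[1, 1, 0.9], [1, 1, 0.4], [1, 0, 0.8], [1, 0, 0.2], [1, 1, 0.6], [1, 0, 0.5]]
y = [2.1, 1.6, 0.9, 0.3, 1.8, 0.6]  # simulated delta-band EEG amplitudes
beta = ols(X, y)  # beta[1]: IU-final effect beyond acoustic boundary strength
```

If `beta[1]` is reliably nonzero across participants, the IU boundary contributes beyond mere acoustics, which is the inference made in the abstract.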


Subjects
Speech Perception, Speech, Humans, Speech/physiology, Electroencephalography, Acoustic Stimulation, Speech Perception/physiology, Language
3.
BMC Med ; 22(1): 121, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486293

ABSTRACT

BACKGROUND: Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but existing evidence has substantiated both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, suggesting that the oversampling of sensory evidence would impair perception within highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistics. METHODS: Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while participants listened to speech uttered by (a) human or (b) synthesized voices characterized by reduced volatility and variability of acoustic environments. The assessment of mechanisms for perception was extended to the visual domain by analyzing behavioral accuracy within a non-social task in which the dynamics of precision weighting between bottom-up evidence and top-down inferences were emphasized. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistic children. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and interactions between factors. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in case of non-significance. Post hoc comparisons were corrected for multiple testing. RESULTS: Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypical and autistic children, emphasizing different mechanisms for perception. Accordingly, behavioral measurements on the visual task were consistent with the over-precision ascribed to environmental variability (sensory processing) that weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices. CONCLUSIONS: This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosodies emphasize the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistics. TRIAL REGISTRATION: BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Child, Humans, Autistic Disorder/diagnosis, Speech, Autism Spectrum Disorder/diagnosis, Bayes Theorem, Emotions/physiology, Acoustics
4.
J Exp Child Psychol ; 241: 105859, 2024 May.
Article in English | MEDLINE | ID: mdl-38325061

ABSTRACT

Infants as young as 14 months can track cross-situational statistics between sets of words and objects to acquire word-referent mappings. However, in naturalistic word learning situations, words and objects occur alongside a host of additional, sometimes noisy, information in the environment. In this study, we tested the effect of this environmental variability on infants' word learning. Fourteen-month-old infants (N = 32) were given a cross-situational word learning task with additional gestural, prosodic, and distributional cues that occurred reliably or variably. In the reliable cue condition, infants were able to process this additional environmental information to learn the words, attending to the target object during test trials. But when the presence of these cues was variable, infants paid greater attention to the gestural cue during training and subsequently switched preference to attend more to novel word-object mappings rather than familiar ones at test. Environmental variation may be key to enhancing infants' exploration of new information.
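The cross-situational statistics mentioned above can be illustrated with a simple co-occurrence count: no single ambiguous trial identifies a word's referent, but aggregating across trials does. The pseudo-words and objects below are invented for illustration; this is a sketch of the statistical principle, not a model of the study's task.

```python
from collections import defaultdict

def cross_situational_learn(trials):
    """Count word-object co-occurrences across ambiguous trials and map
    each word to its most frequently co-occurring object."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return {w: max(obs, key=obs.get) for w, obs in counts.items()}

# Each trial pairs two spoken pseudo-words with two visible objects; no
# single trial disambiguates, but the statistics across trials do.
trials = [
    (["bosa", "gasser"], ["ball", "duck"]),
    (["bosa", "manu"],   ["ball", "cup"]),
    (["gasser", "manu"], ["duck", "cup"]),
]
lexicon = cross_situational_learn(trials)
```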


Subjects
Learning, Verbal Learning, Infant, Humans, Cues
5.
BMC Pediatr ; 24(1): 449, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997661

ABSTRACT

BACKGROUND: Language delay affects near- and long-term social communication and learning in toddlers, and an increasing number of experts are paying attention to it. The development of prosody discrimination is one of the earliest stages of language development, in which key skills for later stages are mastered. Therefore, analyzing the relationship between brain discrimination of speech prosody and language abilities may provide an objective basis for the diagnosis of and intervention in language delay. METHODS: In this study, all cases (n = 241) were enrolled from a tertiary women's hospital between 2021 and 2022. We used functional near-infrared spectroscopy (fNIRS) to assess children's neural prosody discrimination abilities, and a Chinese Communicative Development Inventory (CCDI) was used to evaluate their language abilities. RESULTS: Ninety-eight full-term and 108 preterm toddlers were included in the final analysis in the phase I and II studies, respectively. The total CCDI screening abnormality rate was 9.2% for full-term and 34.3% for preterm toddlers. Full-term toddlers showed prosody discrimination ability in all channels except channel 5, while preterm toddlers showed prosody discrimination ability in channel 6 only. Multifactorial logistic regression analyses showed that prosody discrimination of the right angular gyrus (channel 3) had a statistically significant effect on language delay (odds ratio = 0.301, P < 0.05) in full-term toddlers. A random forest (RF) regression model showed that prosody discrimination reflected by channels and brain regions based on fNIRS data was an important parameter for predicting language delay in preterm toddlers, among which prosody discrimination reflected by the right angular gyrus (channel 4) was the most important parameter. The area under the model's receiver operating characteristic (ROC) curve was 0.687. CONCLUSIONS: Neural prosody discrimination ability is positively associated with language development; assessment of brain prosody discrimination abilities through fNIRS could serve as an objective indicator for early identification of children with language delay in future clinical applications.
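The reported area under the ROC curve (0.687) summarises how well the model's predicted risk separates toddlers with and without language delay. The sketch below shows how an AUC is computed from scores and labels using the rank-based (Mann-Whitney) formulation, on made-up data rather than the study's.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case,
    with ties counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented data: label 1 = language delay, scores = model-predicted risk.
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.7, 0.4, 0.35, 0.3, 0.6, 0.2, 0.55]
auc = roc_auc(labels, scores)  # 0.875 for this toy data
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which puts the study's 0.687 in the modest-but-above-chance range.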


Subjects
Language Development Disorders, Language Development, Spectroscopy, Near-Infrared, Humans, Female, Male, Child, Preschool, Language Development Disorders/diagnosis, Infant, Speech Perception/physiology, Brain/physiology, Brain/diagnostic imaging
6.
Risk Anal ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742599

ABSTRACT

People typically use verbal probability phrases when discussing risks ("It is likely that this treatment will work"), both in written and spoken communication. When speakers are uncertain about risks, they can nonverbally signal this uncertainty by using prosodic cues, such as a rising, question-like intonation or a filled pause ("uh"). We experimentally studied the effects of these two prosodic cues on the listener's perceived speaker certainty and numerical interpretation of spoken verbal probability phrases. Participants (N = 115) listened to various verbal probability phrases that were uttered with a rising or falling global intonation and with or without a filled pause before the probability phrase. For each phrase, they gave a point estimate of their numerical interpretation in percentages and indicated how certain they thought the speaker was about the correctness of the probability phrase. Speakers were perceived as least certain when the verbal probability phrases were spoken with both prosodic uncertainty cues. Interpretation of verbal probability phrases varied widely across participants, especially when rising intonation was produced by the speaker. Overall, high probability phrases (e.g., "very likely") were estimated as lower (and low probability phrases, such as "unlikely," as higher) when they were uttered with a rising intonation. The effects of filled pauses were less pronounced, as were the uncertainty effects for medium probability phrases (e.g., "probable"). These results stress the importance of nonverbal communication when verbally communicating risks and probabilities to people, for example, in the context of doctor-patient communication.

7.
Cogn Emot ; : 1-15, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039748

ABSTRACT

Alexithymia is characterised by difficulties in identifying, recognising, and describing emotions. We studied alexithymia in the context of speech comprehension, specifically investigating the incongruent condition between prosody and the literal meaning of words in emotion-based discourse. In two experiments, participants were categorised as having high or low alexithymia scores based on the TAS-20 scale and listened to three-sentence narratives where the emotional prosody of a key phrase or a keyword was congruent or incongruent with its literal meaning. The incongruent condition resulted in slower reaction times and lower accuracy in recognition of emotions. This incongruence effect was also evident for individuals with high alexithymia, except for anger. They recognised anger as accurately in both congruent and incongruent conditions. Contrary to our hypothesis, however, individuals with high alexithymia did not show an overall difference in emotion recognition compared to the low alexithymia group. These findings highlight the nuanced relationship between emotional prosody and literal meaning, offering insights into how individuals with varying levels of alexithymia process emotional discourse. Understanding these dynamics has implications for both cognitive research and clinical practice, providing valuable perspectives on speech comprehension, especially in situations involving incongruence between prosody and word meaning.

8.
Cogn Emot ; : 1-11, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973172

ABSTRACT

While previous research has found an in-group advantage (IGA) favouring native speakers in emotional prosody perception over non-native speakers, the effects of semantics on emotional prosody perception remain unclear. This study investigated the effects of semantics on emotional prosody perception in Chinese words and sentences for native and non-native Chinese speakers. The critical manipulation was the congruence of prosodic (positive, negative) and semantic (positive, negative, and neutral) valence. Participants listened to a series of audio clips and judged whether the emotional prosody was positive or negative for each utterance. The results revealed an IGA effect: native speakers perceived emotional prosody more accurately and quickly than non-native speakers in Chinese words and sentences. Furthermore, a semantic congruence effect was observed in Chinese words, where both native and non-native speakers recognised emotional prosody more accurately in the semantic-prosody congruent condition than in the incongruent condition. However, in Chinese sentences, this congruence effect was only present for non-native speakers. Additionally, the IGA effect and semantic congruence effect on emotional prosody perception were influenced by prosody valence. These findings illuminate the role of semantics in emotional prosody perception, highlighting perceptual differences between native and non-native Chinese speakers.

9.
Int J Audiol ; : 1-8, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39126382

ABSTRACT

OBJECTIVE: The emotional prosodic expression potential of children with cochlear implants is poorer than that of normal-hearing peers, but little is known about children with hearing aids. DESIGN: This study was set up to generate a better understanding of hearing aid users' prosodic identifiability compared to cochlear implant users and peers without hearing loss. STUDY SAMPLE: Emotional utterances of 75 Dutch-speaking children (7-12 years; 26 with hearing aids (CHA), 23 with cochlear implants (CCI), 26 with normal hearing (CNH)) were gathered. Utterances were evaluated blindly for resemblance to three emotions (happiness, sadness, anger) by normal-hearing Dutch listeners: 22 children and 9 adults (17-24 years). RESULTS: Emotions were more accurately recognised by adults than by children. Both children and adults correctly judged happiness significantly less often in CCI than in CNH. Also, adult listeners confused happiness with sadness more often in both CHA and CCI than in CNH. CONCLUSIONS: Children and adults are able to evaluate the emotions expressed through speech by children with hearing loss ranging from mild to profound nearly as well as those expressed by typically hearing children. These favourable outcomes emphasise the resilience of children with hearing loss in developing effective emotional communication skills.

10.
Int J Audiol ; : 1-10, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38420783

ABSTRACT

OBJECTIVE: To evaluate whether a 500 pulses per second per channel (pps/ch) rate would provide non-inferior hearing performance compared to the 900 pps/ch rate in the Advanced Combination Encoder (ACE™) sound coding strategy. DESIGN: A repeated measures single-subject design was employed, wherein each subject served as their own control. All except one subject used 900 pps/ch at enrolment. After three weeks of using the alternative rate program, both programs were loaded into the sound processor for two more weeks of take-home use. Subjective performance, preference, words in quiet, sentences in babble, music quality, and fundamental frequency (F0) discrimination were assessed using a balanced design. STUDY SAMPLE: Data from 18 subjects were analysed, with complete datasets available for 17 subjects. RESULTS: Non-inferior performance on all clinical measures was shown for the lower rate program. Subjects' preference ratings were comparable for the programs, with 53% reporting no difference overall. When a preference was expressed, the 900 pps/ch condition was preferred more often. CONCLUSION: Reducing the stimulation rate from 900 pps/ch to 500 pps/ch did not compromise the hearing outcomes evaluated in this study. A lower pulse rate in future cochlear implants could reduce power consumption, allowing for smaller batteries and processors.
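The power argument in the conclusion follows from simple arithmetic: with charge per pulse and channel count held fixed, the total pulse count, and hence roughly the stimulation energy, scales with the per-channel rate. The figures below are illustrative only; the channel count is an assumption, and actual processor power depends on many other factors.

```python
# Back-of-the-envelope sketch, not manufacturer data: if charge per pulse
# and channel count are held constant, stimulation energy scales roughly
# with the per-channel pulse rate.
channels = 22           # assumed electrode count, for illustration only
rate_high = 900         # pulses per second per channel (pps/ch)
rate_low = 500

pulses_high = channels * rate_high  # total pulses per second at 900 pps/ch
pulses_low = channels * rate_low    # total pulses per second at 500 pps/ch
saving = 1 - rate_low / rate_high   # fractional reduction in pulse count
```

Under these assumptions the pulse count drops by about 44%, which is the headroom the authors suggest could translate into smaller batteries and processors.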

11.
Article in English | MEDLINE | ID: mdl-39137279

ABSTRACT

BACKGROUND: Emotional prosody is the reflection of emotion types such as happiness, sadness, fear and anger in the speaker's tone of voice. Accurately perceiving, interpreting and expressing emotional prosody is an inseparable part of successful communication and social interaction. Few studies have examined emotional prosody, despite its importance for communication, and their results are inconsistent regarding age and gender. AIMS: The primary aim of this study is to assess the perception of emotional prosody in healthy ageing. The secondary aim is to examine the effects of variables such as age, gender, language and neurocognitive capacity on the prediction of emotional prosody recognition skills. METHODS AND PROCEDURES: Sixty-nine participants between the ages of 18 and 75 were included in the study. Participants were grouped as young (18-35 years, n = 26), middle-aged (36-55 years, n = 24) and elderly (56-75 years, n = 19). A perceptual emotional prosody test, a motor response time test, and neuropsychological test batteries were administered, and participants were asked to recognise the emotion in sentences played on a computer. Natural (neutral, containing neither positive nor negative emotion), happy, angry, surprised and panic emotions were evaluated with sentences composed of pseudoword stimuli. RESULTS AND OUTCOMES: The elderly group performed worse in recognising angry, panic, natural and happy emotions and in total recognition (overall accuracy across all emotions). There was no age-related difference in recognition of surprise. Women were more successful than men in recognising angry, panic and happy emotions and in total recognition. Age and motor reaction time test scores were significant predictors in the emotional response time regression model. Age, language, attention and gender had significant effects in the regression model for total recognition of emotions (p < 0.05). CONCLUSIONS AND IMPLICATIONS: This was a novel study in which emotional prosody was assessed in the elderly by eliminating lexical-semantic cues and relating emotional prosody results to neuropsychiatric tests. Our findings reveal the importance of age for the perception of emotional prosody, along with the effects of cognitive functions such as attention, which decline with age. Many factors therefore contribute to the success of recognising emotional prosody correctly, and clinicians should consider variables such as cognitive health and education when assessing the perception of emotional prosody in elderly individuals. WHAT THIS PAPER ADDS: What is already known on the subject Most studies compare young and old groups and evaluate the perception of emotional prosody using sentences built from the speech sounds, syllables, words and grammar rules of the language's vocabulary. Perception of emotional prosody has been reported to be lower mostly in the elderly group, but findings are inconsistent in terms of age and gender. What this paper adds to existing knowledge Perceptual prosody recognition was evaluated with an experimental design in which sentence structures consisting of lexemes, constructed according to the phonological and syntactic rules of the language, were used as stimuli, and neurocognitive tests were included. This is a novel study in assessing emotional prosody across different age groups and in determining the factors, including neuropsychiatric features, that affect it. What are the clinical implications of this work? Our findings reveal the importance of age for the perception of emotional prosody and show that the effects of cognitive functions such as attention become increasingly important with age.

12.
Int J Lang Commun Disord ; 59(4): 1284-1295, 2024.
Article in English | MEDLINE | ID: mdl-38152925

ABSTRACT

BACKGROUND: Down syndrome (DS) is a neurodevelopmental disorder of genetic origin with a cognitive-behavioural profile that distinguishes it from other syndromes. Within this profile, language difficulties are particularly marked, having been more studied in childhood than in adulthood. More generally, there is a paucity of research on the prosodic skills of individuals with DS, despite the relevance of this linguistic component for effective communication. AIMS: This study aimed to analyse, for the first time, the prosodic profile of Spanish-speaking teenagers and young adults with DS. We hypothesized that participants with DS would show significantly lower skills for the perception and production of prosodic functions and forms when compared to peers with intellectual disability (ID) of unknown origin. We also hypothesized that teenagers and young adults with DS would have better prosody perception than prosody production. METHODS & PROCEDURES: The final sample included in the study comprised 28 Spanish-speaking teenagers and young adults with DS and 29 teenagers and young adults with other ID matched on chronological age and nonverbal cognition. Their prosodic skills were tested by means of the Profiling Elements of Prosody for Speech and Communication battery. This battery allows for the separate evaluation of the comprehension and expression of the communicative functions of prosody and the discrimination and production of the forms that carry out such functions. OUTCOMES & RESULTS: In the prosody function tasks, which are the most adaptive tasks for the communicative process, we found, as expected, significantly lower scores on the turn-end, chunking, and focus tasks in the group with DS. However, no significant between-group differences were found for the affect tasks. Participants with DS also had significantly lower scores on the prosodic form tasks than participants with other ID. 
The results of the comparison between prosodic perception and production skills showed that a generalization about a better profile in comprehension versus production is not possible and that there is a dependence on the demands of the prosodic task undertaken. CONCLUSIONS & IMPLICATIONS: The findings contribute to the ongoing development of the language profile of teenagers and young adults with DS and imply the need to design prosodic intervention programs based on their specific profile. WHAT THIS PAPER ADDS: What is already known on the subject Prosody is a fundamental element of language, and its mastery affects the effectiveness of communication. However, research on prosody in Down syndrome (DS) that offers a holistic view from a psycholinguistic approach is still scarce. To date, studies focused on providing a detailed profile of prosodic skills in individuals with DS have been mainly conducted with a few English-speaking children. These studies have shown that the comprehension and production of prosody is severely impaired, especially when considering affect and focus production, as well as the perception and production of prosodic forms. During childhood, greater efficacy is found in prosody comprehension than in prosody expression. What this study adds This is the first study analysing the prosodic profile of a large group of Spanish-speaking teenagers and young adults with DS. Poorer performance in the perception and production of both prosodic functions and forms was observed in participants with DS compared to participants with intellectual disability of unknown origin matched on chronological age and nonverbal cognition. Unlike what has been previously found in children, teenagers and young adults with DS performed at the same level as the control group on the understanding and expression of affect through prosodic cues. Results also showed that a generalization about a better prosody profile in comprehension versus production is not possible. 
What are the clinical implications of this work? This study provides new data on the prosodic skills of Spanish-speaking teenagers and young adults with DS. Given the impact of prosody on effective communication and the pattern of difficulties found in this study, speech and language therapists working with individuals with DS should consider including prosodic skills in interventions not only in childhood but also in adolescence and adulthood. Therefore, the prosodic profile of strengths and weaknesses in individuals with DS found in this research has direct implications for clinical practice.


Subjects
Down Syndrome, Humans, Down Syndrome/psychology, Down Syndrome/complications, Adolescent, Male, Young Adult, Female, Speech Perception, Adult, Language Tests, Language, Spain
13.
Article in English | MEDLINE | ID: mdl-38978277

ABSTRACT

BACKGROUND: Variability in the vocabulary outcomes of children with cochlear implants (CIs) is partially explained by child-directed speech (CDS) characteristics. Yet, relatively little is known about whether and how mothers adapt their lexical and prosodic characteristics to the child's hearing status (before and after implantation, and compared with groups with normal hearing (NH)) and how important they are in affecting vocabulary development in the first 12 months of hearing experience. AIMS: To investigate whether mothers of children with CIs produce CDS with similar lexical and prosodic characteristics compared with mothers of age-matched children with NH, and whether they modify these characteristics after implantation. In addition, to investigate whether mothers' CDS characteristics predict children's early vocabulary skills before and after implantation. METHODS & PROCEDURES: A total of 34 dyads (17 with NH, 17 with children with CIs; ages = 9-32 months), all acquiring Italian, were involved in the study. Mothers' and children's lexical quantity (tokens) and variety (types), mothers' prosodic characteristics (pitch range and variability), and children's vocabulary skills were assessed at two time points, corresponding to before and 1 year post-CI activation for children with CIs. Children's vocabulary skills were assessed using parent reports; lexical and prosodic characteristics were observed in semi-structured mother-child interactions. OUTCOMES & RESULTS: Results showed that mothers of children with CIs produced speech with similar lexical quantity but lower lexical variety, and with increased pitch range and variability, than mothers of children with NH. Mothers generally increased their lexical quantity and variety and their pitch range between sessions. Children with CIs showed reduced expressive vocabulary and lower lexical quantity and variety than their peers 12 months post-CI activation. 
Mothers' prosodic characteristics did not explain variance in children's vocabulary skills; their lexical characteristics predicted children's early vocabulary and lexical outcomes, especially in the NH group, but were not related to later language development. CONCLUSIONS & IMPLICATIONS: Our findings confirm previous studies on other languages and support the idea that the lexical characteristics of mothers' CDS have a positive effect on children's early measures of vocabulary development across hearing groups, whereas prosodic cues play a minor role. Greater input quantity and quality may assist children in the building of basic language model representations, whereas pitch cues may mainly serve attentional and emotional processes. Results emphasize the need for additional longitudinal studies investigating the input received from other figures surrounding the child and its role for children's language development. WHAT THIS PAPER ADDS: What is already known on the subject Mothers' CDS is thought to facilitate and support language acquisition in children with various language developmental trajectories, including children with CIs. Because children with CIs are at risk for language delays and have acoustic processing limitations, their mothers may have to produce a lexically simpler but prosodically richer input, compared to mothers of children with NH. Yet, the literature reports mixed findings and no study to our knowledge has concurrently addressed the role of mothers' lexical and prosodic characteristics for children's vocabulary development before implantation and in the first 12 months of hearing experience. What this study adds to the existing knowledge The study shows that mothers of children with CIs produce input of similar quantity but reduced variety, and with heightened pitch characteristics, compared to mothers of children with NH. There was also a general increase in mothers' lexical quantity and variety, and in their pitch range, between sessions. 
Only their lexical characteristics predicted children's early vocabulary skills. Their lexical variety predicted children's expressive vocabulary and lexical variety only in the NH group. What are the practical and clinical implications of this work? These findings expand our knowledge about the effects of maternal input and may contribute to the improvement of early family-centred intervention programmes for supporting language development in children with CIs.

14.
Neuropsychol Rehabil ; : 1-41, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38848458

ABSTRACT

It is unclear whether individuals with agrammatic aphasia have particularly disrupted prosody, or in fact have relatively preserved prosody that they can use in a compensatory way. A targeted literature review was undertaken to examine the evidence regarding the capacity of speakers with agrammatic aphasia to produce prosody. The aim was to answer the question: how much prosody can a speaker "do" with limited syntax? A systematic search for articles examining the production of grammatical prosody in people with agrammatism yielded 16 studies for inclusion in this review. Participant inclusion criteria, spoken language tasks, and analysis procedures varied widely across studies. The evidence indicates that timing aspects of prosody are disrupted in people with agrammatic aphasia, while the use of pitch and amplitude cues is more likely to be preserved in this population. Some, but not all, of these timing differences may be attributable to motor speech programming deficits (apraxia of speech, AOS) rather than aphasia, as the two conditions frequently co-occur. Many of the included studies do not address AOS and its possible role in any observed effects. Finally, the available evidence indicates that even speakers with severe aphasia show a degree of preserved prosody in functional communication.

15.
Sensors (Basel) ; 24(5)2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38475158

ABSTRACT

Since the advent of modern computing, researchers have striven to make the human-computer interface (HCI) as seamless as possible. Progress has been made on various fronts, e.g., the desktop metaphor (interface design) and natural language processing (input). One area receiving attention recently is voice activation and its corollary, computer-generated speech. Despite decades of research and development, most computer-generated voices remain easily identifiable as non-human. Prosody in speech has two primary components, intonation and rhythm, both often lacking in computer-generated voices. This research aims to enhance computer-generated text-to-speech algorithms by incorporating the melodic and prosodic elements of human speech. The study explores a novel approach to adding prosody using machine learning, specifically an LSTM neural network, to add paralinguistic elements to a recorded or generated voice. The goals are to increase the realism of computer-generated text-to-speech, to enhance electronic reading applications, and to improve artificial voices for those who need artificial assistance to speak. A computer that can also convey meaning in a spoken announcement will improve human-computer interaction. Applications of such an algorithm may include improving high-definition audio codecs for telephony, restoring old recordings, and lowering barriers to the use of computing. This research deployed a prototype modular platform for digital speech improvement, analyzing and generalizing algorithms into a modular system through laboratory experiments to optimize combinations and performance in edge cases. The results were encouraging, with the LSTM-based encoder able to produce realistic speech. Further work will involve optimizing the algorithm and comparing its performance against other approaches.
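The abstract does not detail the network architecture, so the following is only a hypothetical sketch of the general idea: a recurrent (LSTM) model run over per-frame linguistic features to emit a frame-level f0 (pitch) contour. All sizes, weights, and the readout are illustrative assumptions, and the network is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates are computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b               # stacked pre-activations for i, f, o, g gates
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                # candidate cell state
    c = f * c + i * g                   # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

D, H = 8, 16                            # feature and hidden sizes (arbitrary choices)
W = rng.normal(0, 0.1, (4 * H, D))      # random, untrained weights
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(0, 0.1, H)           # linear readout to an f0 value

frames = rng.normal(size=(50, D))       # 50 frames of hypothetical text/phone features
h, c = np.zeros(H), np.zeros(H)
f0 = []
for x in frames:
    h, c = lstm_step(x, h, c, W, U, b)
    # Squash the readout into a plausible speech f0 range around 120 Hz:
    f0.append(120.0 + 40.0 * np.tanh(w_out @ h))
f0 = np.array(f0)                       # one pitch value per frame
```

In a real system the weights would be trained on recorded speech, and the predicted contour would drive a vocoder or modify an existing recording; this fragment only shows the recurrent contour-generation step.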


Subjects
Speech Perception , Speech , Speech/physiology , Speech Perception/physiology , Computers , Machine Learning
16.
Nord J Psychiatry ; 78(1): 30-36, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37812153

ABSTRACT

PURPOSE: Patients with schizophrenia have a flat and monotonous intonation. The purpose of the study was to identify the variables of flat speech that differed between patients and healthy controls in Danish. MATERIALS AND METHODS: We compared drug-naïve patients with schizophrenia (5 men, 13 women) and 18 controls, aged 18-35 years, who had all grown up in Copenhagen speaking modern standard Danish (rigsdansk). We used two tasks that place different demands on the speaker to elicit spontaneous speech: retelling a film clip and telling a story from pictures in a book. A linguist used the computer program Praat to extract the phonetic parameters. RESULTS: The two elicitation tasks (task 1: retelling a film clip; task 2: telling a story from pictures in a book) gave different results. Controls showed higher intensity variation in task 1 and higher pitch variation in task 2. Controls also showed higher intensity variation in the stresses in task 1 and fewer syllables between each stress. In addition, F1 variation was higher in the patient group in both tasks, whereas F2 variation was higher in the control group in both tasks. CONCLUSIONS: The results differed between patients and controls, but the task demands also made a difference. Further research is needed to elucidate the possibilities of acoustic measures in diagnostics or linguistic treatment related to schizophrenia.
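The study used Praat for the acoustic analysis; the abstract does not state the exact formulas, but "pitch variation" and "intensity variation" measures of the kind compared here are commonly computed from the extracted f0 and intensity tracks. The sketch below shows one standard way to do so (standard deviation of f0 on a semitone scale, standard deviation of intensity in dB); the specific scales and thresholds are assumptions, not the paper's parameters.

```python
import numpy as np

def pitch_variation_semitones(f0_hz):
    """SD of f0 on a semitone scale, relative to the speaker's median f0."""
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                          # drop unvoiced frames (coded as 0 Hz)
    semitones = 12 * np.log2(f0 / np.median(f0))
    return semitones.std()

def intensity_variation_db(intensity_db):
    """SD of the frame-level intensity track, in dB."""
    return np.asarray(intensity_db, dtype=float).std()

# Synthetic illustration: a monotonous vs. a lively f0 track (Hz)
t = np.linspace(0, 2, 200)
flat = 200 + 2 * np.sin(2 * np.pi * 1.5 * t)     # narrow pitch excursions
lively = 200 + 40 * np.sin(2 * np.pi * 1.5 * t)  # wide pitch excursions
```

On these synthetic tracks the "lively" contour yields a much larger semitone SD than the "flat" one, mirroring the flat-intonation contrast the study quantifies.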


Subjects
Schizophrenia , Female , Humans , Male , Acoustics , Pilot Projects , Schizophrenia/diagnosis , Speech Acoustics , Adolescent , Young Adult , Adult
17.
Phonetica ; 81(3): 321-349, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38522003

ABSTRACT

This study investigates the variation in phrase-final f0 movements found in dyadic unscripted conversations in Papuan Malay, an Eastern Indonesian language, using a novel combination of exploratory and confirmatory classification techniques. In particular, it examines the linguistic factors that potentially drive f0 contour variation in phrase-final words produced in a naturalistic interactive dialogue task. To this end, a cluster analysis, manual labelling, and a random forest analysis were carried out to reveal the main sources of contour variation. These comprise factors of conversational interaction (turn transition, topic continuation, and information structure, i.e., givenness and contrast) as well as context-independent properties of words (word class, syllable structure, voicing, and intrinsic f0). Results indicate that contour variation in Papuan Malay, in particular f0 direction and target level, is best explained by turn transitions between speakers, corroborating similar findings for related languages. The applied methods provide opportunities to further lower the threshold for incorporating intonation and prosody in the early stages of language documentation.
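The clustering step of such a pipeline can be illustrated in miniature. The sketch below, a rough assumption rather than the paper's actual procedure, applies plain k-means (Lloyd's algorithm) to synthetic time-normalized phrase-final f0 contours and recovers the rise/fall distinction; the contour shapes, sample counts, and initialization are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; rows of X are time-normalized f0 contours."""
    # Deterministic init: k rows spread evenly across the data matrix
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Squared Euclidean distance of every contour to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Synthetic phrase-final contours: 20 time-normalized f0 samples each
t = np.linspace(0, 1, 20)
rises = np.stack([180 + 40 * t + rng.normal(0, 2, 20) for _ in range(15)])
falls = np.stack([220 - 40 * t + rng.normal(0, 2, 20) for _ in range(15)])
X = np.vstack([rises, falls])

labels, centers = kmeans(X, k=2)   # rises and falls end up in separate clusters
```

In the study itself, clusters like these would then be labelled manually and fed, together with the interactional and lexical factors, into a random forest to rank the predictors of contour shape.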


Subjects
Language , Phonetics , Humans , Female , Male , Indonesia , Speech Acoustics , Adult , Linguistics , Speech Production Measurement
18.
J Child Lang ; 51(1): 217-233, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36756779

ABSTRACT

This study examines correlations between the prosody of infant-directed speech (IDS) and children's vocabulary size. We collected longitudinal speech data and vocabulary information from Dutch mother-child dyads with children aged 18 (N = 49) and 24 (N = 27) months old. We took speech context into consideration and distinguished between prosody when mothers introduce familiar vs. unfamiliar words to their children. The results show that IDS mean pitch predicts children's vocabulary growth between 18 and 24 months. In addition, the degree of prosodic modification when mothers introduce unfamiliar words to their children correlates with children's vocabulary growth during this period. These findings suggest that the prosody of IDS, especially in word-learning contexts, may serve linguistic purposes.


Subjects
Speech , Vocabulary , Infant , Female , Humans , Child, Preschool , Language Development , Mothers , Verbal Learning
19.
J Psycholinguist Res ; 53(4): 56, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926243

ABSTRACT

The present paper examines how English native speakers produce scopally ambiguous sentences and how they use gestures and prosody for disambiguation. As a case in point, the participants in the present study produced English negative quantifiers, which appear in two different positions, as in (1) The election of no candidate was a surprise (a: 'for those elected, none of them was a surprise'; b: 'no candidate was elected, and that was a surprise') and (2) No candidate's election was a surprise (a: 'for those elected, none of them was a surprise'; b: # 'no candidate was elected, and that was a surprise'). This allowed us to investigate the gesture production and prosodic patterns of positional effects (the a-interpretation is available in two different positions, in 1 and 2) and of interpretation effects (two different interpretations are available in the same position, in 1). We discovered that the participants tended to produce more head shakes in the (a) interpretation regardless of position, but more head nods/beats in the (b) interpretation. While there was no difference in the prosody of no between the (a) and (b) interpretations in (1), there were pitch and durational differences between the (a) interpretations in (1) and (2). This study points out abstract similarities in gestural movements across languages such as Catalan and Spanish (Prieto et al. in Lingua 131:136-150, 2013. 10.1016/j.lingua.2013.02.008; Tubau et al. in Linguist Rev 32(1):115-142, 2015. 10.1515/tlr-2014-0016), and shows that meaning is crucial for gesture patterns. We emphasize that gesture patterns disambiguate ambiguous interpretations when prosody cannot do so.


Subjects
Gestures , Psycholinguistics , Humans , Adult , Male , Female , Speech/physiology , Language , Young Adult
20.
Clin Linguist Phon ; 38(1): 64-81, 2024 03.
Article in English | MEDLINE | ID: mdl-36636014

ABSTRACT

This study aims to reveal dynamic changes in prosodic prominence patterns associated with Parkinson's disease (PD). To this end, it proposes an exploratory methodology involving the measurement of a novel syllable-based prosody index (SPI) and functional principal component analyses (fPCAs), performed in a semi-automatic manner. First, SPI trajectories were collected from 31 speakers with PD before and after speech therapy and from 36 healthy controls. Then, the SPI trajectories were converted to continuous functions using B-splines. Finally, the functional SPIs were examined using fPCAs. The results showed that PD was associated with an increase in overall prominence for male speakers. The findings regarding higher prominence patterns in PD were supported by traditional phonetic measurements. For female speakers, however, there were no significant differences in prosodic prominence between speakers with PD and healthy controls. The results encourage exploring the proposed methodology in analyses of other forms of atypical speech as well.
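The core of a functional PCA on discretized trajectories can be sketched briefly. This is only an illustration under assumed data: the SPI itself and the B-spline smoothing step are not reproduced; synthetic prominence-like curves with two built-in modes of variation are decomposed by centering and SVD, which is the discrete analogue of extracting principal component functions and per-curve scores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-utterance prominence trajectories (an SPI-like index sampled
# at 30 time-normalized points), built from two latent modes of variation
t = np.linspace(0, 1, 30)
n = 40
scores1 = rng.normal(0, 3.0, n)            # dominant mode of variation
scores2 = rng.normal(0, 1.0, n)            # weaker mode of variation
base = np.sin(np.pi * t)                   # shared mean shape
X = base + np.outer(scores1, np.sin(2 * np.pi * t)) \
         + np.outer(scores2, np.cos(2 * np.pi * t))

# Functional PCA on the discretized curves: center across curves, then SVD
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()            # variance explained per component
pc_funcs = Vt                              # rows: principal component functions
pc_scores = Xc @ Vt.T                      # per-curve component scores
```

Group comparisons of the kind reported in the study (e.g., PD vs. control, before vs. after therapy) would then be run on the per-curve `pc_scores` rather than on the raw trajectories.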


Subjects
Parkinson Disease , Humans , Male , Female , Pilot Projects , Parkinson Disease/complications , Speech Production Measurement , Speech , Speech Disorders