Results 1 - 20 of 682
1.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subject(s)
Auditory Perception/physiology , Cochlear Implants , Critical Period, Psychological , Language Development , Animals , Auditory Perceptual Disorders/etiology , Brain/growth & development , Cochlear Implantation , Comprehension , Cues , Deafness/congenital , Deafness/physiopathology , Deafness/psychology , Deafness/surgery , Equipment Design , Humans , Language Development Disorders/etiology , Language Development Disorders/prevention & control , Learning/physiology , Neuronal Plasticity , Photic Stimulation
2.
Proc Natl Acad Sci U S A ; 120(42): e2300255120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37819985

ABSTRACT

Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.


Subject(s)
Speech , Temporal Lobe , Humans , Feedback , Acoustic Stimulation
3.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38741267

ABSTRACT

The role of the left temporoparietal cortex in speech production has been extensively studied during native language processing, proving crucial in controlled lexico-semantic retrieval under varying cognitive demands. Yet, its role in bilinguals, fluent in both native and second languages, remains poorly understood. Here, we employed continuous theta burst stimulation (cTBS) to disrupt neural activity in the left posterior middle temporal gyrus (pMTG) and angular gyrus (AG) while Italian-Friulian bilinguals performed a cued picture-naming task. The task involved between-language (naming objects in Italian or Friulian) and within-language blocks (naming objects ["knife"] or associated actions ["cut"] in a single language) in which participants could either maintain (non-switch) or change (switch) instructions based on cues. During within-language blocks, cTBS over the pMTG entailed faster naming for high-demanding switch trials, while cTBS to the AG elicited slower latencies in low-demanding non-switch trials. No cTBS effects were observed in the between-language block. Our findings suggest a causal involvement of the left pMTG and AG in lexico-semantic processing across languages, with distinct contributions to controlled vs. "automatic" retrieval, respectively. However, they do not support the existence of shared control mechanisms for within- and between-language production. Altogether, these results inform neurobiological models of semantic control in bilinguals.


Subject(s)
Multilingualism , Parietal Lobe , Speech , Temporal Lobe , Transcranial Magnetic Stimulation , Humans , Male , Temporal Lobe/physiology , Female , Young Adult , Adult , Parietal Lobe/physiology , Speech/physiology , Cues
4.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39329356

ABSTRACT

Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.


Subject(s)
Magnetoencephalography , Motor Cortex , Phonetics , Speech Perception , Humans , Male , Female , Motor Cortex/physiology , Young Adult , Speech Perception/physiology , Adult , Functional Laterality/physiology , Discrimination, Psychological/physiology , Acoustic Stimulation , Brain Mapping , Noise
5.
J Neurophysiol ; 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39356074

ABSTRACT

When speakers learn to change the way they produce a speech sound, how much does that learning generalize to other speech sounds? Past studies of speech sensorimotor learning have typically tested the generalization of a single transformation learned in a single context. Here, we investigate the ability of the speech motor system to generalize learning when multiple opposing sensorimotor transformations are learned in separate regions of the vowel space. We find that speakers adapt to a non-uniform "centralization" perturbation, learning to produce vowels with greater acoustic contrast, and that this adaptation generalizes to untrained vowels, which pattern like neighboring trained vowels and show increased contrast of a similar magnitude.
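
Editor's note: one common way to make the reported contrast increase concrete is to quantify vowel contrast as the mean distance of each vowel's (F1, F2) point from the centroid of the vowel space. The sketch below illustrates that idea; the function name and formant values are invented for illustration and are not taken from the study.

```python
import numpy as np

def vowel_contrast(formants):
    """Mean Euclidean distance of each vowel's (F1, F2) point from the
    centroid of the vowel space, a common proxy for acoustic contrast
    (larger value = more dispersed, more contrastive vowels)."""
    pts = np.asarray(formants, dtype=float)   # shape (n_vowels, 2)
    centroid = pts.mean(axis=0)
    return float(np.linalg.norm(pts - centroid, axis=1).mean())

# Hypothetical pre- vs. post-adaptation formant values (Hz) for /i a u/
pre  = [(300, 2300), (700, 1200), (320, 800)]
post = [(280, 2400), (750, 1150), (300, 750)]  # vowels pushed apart
print(vowel_contrast(pre), vowel_contrast(post))  # contrast increases
```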

6.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activations of the left PT and FOC were noted in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
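
Editor's note: for readers unfamiliar with activation likelihood estimation, the core computation can be sketched in a few lines: each experiment's reported foci are smoothed into a modeled-activation (MA) map, and the ALE value at each voxel is 1 minus the product of (1 - MA) across experiments. The toy implementation below illustrates that principle only; it is not the GingerALE or NiMARE pipeline, and the grid size and kernel width are arbitrary.

```python
import numpy as np

def ma_map(foci, shape, sigma):
    """Modeled-activation map: max over Gaussian kernels centred on
    each reported focus of one experiment (values in [0, 1])."""
    grid = np.indices(shape).reshape(len(shape), -1).T  # voxel coords
    ma = np.zeros(len(grid))
    for f in foci:
        d2 = ((grid - np.asarray(f)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma.reshape(shape)

def ale(experiments, shape, sigma=2.0):
    """ALE = 1 - prod(1 - MA_i) over the experiments' MA maps."""
    out = np.ones(shape)
    for foci in experiments:
        out *= 1.0 - ma_map(foci, shape, sigma)
    return 1.0 - out

# Two toy "experiments" reporting foci on a small 3-D grid
exps = [[(5, 5, 5), (2, 8, 4)], [(5, 6, 5)]]
print(ale(exps, (10, 10, 10)).max())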


Subject(s)
Speech Perception , Speech , Humans , Speech Perception/physiology , Speech/physiology , Brain Mapping , Likelihood Functions , Motor Cortex/physiology , Cerebral Cortex/physiology , Cerebral Cortex/diagnostic imaging
7.
Dev Sci ; 27(1): e13428, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37381667

ABSTRACT

The prevalent "core phonological deficit" model of dyslexia proposes that the reading and spelling difficulties characterizing affected children stem from prior developmental difficulties in processing speech sound structure, for example, perceiving and identifying syllable stress patterns, syllables, rhymes and phonemes. Yet spoken word production appears normal. This suggests an unexpected disconnect between speech input and speech output processes. Here we investigated the output side of this disconnect from a speech rhythm perspective by measuring the speech amplitude envelope (AE) of multisyllabic spoken phrases. The speech AE contains crucial information regarding stress patterns, speech rate, tonal contrasts and intonational information. We created a novel computerized speech copying task in which participants copied aloud familiar spoken targets like "Aladdin." Seventy-five children with and without dyslexia were tested, some of whom were also receiving an oral intervention designed to enhance multi-syllabic processing. Similarity of the child's productions to the target AE was computed using correlation and mutual information metrics. Similarity of pitch contour, another acoustic cue to speech rhythm, was used for control analyses. Children with dyslexia were significantly worse at producing the multi-syllabic targets as indexed by both similarity metrics for computing the AE. However, children with dyslexia were not different from control children in producing pitch contours. Accordingly, the spoken production of multisyllabic phrases by children with dyslexia is atypical regarding the AE. Children with dyslexia may not appear to listeners to exhibit speech production difficulties because their pitch contours are intact. RESEARCH HIGHLIGHTS: Speech production of syllable stress patterns is atypical in children with dyslexia. Children with dyslexia are significantly worse at producing the amplitude envelope of multi-syllabic targets compared to both age-matched and reading-level-matched control children. No group differences were found for pitch contour production between children with dyslexia and age-matched control children. It may be difficult to detect speech output problems in dyslexia as pitch contours are relatively accurate.


Subject(s)
Dyslexia , Speech Perception , Child , Humans , Speech , Reading , Phonetics
8.
Brain ; 146(5): 1775-1790, 2023 05 02.
Article in English | MEDLINE | ID: mdl-36746488

ABSTRACT

Classical neural architecture models of speech production propose a single system centred on Broca's area coordinating all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca's area is involved in motor speech coordination and that there is only one coordination network. Drawing on a wide range of evidence, here we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song are coordinated by a hierarchically organized dorsolateral system while supralaryngeal articulation at the phonetic/syllabic level is coordinated by a more ventral system posterior to Broca's area. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.


Subject(s)
Speech , Voice , Humans , Broca Area , Phonetics , Language
9.
Brain Topogr ; 37(5): 731-747, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38261272

ABSTRACT

Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. This would theoretically allow for brain-computer interfaces (BCIs) that are capable of decoding continuous speech by training classifiers based on the activity in the sensorimotor cortex related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of the same phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed in the ventral sensorimotor cortex. These findings provide insights about the neural representations of single and paired phonemes. Furthermore, they support the notion that speech BCIs may be feasible based on machine learning algorithms trained on individual phonemes using intracranial electrode grids.
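
Editor's note: the key methodological move (combining classifiers trained on single phonemes to label paired-phoneme trials) can be illustrated with a toy simulation: train a linear SVM on isolated-phoneme patterns, then score each half of a paired trial separately and combine the two predictions. Everything below (pattern dimensionality, noise level, the assumption that a paired trial yields one pattern per phoneme window) is invented for illustration; the study's 33% chance level reflects its own design.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_vox = 50
proto = rng.normal(size=(3, n_vox))          # one "true" pattern per phoneme

# Training data: noisy single-phoneme patterns with labels 0/1/2
X_single = np.vstack([proto[k] + 0.5 * rng.normal(size=(40, n_vox))
                      for k in range(3)])
y_single = np.repeat([0, 1, 2], 40)
clf = SVC(kernel="linear").fit(X_single, y_single)

# Paired trials: one pattern per phoneme window (an assumption of this toy)
pairs = [(0, 1), (1, 2), (2, 0)]
trials = [(a, b) for a, b in pairs for _ in range(20)]
X_pair = np.stack([np.stack([proto[a] + 0.5 * rng.normal(size=n_vox),
                             proto[b] + 0.5 * rng.normal(size=n_vox)])
                   for a, b in trials])

# Combine two single-phoneme predictions into one pair label
pred = [(clf.predict(t[:1])[0], clf.predict(t[1:])[0]) for t in X_pair]
print("pair accuracy:", np.mean([p == t for p, t in zip(pred, trials)]))
```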


Subject(s)
Magnetic Resonance Imaging , Speech , Support Vector Machine , Humans , Magnetic Resonance Imaging/methods , Male , Female , Adult , Young Adult , Speech/physiology , Brain Mapping/methods , Brain-Computer Interfaces , Sensorimotor Cortex/physiology , Sensorimotor Cortex/diagnostic imaging , Phonetics , Brain/physiology , Brain/diagnostic imaging
10.
Cereb Cortex ; 33(24): 11517-11525, 2023 12 09.
Article in English | MEDLINE | ID: mdl-37851854

ABSTRACT

Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements and auditory perception and a range of areas through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal-hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed 2 speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined based on activation related to the production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.


Subject(s)
Speech Perception , Speech , Humans , Speech/physiology , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Auditory Perception/physiology , Speech Perception/physiology
11.
Cereb Cortex ; 33(11): 6834-6851, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36682885

ABSTRACT

Listeners predict upcoming information during language comprehension. However, how this ability is implemented is still largely unknown. Here, we tested the hypothesis that language production mechanisms play a role in prediction. We studied 2 electroencephalographic correlates of predictability during speech comprehension, the pre-target alpha-beta (8-30 Hz) power decrease and the post-target N400 event-related potential effect, in a population with impaired speech-motor control, i.e. adults who stutter (AWS), compared to typically fluent adults (TFA). Participants listened to sentences that could either constrain towards a target word or not, modulating its predictability. As a complementary task, participants also performed context-driven word production. Compared to TFA, AWS not only displayed atypical neural responses in production, but, critically, they also showed a different pattern in comprehension. Specifically, while TFA showed the expected pre-target power decrease, AWS showed a power increase in frontal regions, associated with speech-motor control. In addition, the post-target N400 effect was reduced for AWS with respect to TFA. Finally, we found that production and comprehension power changes were positively correlated in TFA, but not in AWS. Overall, the results support the idea that processes and neural structures prominently devoted to speech planning also support prediction during speech comprehension.
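
Editor's note: a minimal sketch of the band-power measure at issue, Welch PSD averaged over 8-30 Hz in a pre-target window, follows. Real EEG pipelines would add epoching, artifact rejection, time-frequency decomposition and baseline correction; the sampling rate and simulated signals are placeholders.

```python
import numpy as np
from scipy.signal import welch

def alpha_beta_power(x, fs, lo=8.0, hi=30.0):
    """Mean Welch PSD in the 8-30 Hz (alpha-beta) band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    return float(pxx[(f >= lo) & (f <= hi)].mean())

# Placeholder 1-s pre-target windows from two sentence conditions
fs = 500
rng = np.random.default_rng(1)
constraining = rng.normal(size=fs)            # predictable target word
unconstraining = 1.3 * rng.normal(size=fs)    # unpredictable target word
print(alpha_beta_power(constraining, fs) < alpha_beta_power(unconstraining, fs))
```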


Subject(s)
Speech , Stuttering , Adult , Humans , Male , Female , Speech/physiology , Comprehension , Electroencephalography , Evoked Potentials
12.
Cereb Cortex ; 33(5): 2162-2173, 2023 02 20.
Article in English | MEDLINE | ID: mdl-35584784

ABSTRACT

Speech production relies on the interplay of different brain regions. Healthy aging leads to complex changes in speech processing and production. Here, we investigated how the whole-brain functional connectivity of healthy elderly individuals differs from that of young individuals. In total, 23 young (aged 24.6 ± 2.2 years) and 23 elderly (aged 64.1 ± 6.5 years) individuals performed a picture naming task during functional magnetic resonance imaging. We determined whole-brain functional connectivity matrices and used them to compute group averaged speech production networks. By including an emotionally neutral and an emotionally charged condition in the task, we characterized the speech production network during normal and emotionally challenged processing. Our data suggest that the speech production network of elderly healthy individuals is as efficient as that of young participants, but that it is more functionally segregated and more modularized. By determining key network regions, we showed that although complex network changes take place during healthy aging, the most important network regions remain stable. Furthermore, emotional distraction had a larger influence on the young group's network than on the elderly's. We demonstrated that, from the neural network perspective, elderly individuals have a higher capacity for emotion regulation based on their age-related network re-organization.
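
Editor's note: "more modularized" has a standard graph-theoretic reading: threshold the functional-connectivity matrix, treat it as a weighted graph, and compute the modularity of a community partition. A sketch with networkx follows; the threshold and the toy matrix are arbitrary, and the study's actual graph metrics may differ.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def network_modularity(conn, threshold=0.3):
    """Modularity of greedy communities in a thresholded, weighted
    functional-connectivity graph (higher = more segregated)."""
    adj = np.where(np.abs(conn) >= threshold, np.abs(conn), 0.0)
    np.fill_diagonal(adj, 0.0)
    g = nx.from_numpy_array(adj)
    parts = community.greedy_modularity_communities(g, weight="weight")
    return community.modularity(g, parts, weight="weight")

# Toy symmetric matrix standing in for a group-averaged speech network
rng = np.random.default_rng(2)
m = rng.uniform(0, 1, size=(20, 20))
print(network_modularity((m + m.T) / 2))
```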


Subject(s)
Aging , Speech , Aged , Humans , Speech/physiology , Aging/physiology , Brain/physiology , Brain Mapping , Magnetic Resonance Imaging , Neural Pathways/physiology
13.
Adv Exp Med Biol ; 1455: 257-274, 2024.
Article in English | MEDLINE | ID: mdl-38918356

ABSTRACT

Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.


Subject(s)
Speech , Humans , Speech/physiology , Speech Perception/physiology , Speech Acoustics , Periodicity
14.
Article in English | MEDLINE | ID: mdl-39230308

ABSTRACT

BACKGROUND: Approximately 50% of all young children with a developmental language disorder (DLD) also have problems with speech production. Research on speech sound development and clinical diagnostics of speech production difficulties focuses mostly on accuracy; it relates children's phonological realizations to adult models. In contrast to these relational analyses, independent analyses indicate the sounds and structures children produce irrespective of accuracy. Such analyses are likely to provide more insight into a child's phonological strengths and limitations, and may thus provide better leads for treatment. AIMS: (1) To contribute to a more comprehensive overview of the speech sound development of young Dutch children with DLD by including independent and relational analyses; (2) to develop an independent measure to assess these children's speech production capacities; and (3) to examine the relation between independent and relational speech production measures for children with DLD. METHODS & PROCEDURES: We describe the syllable structures and sounds of words elicited in two picture-naming tasks of 82 children with DLD and speech production difficulties between ages 2;7 and 6;8. The children were divided into four age groups to examine developmental patterns in a cross-sectional manner. Overviews of the children's productions on both independent and relational measures are provided. We conducted a Spearman correlation analysis to examine the relation between accuracy and independent measures. OUTCOMES & RESULTS: The overviews show these children are able to produce a greater variety of syllable structures and consonants irrespective of target positions than they can produce correctly in targets. This is especially true for children below the age of 4;5. The data indicate that children with DLD have difficulty with the production of clusters, fricatives, liquids and the velar nasal (/ŋ/). Based on existing literature and our results, we designed a Dutch version of an independent measure of word complexity, originally designed for English (word complexity measure, WCM), in which word productions receive points for specific word, syllable and sound characteristics, irrespective of accuracy. We found a strong positive correlation between accuracy scores and scores on this independent measure. CONCLUSIONS & IMPLICATIONS: The results indicate that the use of independent measures, including the proposed WCM, complements traditional relational measures by indicating which sounds and syllable structures a child can produce (irrespective of correctness). Therefore, the proposed measure can be used to monitor the speech sound development of children with DLD and to better identify treatment goals, in combination with existing relational measures. WHAT THIS PAPER ADDS: What is already known on the subject: Speech production skills can be assessed in different ways: (1) using analyses indicating the structures and sounds a child produces irrespective of accuracy, that is, performance analyses; and (2) using analyses indicating how the productions of a child relate to the adult targets, that is, accuracy analyses. In scientific research as well as in clinical practice, the focus is most often on accuracy analyses. As a consequence, we do not know if children who do not improve in accuracy scores improve in other phonological aspects that are not captured in these analyses, but can be captured by performance analyses.
What this study adds to the existing knowledge: The overviews show these children are able to produce a greater variety of syllable structures and consonants irrespective of target positions than they can produce correctly in targets. Consequently, adding performance analyses to existing accuracy analyses provides a more complete picture of a child's speech sound development. What are the potential or actual clinical implications of this work? We propose a Dutch version of a WCM, originally designed for English, in which word productions receive points for word structures, syllable structures and sounds, irrespective of accuracy. This measure may be used by Dutch clinicians to monitor the speech sound development of children with DLD and to formulate better treatment goals, in addition to accuracy measures that are already used.
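
Editor's note: to make the scoring idea concrete, here is a deliberately simplified scorer in the spirit of a word complexity measure: each production earns points for complexity features it contains, with no reference to the adult target. The feature set, the orthography-based heuristics and the phoneme classes below are illustrative only and do not reproduce the published Dutch WCM parameters.

```python
VOWELS = set("aeiouy")

def wcm_features(word):
    """Crude, orthography-based complexity features of one production."""
    return {
        "multisyllabic": sum(c in VOWELS for c in word) > 2,  # naive syllable proxy
        "cluster": any(a not in VOWELS and b not in VOWELS
                       for a, b in zip(word, word[1:])),
        "fricative": any(c in "fvszx" for c in word),
        "liquid": any(c in "lr" for c in word),
        "velar_nasal": "ŋ" in word,
    }

def wcm_score(word):
    """One point per complexity feature present, irrespective of accuracy."""
    return sum(wcm_features(word).values())

# Hypothetical child productions (broad transcriptions)
for w in ["slaŋ", "aap", "poes"]:
    print(w, wcm_score(w))
```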

15.
Child Care Health Dev ; 50(5): e13317, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39090030

ABSTRACT

OBJECTIVE: The LittlEARS® Early Speech Production Questionnaire (LEESPQ) was developed to provide professionals with valuable information about children's earliest language development and has been successfully validated in several languages. This study aimed to validate the Serbian version of the LEESPQ in typically developing children and compare the results with validation studies in other languages. METHODS: The English version of the LEESPQ was back-translated into Serbian. Parents completed the questionnaire in paper or electronic form either during the visit to the paediatric clinic or through personal contact. A total of 206 completed questionnaires were collected. Standardized expected values were calculated using a second-order polynomial model for children up to 18 months of age to create a norm curve for the Serbian language. The results were then used to determine confidence intervals, with the lower limit being the critical limit for typical speech-language development. Finally, the results were compared with German and Canadian English developmental norms. RESULTS: The Serbian LEESPQ version showed high homogeneity (r = .622) and internal consistency (α = .882), indicating that it almost exclusively measures speech production ability. No significant difference in total score was found between male and female infants (U = 4429.500, p = .090), so it can be considered a gender-independent questionnaire. The results of the comparison between Serbian and German (U = 645.500, p = .673) and Serbian and English norm curves (U = 652.000, p = .725) show that the LEESPQ can be applied to different population groups, regardless of linguistic, cultural or sociological differences. CONCLUSION: The LEESPQ is a valid, age-dependent and gender-independent questionnaire suitable for assessing early speech development in children aged from birth to 18 months.
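
Editor's note: the norming procedure described (a second-order polynomial expected-value curve with a lower confidence bound serving as the critical limit) is easy to sketch. The data points and the 1.96-SD band below are invented placeholders standing in for the Serbian sample and the study's exact interval construction.

```python
import numpy as np

# Placeholder ages (months) and LEESPQ total scores
age = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17])
score = np.array([2, 5, 9, 13, 16, 19, 21, 23, 24])

# Second-order polynomial norm curve
norm = np.poly1d(np.polyfit(age, score, deg=2))

# Lower confidence bound as the critical limit for typical development
# (the 1.96-SD residual band is an assumption, not the study's method)
resid_sd = np.std(score - norm(age), ddof=3)
critical = lambda months: norm(months) - 1.96 * resid_sd

print(round(float(norm(12)), 1), round(float(critical(12)), 1))
```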


Subject(s)
Language Development , Humans , Male , Female , Serbia , Infant , Surveys and Questionnaires/standards , Reproducibility of Results , Child Language , Speech Production Measurement , Translations
16.
Cleft Palate Craniofac J ; : 10556656241274242, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39363863

ABSTRACT

AIMS: To provide an overview of the Cleft Outcomes Research NETwork (CORNET) and the CORNET Speech and Surgery study. The study is (1) comparing speech outcomes and fistula rate between two common palate repair techniques, straight-line closure with intra-velar veloplasty (IVVP) and Furlow Double-Opposing Z-palatoplasty (Furlow Z-plasty); (2) summarizing practice variation in the utilization of early intervention speech-language (EI-SL) services; and (3) exploring the association between EI-SL services and speech outcomes. DESIGN: Prospective, longitudinal, observational, comparative effectiveness, multi-center. SITES: Twenty sites across the United States. PARTICIPANTS: One thousand two hundred forty-seven children with cleft palate with or without cleft lip (CP ± L). Children with submucous cleft palate or bilateral sensorineural severe to profound hearing loss were excluded from participation. INTERVENTIONS: Straight-line closure with IVVP or Furlow Z-plasty based on each surgeon's standard clinical protocol. MAIN OUTCOME MEASURE(S): The primary study outcome is perceptual ratings of hypernasality judged from speech samples collected at 3 years of age. Secondary outcomes are fistula rate, measures of speech production, and quality of life. The statistical analyses will include generalized estimating equations with propensity score weighting to address potential confounders. CURRENT PROGRESS: Recruitment was completed in February 2023; 80% of children have been retained to date. Five hundred sixty-two children have completed their final 3-year speech assessment. Final study activities will end in early 2025. CONCLUSIONS: This study addresses long-standing questions related to the effectiveness of the two most common palatoplasty approaches and describes CORNET, which provides an infrastructure that will streamline future studies in all areas of cleft care.
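
Editor's note: the planned analysis (generalized estimating equations with propensity-score weighting) can be sketched with statsmodels and scikit-learn. The column names, the confounder set, the binary hypernasality outcome and the simulated data are all placeholders; the study's actual models will differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# Simulated placeholder data: treatment, confounders, outcome, site
rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "furlow": rng.integers(0, 2, n),       # 1 = Furlow, 0 = straight-line+IVVP
    "age_mo": rng.normal(12, 2, n),
    "cleft_width": rng.normal(10, 3, n),
    "site": rng.integers(0, 20, n),        # clustering unit
    "hypernasal": rng.integers(0, 2, n),   # binary speech outcome
})

# 1) Propensity of receiving Furlow given confounders -> inverse-prob. weights
conf = df[["age_mo", "cleft_width"]]
ps = LogisticRegression().fit(conf, df["furlow"]).predict_proba(conf)[:, 1]
ipw = np.where(df["furlow"] == 1, 1 / ps, 1 / (1 - ps))

# 2) GEE for the binary outcome, clustered by site, propensity-weighted
X = sm.add_constant(df[["furlow"]])
gee = sm.GEE(df["hypernasal"], X, groups=df["site"],
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable(),
             weights=ipw)
print(gee.fit().summary())
```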

17.
Cleft Palate Craniofac J ; : 10556656231225575, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38408738

ABSTRACT

OBJECTIVE: To investigate speech development of children aged 5 and 10 years with repaired unilateral cleft lip and palate (UCLP) and identify speech characteristics when speech proficiency is not at 'peer level' at 10 years; to estimate how the number of speech therapy visits is related to speech proficiency at 10 years; and to identify factors predictive of whether a child's speech proficiency at 10 years is at 'peer level' or not. DESIGN: Longitudinal complete datasets from the Scandcleft project. PARTICIPANTS: 320 children from nine cleft palate teams in five countries, operated on with one of four surgical methods. INTERVENTIONS: Secondary velopharyngeal surgery (VP-surgery) and number of speech therapy visits (ST-visits), a proxy for speech intervention. MAIN OUTCOME MEASURES: 'Peer level' of percentage of consonants correct (PCC, > 91%) and the composite score of velopharyngeal competence (VPC-Sum, 0-1). RESULTS: Speech proficiency improved, with only 23% of the participants at 'peer level' at 5 years, compared to 56% at 10 years. A poorer PCC score was the most sensitive marker for the 44% below 'peer level' at 10 years of age. The best predictor of 'peer level' speech proficiency at 10 years was speech proficiency at 5 years. A high number of ST-visits did not improve the probability of achieving 'peer level' speech, and many children seemed to have received excessive amounts of ST-visits without substantial improvement. CONCLUSIONS: It is important to strive for speech at 'peer level' before age 5. Criteria for speech therapy intervention and for the methods used need to be evidence-based.

18.
Cleft Palate Craniofac J ; 61(5): 844-853, 2024 May.
Article in English | MEDLINE | ID: mdl-36594527

ABSTRACT

OBJECTIVE: The objective of this study was to use data from Smile Train's global partner hospital network to identify patient characteristics that increase the odds of fistula and affect postoperative speech outcomes. DESIGN: Multi-institution, retrospective review of the Smile Train Express database. SETTING: 1110 Smile Train partner hospitals. PATIENTS/PARTICIPANTS: 2560 patients. INTERVENTIONS: N/A. MAIN OUTCOME MEASURE(S): Fistula occurrence, nasal emission, audible nasal emission with amplification (through a straw or tube) only, nasal rustle/turbulence, consistent nasal emission, consistent nasal emission due to velopharyngeal dysfunction, rating of resonance, rating of intelligibility, recommendation for further velopharyngeal dysfunction assessment, and follow-up velopharyngeal dysfunction surgery. RESULTS: The patients were 46.6% female and 27.5% underweight by WHO standards. Average age was 24.7 ± 0.5 months at palatoplasty and 6.8 ± 0.1 years at speech assessment. Underweight patients had a higher incidence of hypernasality and decreased speech intelligibility. Palatoplasty performed before 6 months or after 18 months of age was associated with higher rates of affected nasality and intelligibility and of fistula formation. The same findings were seen in Central/South American and African patients, who also showed increased rates of velopharyngeal dysfunction and fistula surgery compared to Asian patients. Palatoplasty technique primarily involved one-stage midline repair. CONCLUSIONS: Age and nutrition status were significant predictors of speech outcomes and fistula occurrence following palatoplasty. Outcomes were also significantly impacted by location, demonstrating the need to cultivate longitudinal initiatives to reduce regional disparities. These results underscore the importance of Smile Train's continual expansion of accessible surgical intervention, nutritional support, and speech-language care.


Subject(s)
Cleft Palate , Fistula , Velopharyngeal Insufficiency , Humans , Female , Male , Cleft Palate/surgery , Cleft Palate/complications , Thinness/complications , Treatment Outcome , Speech , Retrospective Studies , Speech Intelligibility , Palate, Soft/surgery
19.
Cleft Palate Craniofac J ; : 10556656241242699, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629137

ABSTRACT

OBJECTIVE: The inaugural Cleft Summit aimed to unite experts and foster interdisciplinary collaboration, seeking a collective understanding of velopharyngeal insufficiency (VPI) management. DESIGN: An interactive debate and conversation between a multidisciplinary cleft care team on VPI management. SETTING: A two-hour discussion within a four-day comprehensive cleft care workshop (CCCW). PARTICIPANTS: Thirty-two global leaders from various cleft disciplines. INTERVENTIONS: Cleft Summit that allows for meaningful interdisciplinary collaboration and knowledge exchange. MAIN OUTCOME MEASURES: Ability to reach consensus on a unified statement for VPI management. RESULTS: Participants agreed that a patient with significant VPI and a dynamic velum should first receive a surgery that lengthens the velum to optimize patient outcome. A global, multicenter prospective study should be done to test this hypothesis. CONCLUSION: The 1st Cleft Summit successfully distilled global expertise into actionable best-practice guidelines through iterative discussions, fostering interdisciplinary collaboration and paving the way for a transformative multi-center prospective study on VPI care.

20.
Cogn Process ; 25(1): 89-106, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37995082

ABSTRACT

Laughter is one of the most common non-verbal vocalisations; contrary to earlier assumptions, it may also act as a signal of bonding, affection, emotional regulation, agreement or empathy (Scott et al. Trends Cogn Sci 18:618-620, 2014). Although previous research agrees that laughter does not form a uniform category, different types of laughter have been defined differently across individual studies. Owing to these varying definitions and methodologies, the results of previous examinations have often been contradictory. Moreover, the laughs analysed were often recorded in controlled, artificial situations, and less is known about laughter in spontaneous social conversations. The aim of the present study is therefore to examine the acoustic realisation and automatic classification of laughter occurring in human interactions, according to whether listeners judge it to be voluntary or involuntary. The study consists of three parts using a multi-method approach. First, in a perception task, participants decided whether a given laugh seemed involuntary or voluntary. Second, we acoustically analysed the laughter samples that at least 66.6% of listeners had judged to be voluntary or involuntary. Third, all sound samples were grouped into the two categories by an automatic classifier. The results showed that listeners were able to sort laughter extracted from spontaneous conversation into the two types, and that the distinction was also possible through automatic classification. In addition, there were significant differences in acoustic parameters between the two groups of laughter. Although the voluntary/involuntary distinction thus emerges from everyday, spontaneous conversations in both perception and acoustics, the acoustic features of the two categories often overlap. These results enrich our knowledge of laughter and help to describe and explore the diversity of non-verbal vocalisations.
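
Editor's note: to illustrate what automatic classification of the rater-labelled clips could look like, here is a generic sketch: summarize each clip with MFCC statistics and fit an SVM. The synthetic signals, the feature choice and the labels are placeholders; the study's actual acoustic parameters and classifier are not specified in the abstract.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(y, sr):
    """Mean and std of 13 MFCCs over a clip: a generic acoustic
    summary, not the study's exact feature set."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for rater-labelled laughter clips
# (0 = voluntary, 1 = involuntary, per >= 66.6% listener agreement)
sr, rng = 22050, np.random.default_rng(4)
clips = [np.sin(2 * np.pi * 220 * np.linspace(0, 1, sr)).astype(np.float32),
         rng.normal(size=sr).astype(np.float32)]
labels = np.array([0, 1])

X = np.array([clip_features(y, sr) for y in clips])
clf = SVC().fit(X, labels)
print(clf.predict(X))
```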


Subject(s)
Laughter , Humans , Laughter/physiology , Laughter/psychology , Communication , Empathy , Acoustics , Sound