Results 1 - 20 of 680
1.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subjects
Auditory Perception/physiology , Cochlear Implants , Critical Period, Psychological , Language Development , Animals , Auditory Perceptual Disorders/etiology , Brain/growth & development , Cochlear Implantation , Comprehension , Cues , Deafness/congenital , Deafness/physiopathology , Deafness/psychology , Deafness/surgery , Equipment Design , Humans , Language Development Disorders/etiology , Language Development Disorders/prevention & control , Learning/physiology , Neuronal Plasticity , Photic Stimulation
2.
Proc Natl Acad Sci U S A ; 120(42): e2300255120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37819985

ABSTRACT

Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
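The decoder itself is a deep network and is not reproduced here; as a rough, hedged illustration of the underlying idea (mapping time-lagged neural activity to an acoustic representation and reading the learned weights as a temporal receptive field), a minimal linear sketch on synthetic data might look like the following. All array shapes and names are invented for the example.

```python
# Minimal sketch (not the paper's deep-learning architecture): decode an
# acoustic feature from time-lagged neural activity with ridge regression,
# then read the learned weights as a crude temporal "receptive field".
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_time, n_chan = 5000, 64                       # toy sizes
neural = rng.standard_normal((n_time, n_chan))  # cortical recordings (placeholder)
envelope = rng.standard_normal(n_time)          # acoustic target (placeholder)

lags = range(-10, 11)                           # samples; sign = lead/lag direction
X = np.column_stack([np.roll(neural, lag, axis=0) for lag in lags])

model = Ridge(alpha=10.0).fit(X, envelope)
trf = model.coef_.reshape(len(lags), n_chan)    # weight per (lag, channel)
print("strongest lag index:", np.abs(trf).mean(axis=1).argmax())
```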


Subjects
Speech , Temporal Lobe , Humans , Feedback , Acoustic Stimulation
3.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38741267

ABSTRACT

The role of the left temporoparietal cortex in speech production has been extensively studied during native language processing, proving crucial in controlled lexico-semantic retrieval under varying cognitive demands. Yet, its role in bilinguals, fluent in both native and second languages, remains poorly understood. Here, we employed continuous theta burst stimulation (cTBS) to disrupt neural activity in the left posterior middle temporal gyrus (pMTG) and angular gyrus (AG) while Italian-Friulian bilinguals performed a cued picture-naming task. The task involved between-language (naming objects in Italian or Friulian) and within-language blocks (naming objects ["knife"] or associated actions ["cut"] in a single language) in which participants could either maintain (non-switch) or change (switch) instructions based on cues. During within-language blocks, cTBS over the pMTG yielded faster naming for high-demanding switch trials, while cTBS to the AG elicited slower latencies in low-demanding non-switch trials. No cTBS effects were observed in the between-language block. Our findings suggest a causal involvement of the left pMTG and AG in lexico-semantic processing across languages, with distinct contributions to controlled vs. "automatic" retrieval, respectively. However, they do not support the existence of shared control mechanisms for within- and between-language production. Altogether, these results inform neurobiological models of semantic control in bilinguals.


Subjects
Multilingualism , Parietal Lobe , Speech , Temporal Lobe , Transcranial Magnetic Stimulation , Humans , Male , Temporal Lobe/physiology , Female , Young Adult , Adult , Parietal Lobe/physiology , Speech/physiology , Cues
4.
Dev Sci ; 27(1): e13428, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37381667

ABSTRACT

The prevalent "core phonological deficit" model of dyslexia proposes that the reading and spelling difficulties characterizing affected children stem from prior developmental difficulties in processing speech sound structure, for example, perceiving and identifying syllable stress patterns, syllables, rhymes and phonemes. Yet spoken word production appears normal. This suggests an unexpected disconnect between speech input and speech output processes. Here we investigated the output side of this disconnect from a speech rhythm perspective by measuring the speech amplitude envelope (AE) of multisyllabic spoken phrases. The speech AE contains crucial information regarding stress patterns, speech rate, tonal contrasts and intonational information. We created a novel computerized speech copying task in which participants copied aloud familiar spoken targets like "Aladdin." Seventy-five children with and without dyslexia were tested, some of whom were also receiving an oral intervention designed to enhance multi-syllabic processing. Similarity of the child's productions to the target AE was computed using correlation and mutual information metrics. Similarity of pitch contour, another acoustic cue to speech rhythm, was used for control analyses. Children with dyslexia were significantly worse at producing the multi-syllabic targets as indexed by both similarity metrics for computing the AE. However, children with dyslexia were not different from control children in producing pitch contours. Accordingly, the spoken production of multisyllabic phrases by children with dyslexia is atypical regarding the AE. Children with dyslexia may not appear to listeners to exhibit speech production difficulties because their pitch contours are intact. RESEARCH HIGHLIGHTS: Speech production of syllable stress patterns is atypical in children with dyslexia. Children with dyslexia are significantly worse at producing the amplitude envelope of multi-syllabic targets compared to both age-matched and reading-level-matched control children. No group differences were found for pitch contour production between children with dyslexia and age-matched control children. It may be difficult to detect speech output problems in dyslexia as pitch contours are relatively accurate.


Subjects
Dyslexia , Speech Perception , Child , Humans , Speech , Reading , Phonetics
5.
Brain ; 146(5): 1775-1790, 2023 05 02.
Article in English | MEDLINE | ID: mdl-36746488

ABSTRACT

Classical neural architecture models of speech production propose a single system centred on Broca's area coordinating all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca's area is involved in motor speech coordination and that there is only one coordination network. Drawing on a wide range of evidence, here we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song are coordinated by a hierarchically organized dorsolateral system while supralaryngeal articulation at the phonetic/syllabic level is coordinated by a more ventral system posterior to Broca's area. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.


Subjects
Speech , Voice , Humans , Broca Area , Phonetics , Language
6.
Brain Topogr ; 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38261272

ABSTRACT

Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. This would theoretically allow for brain-computer interfaces (BCIs) that are capable of decoding continuous speech by training classifiers based on the activity in the sensorimotor cortex related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of the same phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed in the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes. Furthermore, they support the notion that a speech BCI may be feasible based on machine learning algorithms trained on individual phonemes using intracranial electrode grids.
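The core classification logic (train an SVM on single-phoneme trials, then combine single-phoneme predictions to label paired-phoneme trials) can be sketched on synthetic data. This toy version treats a paired trial as two separately classifiable patterns; it illustrates the idea only and is not the study's fMRI pipeline.

```python
# Toy sketch: an SVM trained on single-phoneme patterns is applied to the two
# halves of a paired-phoneme trial, and the pair label is the combination of
# the two single-phoneme predictions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_vox, n_per_class = 200, 40
prototypes = {p: rng.standard_normal(n_vox) for p in ["p", "t", "k"]}  # invented phonemes

def trial(phoneme, noise=1.0):
    return prototypes[phoneme] + noise * rng.standard_normal(n_vox)

# train on single-phoneme trials
X = np.array([trial(p) for p in prototypes for _ in range(n_per_class)])
y = np.array([p for p in prototypes for _ in range(n_per_class)])
clf = SVC(kernel="linear").fit(X, y)

# paired phonemes: here simply two single-phoneme patterns presented in sequence
pairs = [("p", "t"), ("t", "k"), ("k", "p")]
correct = 0
for first, second in pairs * 20:
    half1, half2 = trial(first), trial(second)
    pred = (clf.predict(half1[None, :])[0], clf.predict(half2[None, :])[0])
    correct += pred == (first, second)
print("paired accuracy:", correct / (len(pairs) * 20))
```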

7.
Cereb Cortex ; 33(5): 2162-2173, 2023 02 20.
Article in English | MEDLINE | ID: mdl-35584784

ABSTRACT

Speech production relies on the interplay of different brain regions. Healthy aging leads to complex changes in speech processing and production. Here, we investigated how the whole-brain functional connectivity of healthy elderly individuals differs from that of young individuals. In total, 23 young (aged 24.6 ± 2.2 years) and 23 elderly (aged 64.1 ± 6.5 years) individuals performed a picture naming task during functional magnetic resonance imaging. We determined whole-brain functional connectivity matrices and used them to compute group averaged speech production networks. By including an emotionally neutral and an emotionally charged condition in the task, we characterized the speech production network during normal and emotionally challenged processing. Our data suggest that the speech production network of elderly healthy individuals is as efficient as that of young participants, but that it is more functionally segregated and more modularized. By determining key network regions, we showed that although complex network changes take place during healthy aging, the most important network regions remain stable. Furthermore, emotional distraction had a larger influence on the young group's network than on the elderly's. We demonstrated that, from the neural network perspective, elderly individuals have a higher capacity for emotion regulation based on their age-related network re-organization.
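The network measures mentioned (segregation, modularity, key regions) are typically computed from a functional connectivity matrix; the following is a hedged sketch of that style of analysis on toy data using networkx, not the study's actual pipeline, atlas, or thresholding scheme.

```python
# Sketch: build a functional connectivity matrix from ROI time series,
# threshold it, and compute modularity and global efficiency.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
ts = rng.standard_normal((240, 90))          # time points x ROIs (toy data)
fc = np.corrcoef(ts.T)                        # ROI-by-ROI correlation matrix
np.fill_diagonal(fc, 0)

# keep the strongest 10% of positive edges and build an undirected graph
thresh = np.quantile(fc[fc > 0], 0.90)
G = nx.from_numpy_array(np.where(fc >= thresh, fc, 0.0))

communities = greedy_modularity_communities(G, weight="weight")
print("modules:", len(communities))
print("modularity Q:", round(modularity(G, communities, weight="weight"), 3))
print("global efficiency:", round(nx.global_efficiency(G), 3))
```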


Subjects
Aging , Speech , Aged , Humans , Speech/physiology , Aging/physiology , Brain/physiology , Brain Mapping , Magnetic Resonance Imaging , Neural Pathways/physiology
8.
Cereb Cortex ; 33(11): 6834-6851, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36682885

ABSTRACT

Listeners predict upcoming information during language comprehension. However, how this ability is implemented is still largely unknown. Here, we tested the hypothesis proposing that language production mechanisms have a role in prediction. We studied 2 electroencephalographic correlates of predictability during speech comprehension-pre-target alpha-beta (8-30 Hz) power decrease and the post-target N400 event-related potential effect-in a population with impaired speech-motor control, i.e. adults who stutter (AWS), compared to typically fluent adults (TFA). Participants listened to sentences that could either constrain towards a target word or not, modulating its predictability. As a complementary task, participants also performed context-driven word production. Compared to TFA, AWS not only displayed atypical neural responses in production, but, critically, they showed a different pattern also in comprehension. Specifically, while TFA showed the expected pre-target power decrease, AWS showed a power increase in frontal regions, associated with speech-motor control. In addition, the post-target N400 effect was reduced for AWS with respect to TFA. Finally, we found that production and comprehension power changes were positively correlated in TFA, but not in AWS. Overall, the results support the idea that processes and neural structures prominently devoted to speech planning also support prediction during speech comprehension.
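The pre-target alpha-beta (8-30 Hz) power measure can be illustrated with a minimal Welch-based band-power computation; the epoch lengths, sampling rate, and baseline choice below are placeholders, not the study's parameters.

```python
# Minimal illustration: quantify an alpha-beta (8-30 Hz) power change in a
# pre-target window relative to a baseline window using Welch's method.
import numpy as np
from scipy.signal import welch

def band_power(x, sr, fmin=8.0, fmax=30.0):
    f, pxx = welch(x, fs=sr, nperseg=int(sr))     # 1-s analysis windows
    return pxx[(f >= fmin) & (f <= fmax)].mean()

sr = 500
rng = np.random.default_rng(0)
baseline = rng.standard_normal(2 * sr)            # 2-s baseline epoch (toy)
pre_target = rng.standard_normal(2 * sr)          # 2-s pre-target epoch (toy)

change_db = 10 * np.log10(band_power(pre_target, sr) / band_power(baseline, sr))
print(f"alpha-beta power change: {change_db:.2f} dB")  # negative = decrease
```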


Subjects
Speech , Stuttering , Adult , Humans , Male , Female , Speech/physiology , Comprehension , Electroencephalography , Evoked Potentials
9.
Cereb Cortex ; 33(24): 11517-11525, 2023 12 09.
Article in English | MEDLINE | ID: mdl-37851854

ABSTRACT

Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements and auditory perception and a range of areas through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed 2 speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined based on activation related to production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.


Subjects
Speech Perception , Speech , Humans , Speech/physiology , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Auditory Perception/physiology , Speech Perception/physiology
10.
Adv Exp Med Biol ; 1455: 257-274, 2024.
Article in English | MEDLINE | ID: mdl-38918356

ABSTRACT

Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.


Subjects
Speech , Humans , Speech/physiology , Speech Perception/physiology , Speech Acoustics , Periodicity
11.
Article in English | MEDLINE | ID: mdl-39230308

ABSTRACT

BACKGROUND: Approximately 50% of all young children with a developmental language disorder (DLD) also have problems with speech production. Research on speech sound development and clinical diagnostics of speech production difficulties focuses mostly on accuracy; it relates children's phonological realizations to adult models. In contrast to these relational analyses, independent analyses indicate the sounds and structures children produce irrespective of accuracy. Such analyses are likely to provide more insight into a child's phonological strengths and limitations, and may thus provide better leads for treatment. AIMS: (1) To contribute to a more comprehensive overview of the speech sound development of young Dutch children with DLD by including independent and relational analyses, (2) to develop an independent measure to assess these children's speech production capacities; and (3) to examine the relation between independent and relational speech production measures for children with DLD. METHODS & PROCEDURES: We describe the syllable structures and sounds of words elicited in two picture-naming tasks of 82 children with DLD and speech production difficulties between ages 2;7 and 6;8. The children were divided into four age groups to examine developmental patterns in a cross-sectional manner. Overviews of the children's productions on both independent and relational measures are provided. We conducted a Spearman correlation analysis to examine the relation between accuracy and independent measures. OUTCOMES & RESULTS: The overviews show that these children are able to produce a greater variety of syllable structures and consonants irrespective of target positions than they can produce correctly in targets. This is especially true for children below the age of 4;5. The data indicate that children with DLD have difficulty with the production of clusters, fricatives, liquids and the velar nasal (/ŋ/). Based on existing literature and our results, we designed a Dutch version of an independent measure of word complexity, originally designed for English (word complexity measure, WCM), in which word productions receive points for specific word, syllable and sound characteristics, irrespective of accuracy. We found a strong positive correlation between accuracy scores and scores on this independent measure. CONCLUSIONS & IMPLICATIONS: The results indicate that the use of independent measures, including the proposed WCM, complements traditional relational measures by indicating which sounds and syllable structures a child can produce (irrespective of correctness). Therefore, the proposed measure can be used to monitor the speech sound development of children with DLD and to better identify treatment goals, in combination with existing relational measures. WHAT THIS PAPER ADDS: What is already known on the subject Speech production skills can be assessed in different ways: (1) using analyses indicating the structures and sounds a child produces irrespective of accuracy, that is, performance analyses; and (2) using analyses indicating how the productions of a child relate to the adult targets, that is, accuracy analyses. In scientific research as well as in clinical practice the focus is most often on accuracy analyses. As a consequence, we do not know if children who do not improve in accuracy scores improve in other phonological aspects that are not captured in these analyses, but can be captured by performance analyses.
What this study adds to the existing knowledge The overviews show these children are able to produce a greater variety of syllable structures and consonants irrespective of target positions than they can produce correctly in targets. Consequently, adding performance analyses to existing accuracy analyses provides a more complete picture of a child's speech sound development. What are the potential or actual clinical implications of this work? We propose a Dutch version of a WCM, originally designed for English, in which word productions receive points for word structures, syllable structures and sounds, irrespective of accuracy. This measure may be used by Dutch clinicians to monitor the speech sound development of children with DLD and to formulate better treatment goals, in addition to accuracy measures that are already used.
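To make the idea of an independent word complexity measure concrete, here is a toy scorer in which a production earns points for selected word, syllable, and sound properties irrespective of accuracy. The point scheme and symbol sets are invented for the example and are not the authors' Dutch WCM.

```python
# Toy WCM-style scorer: points for multisyllabicity, non-initial stress,
# onset clusters, closed syllables, fricatives, liquids, and the velar nasal.
FRICATIVES = set("fvszxɣʃʒ")
LIQUIDS = set("lr")
VOWELS = "aeiouɪɛɔʏəy"

def toy_wcm(syllables, stress_index=0):
    """`syllables`: broad phonetic strings per syllable, e.g. ['sxɪl', 'pat']."""
    score = 0
    if len(syllables) >= 2:                        # multisyllabic word
        score += 1
    if len(syllables) >= 2 and stress_index != 0:  # non-initial stress
        score += 1
    for syl in syllables:
        first_vowel = next((i for i, c in enumerate(syl) if c in VOWELS), len(syl))
        if first_vowel >= 2:                       # onset consonant cluster
            score += 1
        if syl and syl[-1] not in VOWELS:          # closed syllable
            score += 1
        score += sum(c in FRICATIVES for c in syl) # fricatives
        score += sum(c in LIQUIDS for c in syl)    # liquids
        score += syl.count("ŋ")                    # velar nasal
    return score

print(toy_wcm(["sxɪl", "pat"]))  # two syllables, onset cluster, fricatives -> higher score
```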

12.
Child Care Health Dev ; 50(5): e13317, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39090030

ABSTRACT

OBJECTIVE: The LittlEARS® Early Speech Production Questionnaire (LEESPQ) was developed to provide professionals with valuable information about children's earliest language development and has been successfully validated in several languages. This study aimed to validate the Serbian version of the LEESPQ in typically developing children and compare the results with validation studies in other languages. METHODS: The English version of the LEESPQ was back-translated into Serbian. Parents completed the questionnaire in paper or electronic form either during the visit to the paediatric clinic or through personal contact. A total of 206 completed questionnaires were collected. Standardized expected values were calculated using a second-order polynomial model for children up to 18 months of age to create a norm curve for the Serbian language. The results were then used to determine confidence intervals, with the lower limit being the critical limit for typical speech-language development. Finally, the results were compared with German and Canadian English developmental norms. RESULTS: The Serbian LEESPQ version showed high homogeneity (r = .622) and internal consistency (α = .882), indicating that it almost exclusively measures speech production ability. No significant difference in total score was found between male and female infants (U = 4429.500, p = .090), so it can be considered a gender-independent questionnaire. The results of the comparison between Serbian and German (U = 645.500, p = .673) and Serbian and English norm curves (U = 652.000, p = .725) show that the LEESPQ can be applied to different population groups, regardless of linguistic, cultural or sociological differences. CONCLUSION: The LEESPQ is a valid, age-dependent and gender-independent questionnaire suitable for assessing early speech development in children aged from birth to 18 months.
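The norm-curve construction described (a second-order polynomial of total score on age, with a lower confidence bound as the critical limit for typical development) can be sketched as follows; the data, the one-sided bound, and all values are illustrative assumptions, not the Serbian norms.

```python
# Sketch: fit a degree-2 polynomial norm curve of total score on age, then use
# a lower prediction bound as the critical limit.
import numpy as np

rng = np.random.default_rng(0)
age_months = rng.uniform(0, 18, 206)
score = 2 + 1.6 * age_months - 0.02 * age_months**2 + rng.normal(0, 2, 206)  # toy scores

coeffs = np.polyfit(age_months, score, deg=2)           # second-order polynomial
norm = np.poly1d(coeffs)

residual_sd = np.std(score - norm(age_months), ddof=3)  # 3 fitted parameters
ages = np.arange(0, 19)
expected = norm(ages)
critical = expected - 1.645 * residual_sd                # approx. one-sided 95% bound

for a, e, c in zip(ages, expected, critical):
    print(f"{a:2d} mo  expected {e:5.1f}  critical limit {c:5.1f}")
```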


Subjects
Language Development , Humans , Male , Female , Serbia , Infant , Surveys and Questionnaires/standards , Reproducibility of Results , Child Language , Speech Production Measurement , Translations
13.
Cleft Palate Craniofac J ; : 10556656231225575, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38408738

ABSTRACT

OBJECTIVE: To investigate speech development of children aged 5 and 10 years with repaired unilateral cleft lip and palate (UCLP) and identify speech characteristics when speech proficiency is not at 'peer level' at 10 years. To estimate how the number of speech therapy visits is related to speech proficiency at 10 years, and what factors are predictive of whether a child's speech proficiency at 10 years is at 'peer level' or not. DESIGN: Longitudinal complete datasets from the Scandcleft project. PARTICIPANTS: 320 children from nine cleft palate teams in five countries, operated on with one out of four surgical methods. INTERVENTIONS: Secondary velopharyngeal surgery (VP-surgery) and number of speech therapy visits (ST-visits), a proxy for speech intervention. MAIN OUTCOME MEASURES: 'Peer level' of percentage of consonants correct (PCC, > 91%) and the composite score of velopharyngeal competence (VPC-Sum, 0-1). RESULTS: Speech proficiency improved, with only 23% of the participants at 'peer level' at 5 years, compared to 56% at 10 years. A poorer PCC score was the most sensitive marker for the 44% below 'peer level' at 10 years of age. The best predictor of 'peer level' speech proficiency at 10 years was speech proficiency at 5 years. A high number of ST-visits received did not improve the probability of achieving 'peer level' speech, and many children seemed to have received excessive amounts of ST-visits without substantial improvement. CONCLUSIONS: It is important to strive for speech at 'peer level' before age 5. Criteria for speech therapy intervention and for the methods used need to be evidence-based.

14.
Cleft Palate Craniofac J ; 61(5): 844-853, 2024 May.
Article in English | MEDLINE | ID: mdl-36594527

ABSTRACT

OBJECTIVE: The objective of this study was to use data from Smile Train's global partner hospital network to identify patient characteristics that increase odds of fistula and postoperative speech outcomes. DESIGN: Multi-institution, retrospective review of Smile Train Express database. SETTING: 1110 Smile Train partner hospitals. PATIENTS/PARTICIPANTS: 2560 patients. INTERVENTIONS: N/A. MAIN OUTCOME MEASURE(S): Fistula occurrence, nasal emission, audible nasal emission with amplification (through a straw or tube) only, nasal rustle/turbulence, consistent nasal emission, consistent nasal emission due to velopharyngeal dysfunction, rating of resonance, rating of intelligibility, recommendation for further velopharyngeal dysfunction assessment, and follow-up velopharyngeal dysfunction surgery. RESULTS: The patients were 46.6% female and 27.5% underweight by WHO standards. Average age at palatoplasty was 24.7 ± 0.5 months and at speech assessment was 6.8 ± 0.1 years. Underweight patients had higher incidence of hypernasality and decreased speech intelligibility. Palatoplasty when under 6 months or over 18 months of age had higher rates of affected nasality, intelligibility, and fistula formation. The same findings were seen in Central/South American and African patients, in addition to increased velopharyngeal dysfunction and fistula surgery compared to Asian patients. Palatoplasty technique primarily involved one-stage midline repair. CONCLUSIONS: Age and nutrition status were significant predictors of speech outcomes and fistula occurrence following palatoplasty. Outcomes were also significantly impacted by location, demonstrating the need to cultivate longitudinal initiatives to reduce regional disparities. These results underscore the importance of Smile Train's continual expansion of accessible surgical intervention, nutritional support, and speech-language care.
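The kind of analysis reported (odds of fistula and poorer speech outcomes as a function of patient characteristics) is commonly estimated with logistic regression; a hedged sketch on synthetic data with invented column names follows. It is not the authors' model specification.

```python
# Sketch: estimate odds ratios for fistula occurrence from patient
# characteristics with a logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2560
df = pd.DataFrame({
    "age_months": rng.uniform(3, 60, n),                     # age at palatoplasty
    "underweight": rng.integers(0, 2, n),                    # WHO underweight flag
    "region": rng.choice(["Asia", "Africa", "CS_America"], n),
})
logit_p = -2.0 + 0.02 * df.age_months + 0.5 * df.underweight  # toy true model
df["fistula"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("fistula ~ age_months + underweight + C(region)", data=df).fit(disp=0)
print(np.exp(model.params))        # odds ratios
print(np.exp(model.conf_int()))    # 95% confidence intervals for the odds ratios
```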


Subjects
Cleft Palate , Fistula , Velopharyngeal Insufficiency , Humans , Female , Male , Cleft Palate/surgery , Cleft Palate/complications , Thinness/complications , Treatment Outcome , Speech , Retrospective Studies , Speech Intelligibility , Palate, Soft/surgery
15.
Cleft Palate Craniofac J ; : 10556656241242699, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629137

ABSTRACT

OBJECTIVE: The inaugural Cleft Summit aimed to unite experts and foster interdisciplinary collaboration, seeking a collective understanding of velopharyngeal insufficiency (VPI) management. DESIGN: An interactive debate and conversation between a multidisciplinary cleft care team on VPI management. SETTING: A two-hour discussion within a four-day comprehensive cleft care workshop (CCCW). PARTICIPANTS: Thirty-two global leaders from various cleft disciplines. INTERVENTIONS: Cleft Summit that allows for meaningful interdisciplinary collaboration and knowledge exchange. MAIN OUTCOME MEASURES: Ability to reach consensus on a unified statement for VPI management. RESULTS: Participants agreed that a patient with significant VPI and a dynamic velum should first receive a surgery that lengthens the velum to optimize patient outcome. A global, multicenter prospective study should be done to test this hypothesis. CONCLUSION: The 1st Cleft Summit successfully distilled global expertise into actionable best-practice guidelines through iterative discussions, fostering interdisciplinary collaboration and paving the way for a transformative multi-center prospective study on VPI care.

16.
Cogn Process ; 25(1): 89-106, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37995082

ABSTRACT

Laughter is one of the most common non-verbal vocalisations; however, contrary to previous assumptions, it may also act as a signal of bonding, affection, emotional regulation, agreement or empathy (Scott et al. Trends Cogn Sci 18:618-620, 2014). Although previous research agrees that laughter does not form a uniform group in many respects, different types of laughter have been defined differently across individual studies. Because of these varying definitions and methodologies, the results of previous examinations were often contradictory. Moreover, the analysed laughs were often recorded in controlled, artificial situations, and less is known about laughs from social conversations. The aim of the present study is therefore to examine the acoustic realisation and the automatic classification of laughter occurring in human interactions, according to whether listeners consider it voluntary or involuntary. The study consists of three parts using a multi-method approach. First, in a perception task, participants had to decide whether a given laugh seemed rather involuntary or voluntary. In the second part of the experiment, we analysed those laughter samples that at least 66.6% of listeners had judged to be voluntary or involuntary. In the third part, all the sound samples were grouped into the two categories by an automatic classifier. The results showed that listeners were able to distinguish laughter extracted from spontaneous conversation into two different types, and that the same distinction was possible on the basis of the automatic classification. In addition, there were significant differences in acoustic parameters between the two groups of laughter. Although the distinction between voluntary and involuntary laughter emerges from everyday, spontaneous conversations in terms of both perception and acoustic features, there is often an overlap in the acoustic features of the two categories. The results enrich our previous knowledge of laughter and help to describe and explore the diversity of non-verbal vocalisations.
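The automatic classification step can be illustrated with a generic cross-validated classifier on a synthetic acoustic feature matrix; the feature set and classifier below are assumptions for the example, not the paper's system.

```python
# Toy sketch: cross-validated accuracy of a two-way laugh classifier on
# synthetic acoustic features, compared against chance.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# rows = laugh samples; columns = e.g. duration, mean F0, intensity, HNR (placeholders)
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)),        # "voluntary" samples
               rng.normal(0.5, 1.0, (60, 4))])       # "involuntary" samples
y = np.array(["voluntary"] * 60 + ["involuntary"] * 60)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"mean CV accuracy: {scores.mean():.2f} (chance = 0.50)")
```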


Subjects
Laughter , Humans , Laughter/physiology , Laughter/psychology , Communication , Empathy , Acoustics , Sound
17.
Phonetica ; 81(1): 81-117, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-37814341

ABSTRACT

Referents with a topical or focused status have been shown to be preferable antecedents in real-time resolution of pronouns. However, it remains unclear whether topicality and focus compete for prominence when co-present in the same narrative, and if so, how differential prominence affects the prosodic realization of a subsequent pronoun. Building upon the general understanding that stress on pronouns signals an unusual, less accessible interpretation, we take advantage of the conditional bi-clausal construction in conjunction with homophonic 3rd person pronouns in Chinese. We manipulated the information status of two referents that were introduced into a six-clause narrative in succession, specifically (i) Topic and (ii) Focus, and also (iii) the Reference of the Pronoun (either the first or second referent). Our acoustic analyses showed that pronouns were produced with higher F0s when the first referent was topicalized than when it was not topicalized under conditions where the second referent was focused. Pronouns referring back to the first referent were produced with longer durations when the referent was not topicalized than when it was topicalized. These results suggest that the accessibility status of referents varies dynamically in response to different prominence-lending cues, and these variations can be captured by the prosodic features of a following pronoun.
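The two acoustic measures analysed, F0 and duration of the pronoun, can be extracted with standard tools; the sketch below uses librosa's pYIN tracker on an assumed audio file with hand-labelled segment boundaries, purely as an illustration.

```python
# Sketch: mean F0 (via pYIN) and duration for a labelled pronoun segment.
import numpy as np
import librosa

def pronoun_measures(wav_path, start_s, end_s):
    y, sr = librosa.load(wav_path, sr=None)
    seg = y[int(start_s * sr):int(end_s * sr)]
    f0, voiced, _ = librosa.pyin(seg, fmin=75, fmax=400, sr=sr)
    mean_f0 = np.nanmean(f0[voiced]) if voiced.any() else np.nan
    return {"duration_s": end_s - start_s, "mean_f0_hz": mean_f0}

# hypothetical call; 'trial_12.wav' and the boundary times are placeholders
print(pronoun_measures("trial_12.wav", start_s=1.32, end_s=1.58))
```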


Subjects
Language , Narration
18.
Folia Phoniatr Logop ; 76(2): 109-126, 2024.
Article in English | MEDLINE | ID: mdl-37497950

ABSTRACT

INTRODUCTION: Research on voice onset time (VOT) production of stops in children with cochlear implants (CI) versus children with normal hearing (NH) has reported conflicting results. Effects of age and place of articulation on VOT have not been examined for children with CI. The purpose of this study was to examine VOT production by Greek-speaking children with CI in comparison to NH controls, with a focus on the effects of age, type of stimuli, and place of articulation. METHODS: Participants were 24 children with CI aged from 2;8 to 13;3 years and 24 age- and gender-matched children with NH. Words were elicited via a picture-naming task, and nonwords were elicited via a fast mapping procedure. RESULTS: For voiced stops, children with CI showed longer VOT than children with NH, whereas VOT for voiceless stops was similar to that of NH peers. Also, in both voiced and voiceless stops, VOT differed as a function of age and place of articulation across groups. Differences as a function of stimulus type were only noted for voiced stops across groups. CONCLUSIONS: For the voiced stop consonants, which demand more articulatory effort, VOT production in children with CI was longer than in children with NH. For the voiceless stop consonants, VOT production in children with CI is acquired at a young age.


Subjects
Cochlear Implants , Voice , Child , Humans , Greece , Phonetics , Hearing
19.
Folia Phoniatr Logop ; : 1-14, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39033740

ABSTRACT

INTRODUCTION: The predominant alterations in the voice of patients with multiple sclerosis (MS) are phonatory instability, vocal asthenia and roughness, shortness of breath, hypophonia, and hypernasality. However, studies on alterations of acoustic parameters are few and have yielded disparate results. The objective of this study was to investigate voice disturbances in patients with MS, both with objective measures (analysis of biomechanical correlates) and subjective measures (scales and questionnaires). METHODS: This is an experimental study with a total of 20 participants with MS. Voice samples were collected, and biomechanical correlates were analyzed through the Clinical Voice Systems program, Online Lab App. The VHI-30 (Voice Handicap Index) questionnaire, the GRBAS (grade, roughness, breathiness, asthenia, strain) scale, and the Hospital Anxiety and Depression Scale were used as subjective measures. RESULTS: Ninety-five percent of participants reported and described dysphonic difficulties. Self-perception of vocal disability correlated with auditory-perceptual voice analysis in the sample of women. CONCLUSION: The biomechanical parameters showed alterations in the strength of the glottic closure, the efficiency index, and the structural imbalance index.

20.
Behav Res Methods ; 56(7): 6915-6950, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38829553

ABSTRACT

This tutorial is designed for speech scientists familiar with the R programming language who wish to construct experiment interfaces in R. We begin by discussing some of the benefits of building experiment interfaces in R, including R's existing tools for speech data analysis, platform independence, suitability for web-based testing, and the fact that R is open source. We explain basic concepts of reactive programming in R, and we apply these principles by detailing the development of two sample experiments. The first of these experiments comprises a speech production task in which participants are asked to read words with different emotions. The second sample experiment involves a speech perception task, in which participants listen to recorded speech and identify the emotion the talker expressed with forced-choice questions and confidence ratings. Throughout this tutorial, we introduce the new R package speechcollectr, which provides functions uniquely suited to web-based speech data collection. The package streamlines the code required for speech experiments by providing functions for common tasks like documenting participant consent, collecting participant demographic information, recording audio, checking the adequacy of a participant's microphone or headphones, and presenting audio stimuli. Finally, we describe some of the difficulties of remote speech data collection, along with the solutions we have incorporated into speechcollectr to meet these challenges.


Subjects
Internet , Speech Perception , Humans , Speech Perception/physiology , Speech/physiology , Data Collection/methods , Programming Languages , Software