Results 1-20 of 21
1.
Cognition ; 248: 105793, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38636164

ABSTRACT

Speech comprehension is enhanced when preceded (or accompanied) by a congruent rhythmic prime reflecting the metrical sentence structure. Although these phenomena have been described for auditory and motor primes separately, their respective and synergistic contribution has not been addressed. In this experiment, participants performed a speech comprehension task on degraded speech signals that were preceded by a rhythmic prime that could be auditory, motor or audiomotor. Both auditory and audiomotor rhythmic primes facilitated speech comprehension speed. While the presence of a purely motor prime (unpaced tapping) did not globally benefit speech comprehension, comprehension accuracy scaled with the regularity of motor tapping. In order to investigate inter-individual variability, participants also performed a Spontaneous Speech Synchronization test. The strength of the estimated perception-production coupling correlated positively with overall speech comprehension scores. These findings are discussed in the framework of the dynamic attending and active sensing theories.


Subjects
Comprehension, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Young Adult, Comprehension/physiology, Adult, Acoustic Stimulation, Psychomotor Performance/physiology, Auditory Perception/physiology, Speech/physiology
2.
J Fluency Disord ; 76: 105975, 2023 06.
Article in English | MEDLINE | ID: mdl-37247502

ABSTRACT

PURPOSE: Speaking with an external rhythm has a tremendous fluency-enhancing effect in people who stutter. The aim of the present study is to examine whether syllabic timing related to articulatory timing (c-center) would differ between children and adolescents who stutter and a matched control group in an unpaced vs. a paced condition. METHODS: We recorded 48 German-speaking children and adolescents who stutter and a matched control group reading monosyllabic words with and without a metronome (unpaced and paced condition). Analyses were conducted on four minimal pairs that differed in onset complexity (simple vs. complex). The following acoustic correlates of a c-center effect were analyzed: vowel and consonant compression, acoustic intervals (time from c-center, left-edge, and right-edge to an anchor-point), and relative standard deviations of these intervals. RESULTS: Both groups show acoustic correlates of a c-center effect (consonant compression, vowel compression, c-center organization, and more stable c-center intervals), independently of condition. However, the group who stutters had a more pronounced consonant compression effect. The metronome did not significantly affect syllabic organization but interval stability improved in the paced condition in both groups. CONCLUSION: Children and adolescents who stutter and matched controls have a similar syllable organization, related to articulatory timing, regardless of paced or unpaced speech. However, consonant onset timing differs between the group who stutters and the control group; this is a promising basis for conducting an articulatory study in which articulatory (gestural) timing can be examined in more detail.


Subjects
Speech, Stuttering, Humans, Child, Adolescent, Speech Production Measurement, Language, Acoustics
3.
Front Hum Neurosci ; 16: 885074, 2022.
Article in English | MEDLINE | ID: mdl-36188179

ABSTRACT

Auditory feedback perturbation studies have indicated a link between feedback and feedforward mechanisms in speech production when participants compensate for applied shifts. In spectral perturbation studies, speakers with a higher perceptual auditory acuity typically compensate more than individuals with lower acuity. However, the reaction to feedback perturbation is unlikely to be merely a matter of perceptual acuity but also affected by the prediction and production of precise motor action. This interplay between prediction, perception, and motor execution seems to be crucial for the timing of speech and non-speech motor actions. In this study, to examine the relationship between the responses to temporally perturbed auditory feedback and rhythmic abilities, we tested 45 adult speakers on the one hand with a temporal auditory feedback perturbation paradigm, and on the other hand with rhythm perception and production tasks. The perturbation tasks temporally stretched and compressed segments (onset + vowel or vowel + coda) in fluent speech in real-time. This technique sheds light on the temporal representation and the production flexibility of timing mechanisms in fluent speech with respect to the structure of the syllable. The perception tasks contained staircase paradigms capturing duration discrimination abilities and beat-alignment judgments. The rhythm production tasks consisted of finger tapping tasks taken from the BAASTA tapping battery and additional speech tapping tasks. We found that both auditory acuity and motor stability in finger tapping affected responses to temporal auditory feedback perturbation. In general, speakers with higher auditory acuity and higher motor variability compensated more. However, we observed a different weighting of auditory acuity and motor stability dependent on the prosodic structure of the perturbed sequence and the nature of the response as purely online or adaptive. These findings shed light on the interplay of phonological structure with feedback and feedforward integration for timing mechanisms in speech.

4.
Brain Sci ; 11(12)2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34942897

ABSTRACT

Speech fluency is a major challenge for young persons who stutter. Reading aloud, in particular, puts high demands on fluency, not only regarding online text decoding and articulation, but also in terms of prosodic performance. A written text has to be segmented into a number of prosodic phrases with appropriate breaks. The present study examines to what extent reading fluency (decoding ability, articulation rate, and prosodic phrasing) may be altered in children (9-12 years) and adolescents (13-17 years) who stutter compared to matched control participants. Read speech of 52 children and adolescents who do and do not stutter was analyzed. Children and adolescents who stutter did not differ from their matched control groups regarding reading accuracy and articulation rate. However, children who stutter produced shorter pauses than their matched peers. Results on prosodic phrasing showed that children who stutter produced more major phrases than the control group and more intermediate phrases than adolescents who stutter. Participants who stutter also displayed a higher number of breath pauses. Generally, the number of disfluencies during reading was related to slower articulation rates and more prosodic boundaries. Furthermore, we found age-related changes in general measures of reading fluency (decoding ability and articulation rate), as well as the overall strength of prosodic boundaries and number of breath pauses. This study provides evidence for developmental stages in prosodic phrasing as well as for alterations in reading fluency in children who stutter.

5.
Brain Sci ; 11(11)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34827523

ABSTRACT

In the present study, we investigated if individuals with neurogenic speech sound impairments of three types, Parkinson's dysarthria, apraxia of speech, and aphasic phonological impairment, accommodate their speech to the natural speech rhythm of an auditory model, and if so, whether the effect is more significant after hearing metrically regular sentences as compared to those with an irregular pattern. This question builds on theories of rhythmic entrainment, assuming that sensorimotor predictions of upcoming events allow humans to synchronize their actions with an external rhythm. To investigate entrainment effects, we conducted a sentence completion task relating participants' response latencies to the spoken rhythm of the prime heard immediately before. A further research question was if the perceived rhythm interacts with the rhythm of the participants' own productions, i.e., the trochaic or iambic stress pattern of disyllabic target words. For a control group of healthy speakers, our study revealed evidence for entrainment when trochaic target words were preceded by regularly stressed prime sentences. Persons with Parkinson's dysarthria showed a pattern similar to that of the healthy individuals. For the patient groups with apraxia of speech and with phonological impairment, considerably longer response latencies with differing patterns were observed. Trochaic target words were initiated with significantly shorter latencies, whereas the metrical regularity of prime sentences had no consistent impact on response latencies and did not interact with the stress pattern of the target words to be produced. The absence of an entrainment in these patients may be explained by the more severe difficulties in initiating speech at all. We discuss the results in terms of clinical implications for diagnostics and therapy in neurogenic speech disorders.

6.
Atten Percept Psychophys ; 83(4): 1861-1877, 2021 May.
Article in English | MEDLINE | ID: mdl-33709327

ABSTRACT

Auditory rhythms create powerful expectations for the listener. Rhythmic cues with the same temporal structure as subsequent sentences enhance processing compared with irregular or mismatched cues. In the present study, we focus on syllable detection following matched rhythmic cues. Cues were aligned with subsequent sentences at the syllable (low-level cue) or the accented syllable (high-level cue) level. A different group of participants performed the task without cues to provide a baseline. We hypothesized that unaccented syllable detection would be faster after low-level cues, and accented syllable detection would be faster after high-level cues. There was no difference in syllable detection depending on whether the sentence was preceded by a high-level or low-level cue. However, the results revealed a priming effect of the cue that participants heard first. Participants who heard a high-level cue first were faster to detect accented than unaccented syllables, and faster to detect accented syllables than participants who heard a low-level cue first. The low-level-first participants showed no difference between detection of accented and unaccented syllables. The baseline experiment confirmed that hearing a low-level cue first removed the benefit of the high-level grouping structure for accented syllables. These results suggest that the initially perceived rhythmic structure influenced subsequent cue perception and its influence on syllable detection. Results are discussed in terms of dynamic attending, temporal context effects, and implications for context effects in neural entrainment.


Subjects
Cues (Psychology), Speech Perception, Hearing, Humans, Language, Phonetics
7.
Infancy ; 26(2): 248-270, 2021 03.
Article in English | MEDLINE | ID: mdl-33523572

ABSTRACT

When adults speak or sing with infants, they sound different than in adult communication. Infant-directed (ID) communication helps caregivers to regulate infants' emotions and helps infants to process speech information, at least from ID-speech. However, it is largely unclear whether infants might also process speech information presented in ID-singing. Therefore, we examined whether infants discriminate vowels in ID-singing, as well as potential differences from ID-speech. Using an alternating trial preference procedure, infants aged 4-6 and 8-10 months were tested on their discrimination of an unfamiliar non-native vowel contrast presented in ID-like speech and singing. Relying on models of early speech sound perception, we expected that infants in their first half year of life would discriminate the vowels, in contrast to older infants, whose non-native sound perception should deteriorate, at least in ID-like speech. Our results showed that infants of both age groups were able to discriminate the vowels in ID-like singing, while only the younger group discriminated the vowels in ID-like speech. These results show that infants process speech sound information in song from early on. They also hint at diverging perceptual or attentional mechanisms guiding infants' sound processing in ID-speech versus ID-singing toward the end of the first year of life.


Subjects
Phonetics, Singing, Speech Perception, Female, Humans, Infant, Male
8.
Ear Hear ; 42(2): 364-372, 2021.
Article in English | MEDLINE | ID: mdl-32769439

ABSTRACT

OBJECTIVES: Children with hearing loss (HL), in spite of early cochlear implantation, often struggle considerably with language acquisition. Previous research has shown a benefit of rhythmic training on linguistic skills in children with HL, suggesting that improving rhythmic capacities could help attenuate language difficulties. However, little is known about general rhythmic skills of children with HL and how they relate to speech perception. The aim of this study is twofold: (1) to assess the abilities of children with HL in different rhythmic sensorimotor synchronization tasks compared to a normal-hearing control group and (2) to investigate a possible relation between sensorimotor synchronization abilities and speech perception abilities in children with HL. DESIGN: A battery of sensorimotor synchronization tests with stimuli of varying acoustic and temporal complexity was used: a metronome, different musical excerpts, and complex rhythmic patterns. Synchronization abilities were assessed in 32 children (aged from 5 to 10 years) with a severe to profound HL mainly fitted with one or two cochlear implants (n = 28) or with hearing aids (n = 4). Working memory and sentence repetition abilities were also assessed. Performance was compared to an age-matched control group of 24 children with normal hearing. The comparison took into account variability in working memory capacities. For children with HL only, we computed linear regressions on speech, sensorimotor synchronization, and working memory abilities, including device-related variables such as onset of device use, type of device, and duration of use. RESULTS: Compared to the normal-hearing group, children with HL performed poorly in all sensorimotor synchronization tasks, but the effect size was greater for complex as compared to simple stimuli. Group differences in working memory did not explain this result. Linear regression analysis revealed that working memory, synchronization to complex rhythms performances, age, and duration of device use predicted the number of correct syllables produced in a sentence repetition task. CONCLUSION: Despite early cochlear implantation or hearing aid use, hearing impairment affects the quality of temporal processing of acoustic stimuli in congenitally deaf children. This deficit seems to be more severe with stimuli of increasing rhythmic complexity, highlighting a difficulty in structuring sounds according to a temporal hierarchy.


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Hearing Loss, Speech Perception, Child, Child, Preschool, Humans
9.
J Acoust Soc Am ; 150(6): 4429, 2021 12.
Article in English | MEDLINE | ID: mdl-34972287

ABSTRACT

Nursery rhymes, lullabies, or traditional stories are pieces of oral tradition that constitute an integral part of communication between caregivers and preverbal infants. Caregivers use a distinct acoustic style when singing or narrating to their infants. Unlike spontaneous infant-directed (ID) interactions, codified interactions benefit from highly stable acoustics due to their repetitive character. The aim of the study was to determine whether specific combinations of acoustic traits (i.e., vowel pitch, duration, spectral structure, and their variability) form characteristic "signatures" of different communicative dimensions during codified interactions, such as vocalization type, interactive stimulation, and infant-directedness. Bayesian analysis, applied to over 14 000 vowels from codified live interactions between mothers and their 6-month-old infants, showed that a few acoustic traits prominently characterize arousing vs calm interactions and sung vs spoken interactions. While pitch and duration and their variation played a prominent role in constituting these signatures, more linguistic aspects such as vowel clarity showed small or no effects. Infant-directedness was identifiable in a larger set of acoustic cues than the other dimensions. These findings provide insights into the functions of acoustic variation of ID communication and into the potential role of codified interactions for infants' learning about communicative intent and expressive forms typical of language and music.


Subjects
Mother-Child Relations, Speech Acoustics, Acoustics, Bayes Theorem, Communication, Humans, Infant
10.
Infant Behav Dev ; 61: 101475, 2020 11.
Article in English | MEDLINE | ID: mdl-32768730

ABSTRACT

Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).


Subjects
Discrimination Learning/physiology, Language Development, Phonetics, Speech Perception/physiology, Acoustic Stimulation/methods, Female, Humans, Infant, Male, Speech/physiology
11.
J Speech Lang Hear Res ; 62(8S): 3104-3118, 2019 08 29.
Article in English | MEDLINE | ID: mdl-31465708

ABSTRACT

PURPOSE: Earlier investigations based on word and sentence repetition tasks had revealed that the most prevalent metrical pattern in German (the trochee), unlike the iambic pattern, facilitates articulation in patients with apraxia of speech (AOS; e.g., Aichert, Späth, & Ziegler, 2016), confirming that segmental and prosodic aspects of speech production interact. In this study, we investigated whether articulation in apraxic speakers also benefits from auditory priming by speech with a regular rhythm. Furthermore, we asked whether the advantage of regular speech rhythm, if present, is confined to impairments at the motor planning stage of speech production (i.e., AOS) or whether it also applies to phonological encoding impairments. METHOD: Twelve patients with AOS, 12 aphasic patients with postlexical phonological impairment (PI), and 36 neurologically healthy speakers were examined. A sequential synchronization paradigm based on a sentence completion task was conducted in conditions where we independently varied the metrical regularity of the prime sentence (regular vs. irregular prime sentence) and the metrical regularity of the target word (trochaic vs. iambic). RESULTS: Our data confirmed the facilitating effect of regular (trochaic) word stress on speech accuracy in patients with AOS (target effect). This effect could, for the first time, also be demonstrated in individuals with PI. Moreover, the study also revealed an influence of the metrical regularity of speech input in both patient groups (prime effect). CONCLUSIONS: Patients with AOS and patients with PI exploited rhythmic cues in the speech of a model speaker for the initiation and the segmental realization of words. There seems to be a robust metrical influence on speech at both the phonological and the phonetic planning stages of speech production.


Subjects
Acoustic Stimulation/methods, Speech Disorders/therapy, Adult, Aged, Apraxias/therapy, Auditory Perception/physiology, Female, Humans, Male, Middle Aged, Speech/physiology, Speech Production Measurement
12.
Ann N Y Acad Sci ; 1453(1): 79-98, 2019 10.
Article in English | MEDLINE | ID: mdl-31237365

ABSTRACT

Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross-species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross-species perspective of speech rhythm, our review puts some pieces of the puzzle together.


Subjects
Language, Periodicity, Speech/physiology, Vocalization, Animal/physiology, Animals, Biological Evolution, Humans
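Among the quantitative techniques for computing rhythmic structure in acoustic sequences that the review discusses, one widely used metric is the normalized pairwise variability index (nPVI), which quantifies how much successive durations (e.g., vocalic or syllable durations) differ from one another. A minimal sketch for illustration only; the review itself covers a broader toolbox:

```python
def npvi(durations):
    """Normalized pairwise variability index of a duration sequence.

    Averages the absolute difference of each pair of successive
    durations, normalized by the pair's mean, and scales by 100.
    0 = perfectly isochronous; higher = more contrast between
    neighboring durations (e.g., long-short alternation).
    """
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    diffs = [
        abs(d1 - d2) / ((d1 + d2) / 2)
        for d1, d2 in zip(durations, durations[1:])
    ]
    return 100 * sum(diffs) / len(diffs)

# Isochronous intervals score 0; strict long-short alternation
# scores near the top of the typical range.
print(npvi([0.2, 0.2, 0.2, 0.2]))  # 0.0
print(npvi([0.3, 0.1, 0.3, 0.1]))
```

Cross-linguistic work has used this kind of index to contrast, for example, so-called stress-timed and syllable-timed languages, which makes it a natural candidate for the cross-species comparisons proposed here.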
13.
Hear Res ; 351: 11-18, 2017 08.
Article in English | MEDLINE | ID: mdl-28552493

ABSTRACT

This study investigates temporal adaptation in speech interaction in children with normal hearing and in children with cochlear implants (CIs) and/or hearing aids (HAs). We also address the question of whether musical rhythmic training can improve these skills in children with hearing loss (HL). Children named pictures presented on the screen in alternation with a virtual partner. Alternation rate (fast or slow) and the temporal predictability (match vs mismatch of stress occurrences) were manipulated. One group of children with normal hearing (NH) and one with HL were tested. The latter group was tested twice: once after 30 min of speech therapy and once after 30 min of musical rhythmic training. Both groups of children (NH and with HL) can adjust their speech production to the rate of alternation of the virtual partner. Moreover, while children with normal hearing benefit from the temporal regularity of stress occurrences, children with HL become sensitive to this manipulation only after rhythmic training. Rhythmic training may help children with HL to structure the temporal flow of their verbal interactions.


Subjects
Cues (Psychology), Hearing Loss/rehabilitation, Hearing, Music, Periodicity, Persons With Hearing Impairments/rehabilitation, Speech, Time Perception, Age Factors, Auditory Perception, Case-Control Studies, Child, Child Behavior, Child Language, Child, Preschool, Cochlear Implants, Female, Hearing Aids, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Hearing Loss/psychology, Humans, Male, Persons With Hearing Impairments/psychology, Speech Production Measurement, Time Factors, Verbal Behavior
14.
Front Psychol ; 8: 395, 2017.
Article in English | MEDLINE | ID: mdl-28443036

ABSTRACT

Moving to a speech rhythm can enhance verbal processing in the listener by increasing temporal expectancies (Falk and Dalla Bella, 2016). Here we tested whether this hypothesis holds for prosodically diverse languages such as German (a lexical stress-language) and French (a non-stress language). Moreover, we examined the relation between motor performance and the benefits for verbal processing as a function of language. Sixty-four participants, 32 German and 32 French native speakers detected subtle word changes in accented positions in metrically structured sentences to which they previously tapped with their index finger. Before each sentence, they were cued by a metronome to tap either congruently (i.e., to accented syllables) or incongruently (i.e., to non-accented parts) to the following speech stimulus. Both French and German speakers detected words better when cued to tap congruently compared to incongruent tapping. Detection performance was predicted by participants' motor performance in the non-verbal cueing phase. Moreover, tapping rate while participants tapped to speech predicted detection differently for the two language groups, in particular in the incongruent tapping condition. We discuss our findings in light of the rhythmic differences of both languages and with respect to recent theories of expectancy-driven and multisensory speech processing.

15.
J Cogn Neurosci ; 29(8): 1378-1389, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28430043

ABSTRACT

Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.


Subjects
Brain Mapping, Evoked Potentials, Auditory/physiology, Periodicity, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Cues (Psychology), Electroencephalography, Female, Humans, Male, Music, Statistics, Nonparametric, Young Adult
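The intertrial coherence (ITC) reported in this study measures how consistently the phase of a neural response at a given frequency aligns across trials: it is the length of the mean resultant vector of the per-trial phase angles, ranging from 0 (random phase) to 1 (perfect phase locking). A minimal numpy sketch of that quantity, illustrative rather than the study's actual EEG pipeline:

```python
import numpy as np

def intertrial_coherence(phases):
    """Length of the mean resultant vector of per-trial phase angles.

    phases: phase values in radians, one per trial (e.g., from an
    FFT or wavelet transform at the frequency of interest).
    Returns a value in [0, 1]: near 1 when the phase repeats across
    trials, near 0 when phases are uniformly spread on the circle.
    """
    phases = np.asarray(phases, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Identical phase on every trial -> coherence at the ceiling.
print(intertrial_coherence([0.5, 0.5, 0.5]))
# Phases spread evenly around the circle -> coherence near 0.
print(intertrial_coherence(np.linspace(0, 2 * np.pi, 8, endpoint=False)))
```

Because ITC discards amplitude and keeps only phase consistency, it is well suited to testing whether a regular musical cue leaves neural oscillations aligned when the speech stimulus begins.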
16.
Cognition ; 163: 80-86, 2017 06.
Article in English | MEDLINE | ID: mdl-28292666

ABSTRACT

Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-month-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language.


Subjects
Mother-Child Relations, Singing, Speech Acoustics, Speech, Adult, Cluster Analysis, Female, Humans, Infant, Male, Phonetics, Signal Processing, Computer-Assisted, Time Factors
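Nested clustering of point events (here, amplitude-envelope peaks) across timescales can be quantified with Allan factor analysis, an established point-process measure; it is named here as one standard option, and the study's exact pipeline may differ. Events are counted in adjacent windows of a given length, and the normalized variance of successive count differences is tracked as the window length grows. A minimal sketch:

```python
def allan_factor(event_times, window):
    """Allan factor of a point process at one counting-window length.

    Counts events in adjacent windows of length `window` and returns
    E[(N(i+1) - N(i))^2] / (2 * E[N(i)]).  Values near 1 indicate
    Poisson-like (unclustered) events; values that grow with `window`
    indicate hierarchical clustering across timescales.
    """
    n_windows = int(max(event_times) // window) + 1
    counts = [0] * n_windows
    for t in event_times:
        counts[int(t // window)] += 1
    diffs = [(b - a) ** 2 for a, b in zip(counts, counts[1:])]
    return sum(diffs) / len(diffs) / (2 * sum(counts) / len(counts))

# Perfectly regular events: adjacent window counts are identical,
# so the Allan factor is 0.
print(allan_factor([float(i) for i in range(100)], window=10.0))  # 0.0
```

Evaluating the same event train at several window lengths, from hundreds of milliseconds to tens of seconds, gives the kind of multi-timescale clustering profile the abstract describes.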
17.
Child Dev ; 88(4): 1207-1215, 2017 07.
Article in English | MEDLINE | ID: mdl-27796032

ABSTRACT

In their everyday communication, parents do not only speak but also sing with their infants. However, it remains unclear whether infants can discriminate speech from song or prefer one over the other. The present study examined the ability of 6- to 10-month-old infants (N = 66) from English-speaking households in London, Ontario, Canada to discriminate between auditory stimuli of native Russian-speaking and native English-speaking mothers speaking or singing to their infants. Infants listened significantly longer to the sung stimuli compared to the spoken stimuli. This is the first study to demonstrate that, even in the absence of other multimodal cues, infant listeners are able to discriminate between sung and spoken stimuli, and furthermore, prefer to listen to sung stimuli over spoken stimuli.


Subjects
Choice Behavior/physiology, Infant Behavior/physiology, Singing/physiology, Speech Perception/physiology, Female, Humans, Infant, Male
18.
J Commun Disord ; 62: 101-14, 2016.
Article in English | MEDLINE | ID: mdl-27323225

ABSTRACT

Singing has long been used as a technique to enhance and reeducate temporal aspects of articulation in speech disorders. In the present study, differences in the temporal structure of sung versus spoken speech were investigated in stuttering. In particular, we examined whether singing helps to reduce VOT variability of voiceless plosives, which would indicate enhanced temporal coordination of oral and laryngeal processes. Eight German adolescents who stutter and eight typically fluent peers repeatedly spoke and sang a simple German congratulation formula in which a disyllabic target word (e.g., /'ki:ta/) was repeated five times. In every trial, the first syllable of the word varied, starting equally often with one of the three voiceless German stops /p/, /t/, /k/. Acoustic analyses showed that mean VOT and stop gap duration decreased during singing compared to speaking, while mean vowel and utterance duration was prolonged in singing in both groups. Importantly, adolescents who stutter significantly reduced VOT variability (measured as the coefficient of variation) during sung productions compared to speaking in word-initial stressed positions, while the control group showed a slight increase in VOT variability. However, in unstressed syllables, VOT variability increased from speech to song in both adolescents who do and do not stutter. In addition, vowel and utterance durational variability decreased in both groups; yet adolescents who stutter were still more variable in utterance duration, independent of the form of vocalization. These findings shed new light on how singing alters temporal structure and, in particular, the coordination of laryngeal-oral timing in stuttering. Future perspectives for investigating how rhythmic aspects could aid the management of fluent speech in stuttering are discussed.
LEARNING OUTCOMES: Readers will be able to describe (1) current perspectives on singing and its effects on articulation and fluency in stuttering and (2) acoustic parameters, such as VOT variability, which indicate the efficiency of control and coordination of laryngeal-oral movements. They will understand and be able to discuss (3) how singing reduces temporal variability in the productions of adolescents who do and do not stutter and (4) how this is linked to altered articulatory patterns in singing as well as to its rhythmic structure.


Subjects
Phonetics, Singing, Speech Production Measurement, Stuttering/therapy, Adolescent, Female, Humans, Male, Speech, Speech Disorders, Time Factors
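The coefficient of variation used in this study as the index of VOT variability is simply the standard deviation of a speaker's VOT measurements divided by their mean, which makes variability comparable across conditions whose mean VOTs differ (as they do here between speaking and singing). A minimal sketch with hypothetical VOT values in seconds, not data from the study:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """Relative variability: sample standard deviation / mean."""
    return stdev(values) / mean(values)

# Hypothetical VOTs for a voiceless stop, spoken vs. sung (seconds).
spoken = [0.060, 0.075, 0.052, 0.081, 0.058]
sung = [0.055, 0.058, 0.053, 0.057, 0.054]

# A lower CV for singing would indicate more stable
# laryngeal-oral timing, as reported for the stressed syllables.
print(coefficient_of_variation(spoken) > coefficient_of_variation(sung))  # True
```

Normalizing by the mean matters because singing also shortens mean VOT: a raw standard deviation would conflate a shorter VOT with a more stable one.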
19.
Front Psychol ; 6: 847, 2015.
Article in English | MEDLINE | ID: mdl-26217245

ABSTRACT

There is growing evidence that motor and speech disorders co-occur during development. In the present study, we investigated whether stuttering, a developmental speech disorder, is associated with a predictive timing deficit in childhood and adolescence. By testing sensorimotor synchronization abilities, we aimed to assess whether predictive timing is dysfunctional in young participants who stutter (8-16 years). Twenty German children and adolescents who stutter and 43 non-stuttering participants matched for age and musical training were tested on their ability to synchronize their finger taps with periodic tone sequences and with a musical beat. Forty percent of children and 90% of adolescents who stutter displayed poor synchronization with both metronome and musical stimuli, falling below 2.5% of the estimated population based on the performance of the group without the disorder. Synchronization deficits were characterized by either lower synchronization accuracy or lower consistency or both. Lower accuracy resulted in an over-anticipation of the pacing event in participants who stutter. Moreover, individual profiles revealed that lower consistency was typical of participants that were severely stuttering. These findings support the idea that malfunctioning predictive timing during auditory-motor coupling plays a role in stuttering in children and adolescents.
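Synchronization accuracy and consistency of the kind measured here are conventionally computed with circular statistics: each tap's asynchrony relative to the pacing event is mapped to an angle on the metronome cycle, the circular mean gives accuracy (negative values = anticipation, as in the over-anticipation reported above), and the length of the mean resultant vector gives consistency. A minimal sketch of that convention, not the study's exact analysis:

```python
import math

def circular_sync_stats(asynchronies, period):
    """Circular mean (accuracy) and resultant length (consistency).

    asynchronies: tap time minus pacing-event time, per tap (seconds).
    period: inter-onset interval of the pacing sequence (seconds).
    Returns (mean_asynchrony_s, consistency), consistency in [0, 1].
    """
    angles = [2 * math.pi * a / period for a in asynchronies]
    x = sum(math.cos(t) for t in angles) / len(angles)
    y = sum(math.sin(t) for t in angles) / len(angles)
    consistency = math.hypot(x, y)
    mean_angle = math.atan2(y, x)
    return mean_angle * period / (2 * math.pi), consistency

# Taps consistently ~30 ms ahead of a 600 ms metronome:
# anticipatory mean asynchrony, near-ceiling consistency.
acc, cons = circular_sync_stats([-0.031, -0.029, -0.030, -0.030], 0.6)
print(round(acc, 3), round(cons, 2))
```

On these measures, a low consistency value flags the erratic tapping typical of the severely stuttering participants, while a strongly negative mean asynchrony flags the over-anticipation profile.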

20.
J Exp Psychol Hum Percept Perform ; 40(4): 1491-506, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24911013

ABSTRACT

Repetition can boost memory and perception. However, repeating the same stimulus several times in immediate succession also induces intriguing perceptual transformations and illusions. Here, we investigate the Speech to Song Transformation (S2ST), a massed repetition effect in the auditory modality, which crosses the boundaries between language and music. In the S2ST, a phrase repeated several times shifts to being heard as sung. To better understand this unique cross-domain transformation, we examined the perceptual determinants of the S2ST, in particular the role of acoustics. In two experiments, the effects of two pitch properties and three rhythmic properties on the probability and speed of occurrence of the transformation were examined. Results showed that both pitch and rhythmic properties are key features fostering the transformation. However, some properties proved to be more conducive to the S2ST than others. Stable tonal targets that allowed for the perception of a musical melody led more often and quickly to the S2ST than scalar intervals. Recurring durational contrasts arising from segmental grouping favoring a metrical interpretation of the stimulus also facilitated the S2ST. This was, however, not the case for a regular beat structure within and across repetitions. In addition, individual perceptual abilities predicted the likelihood of the S2ST. Overall, the study demonstrated that repetition enables listeners to reinterpret specific prosodic features of spoken utterances in terms of musical structures. The findings underline a tight link between language and music, but they also reveal important differences in the communicative functions of prosodic structure in the two domains.


Subjects
Auditory Perception/physiology, Music, Speech Perception/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult