Results 1 - 20 of 22
1.
J Speech Lang Hear Res ; : 1-27, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38457261

ABSTRACT

PURPOSE: One of the strategies that can be used to support speech communication in deaf children is cued speech, a visual code in which manual gestures are used as additional phonological information to supplement the acoustic and labial speech information. Cued speech has been shown to improve speech perception and phonological skills. This exploratory study aims to assess whether and how cued speech reading proficiency may also have a beneficial effect on the acoustic and articulatory correlates of consonant production in children. METHOD: Eight children with cochlear implants (from 5 to 11 years of age) and with different receptive proficiency in Canadian French Cued Speech (three children with low receptive proficiency vs. five children with high receptive proficiency) are compared to 10 children with typical hearing (from 4 to 11 years of age) on their production of stop and fricative consonants. Articulation was assessed with ultrasound measurements. RESULTS: The preliminary results reveal that cued speech proficiency seems to sustain the development of speech production in children with cochlear implants and to improve their articulatory gestures, particularly for the place contrast in stops as well as fricatives. CONCLUSION: This work highlights the importance of studying objective data and comparing acoustic and articulatory measurements to better characterize speech production in children.

2.
Front Hum Neurosci ; 17: 1152516, 2023.
Article in English | MEDLINE | ID: mdl-37250702

ABSTRACT

Introduction: Early exposure to a rich linguistic environment is essential as soon as the diagnosis of deafness is made. Cochlear implantation (CI) allows children to have access to speech perception in their early years. However, it provides only partial acoustic information, which can lead to difficulties in perceiving some phonetic contrasts. This study investigates the contribution of two spoken speech and language rehabilitation approaches to speech perception in children with CI using a lexicality judgment task from the EULALIES battery. Auditory Verbal Therapy (AVT) is an early intervention program that relies on auditory learning to enhance hearing skills in deaf children with CI. French Cued Speech, also called Cued French (CF), is a multisensory communication tool that disambiguates lip reading by adding a manual gesture. Methods: In this study, 124 children aged from 60 to 140 months were included: 90 children with typical hearing skills (TH), 9 deaf children with CI who had participated in an AVT program (AVT), 6 deaf children with CI with high Cued French reading skills (CF+), and 19 deaf children with CI with low Cued French reading skills (CF-). Speech perception was assessed with sensitivity (d'), computed from hit and false alarm rates as defined in signal detection theory. Results: The results show that children with cochlear implants from the CF- and CF+ groups have significantly lower performance compared to children with typical hearing (TH) (p < 0.001 and p = 0.033, respectively). Additionally, children in the AVT group also tended to have lower scores compared to TH children (p = 0.07). However, exposure to AVT and CF seems to improve speech perception. The scores of the children in the AVT and CF+ groups are closer to typical scores than those of children in the CF- group, as evidenced by a distance measure.
Discussion: Overall, the findings of this study provide evidence for the effectiveness of these two speech and language rehabilitation approaches, and highlight the importance of using a specific approach in addition to a cochlear implant to improve speech perception in children with cochlear implants.
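The sensitivity index used in this study can be computed directly from hit and false alarm rates. A minimal Python sketch of the standard formula d' = z(H) − z(FA); the trial counts below are hypothetical, not the study's data:

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(hit rate) - z(false alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def corrected_rate(successes, trials):
    """Log-linear correction keeping rates away from 0 and 1."""
    return (successes + 0.5) / (trials + 1.0)

# Hypothetical counts: 40 hits / 50 signal trials, 10 false alarms / 50 noise trials
h = corrected_rate(40, 50)
fa = corrected_rate(10, 50)
print(round(dprime(h, fa), 2))
```

A d' of 0 corresponds to chance performance; larger values mean better discrimination of words from nonwords.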

3.
Neuropsychologia ; 176: 108392, 2022 11 05.
Article in English | MEDLINE | ID: mdl-36216084

ABSTRACT

A computational model of speech perception, COSMO (Laurent et al., 2017), predicts that speech sounds should evoke both auditory representations in temporal areas and motor representations mainly in inferior frontal areas. Importantly, the model also predicts that auditory representations should be narrower, i.e. more focused on typical stimuli, than motor representations which would be more tolerant of atypical stimuli. Based on these assumptions, in a repetition-suppression study with functional magnetic resonance imaging data, we show that a sequence of 4 identical vowel sounds produces lower cortical activity (i.e. larger suppression effects) than if the last sound in the sequence is slightly varied. Crucially, temporal regions display an increase in cortical activity even for small acoustic variations, indicating a release of the suppression effect even for stimuli acoustically close to the first stimulus. In contrast, inferior frontal, premotor, insular and cerebellar regions show a release of suppression for larger acoustic variations. This "auditory-narrow motor-wide" pattern for vowel stimuli adds to a number of similar findings on consonant stimuli, confirming that the selectivity of speech sound representations in temporal auditory areas is narrower than in frontal motor areas in the human cortex.


Subjects
Auditory Cortex, Motor Cortex, Speech Perception, Humans, Motor Cortex/physiology, Acoustic Stimulation/methods, Brain Mapping/methods, Auditory Cortex/physiology, Speech Perception/physiology, Magnetic Resonance Imaging, Auditory Perception/physiology
4.
Dev Sci ; 25(1): e13154, 2022 01.
Article in English | MEDLINE | ID: mdl-34251076

ABSTRACT

Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, compared with prosodic cues, to signal a referent as being contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing in a semi-spontaneous but controlled production task, designed to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type, alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to longer syllable durations in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.


Subjects
Gestures, Speech Perception, Adult, Child, Preschool Child, Cues (Psychology), Humans, Language, Language Development, Speech
5.
Geroscience ; 43(4): 1725-1765, 2021 08.
Article in English | MEDLINE | ID: mdl-33970414

ABSTRACT

In the absence of any neuropsychiatric condition, older adults may show declining performance in several cognitive processes, among them retrieving and producing words, reflected in slower responses and even reduced accuracy compared to younger adults. To overcome this difficulty, healthy older adults implement compensatory strategies, which are the focus of this paper. We provide a review of mainstream findings on deficient mechanisms and possible neurocognitive strategies used by older adults to overcome the deleterious effects of age on lexical production. Moreover, we present findings on genetic and lifestyle factors that might be either protective or risk factors for cognitive impairment in advanced age. We propose that "aging-modulating factors" (AMF) can be modified, offering prevention opportunities against aging effects. Based on our review and this proposition, we introduce an integrative neurocognitive model of mechanisms and compensatory strategies for lexical production in older adults (entitled Lexical Access and Retrieval in Aging, LARA). The main hypothesis defended in LARA is that cognitive aging evolves heterogeneously and involves complementary domain-general and domain-specific mechanisms, with substantial inter-individual variability, reflected at behavioral, cognitive, and brain levels. Furthermore, we argue that the ability to compensate for the effect of cognitive aging depends on the amount of reserve specific to each individual, which is, in turn, modulated by the AMF. Our conclusion is that a variety of mechanisms and compensatory strategies coexist in the same individual to oppose the effect of age. The role of reserve is pivotal for successful coping with age-related changes, and future research should continue to explore the modulating role of AMF.


Subjects
Cognitive Reserve, Age Factors, Brain
6.
J Acoust Soc Am ; 149(1): 191, 2021 01.
Article in English | MEDLINE | ID: mdl-33514144

ABSTRACT

Acoustic characteristics, lingual and labial articulatory dynamics, and ventilatory behaviors were studied in a beatboxer producing twelve drum sounds belonging to five main categories of his repertoire (kick, snare, hi-hat, rimshot, cymbal). Various types of experimental data were collected synchronously (respiratory inductance plethysmography, electroglottography, electromagnetic articulography, and acoustic recording). Automatic unsupervised classification was successfully applied to the acoustic data with the t-SNE spectral clustering technique. A cluster purity value of 94% was achieved, showing that each sound has a specific acoustic signature. The acoustic intensity of sounds produced with the humming technique was found to be significantly lower than that of their non-humming counterparts. For these sounds, a dissociation between articulation and breathing was observed. Overall, a wide range of articulatory gestures was observed, some of which were non-linguistic. The tongue was systematically involved in the articulation of the explored beatboxing sounds, either as the main articulator or as accompanying the lip dynamics. Two pulmonic and three non-pulmonic airstream mechanisms were identified. Ejectives were found in the production of all the sounds with bilabial occlusion or alveolar occlusion with egressive airstream. A phonetic annotation using the International Phonetic Alphabet (IPA) was performed, highlighting the complexity of such sound production and the limits of speech-based annotation.
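Cluster purity, the figure of merit reported above, credits each cluster with its majority true class. A minimal Python sketch of the metric; the toy labels are invented for illustration, not the beatboxing recordings:

```python
from collections import Counter

def cluster_purity(true_labels, cluster_labels):
    """Purity = (1/N) * sum over clusters of the majority class count."""
    clusters = {}
    for t, c in zip(true_labels, cluster_labels):
        clusters.setdefault(c, []).append(t)
    majority = sum(Counter(members).most_common(1)[0][1]
                   for members in clusters.values())
    return majority / len(true_labels)

# Hypothetical toy example: 3 sound classes forced into 2 clusters
truth    = ["kick", "kick", "snare", "snare", "hi-hat", "hi-hat"]
clusters = [0, 0, 0, 1, 1, 1]
print(cluster_purity(truth, clusters))
```

A purity of 1.0 means every cluster is class-homogeneous; a 94% purity over twelve sound types indicates nearly class-pure clusters.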


Subjects
Phonetics, Speech, Acoustics, Electromagnetic Phenomena, Humans, Music, Tongue/diagnostic imaging
7.
Int J Psychophysiol ; 159: 23-36, 2021 01.
Article in English | MEDLINE | ID: mdl-33159987

ABSTRACT

Previous research showed that mental rumination, considered a form of repetitive and negative inner speech, is associated with increased facial muscular activity. However, the relation between these muscular activations and the underlying mental processes is still unclear. In this study, we tried to separate the facial electromyographic correlates of induced rumination related to either i) mechanisms of (inner) speech production or ii) rumination as a state of pondering on negative affects. To this end, we compared two groups of participants assigned to two types of rumination induction (for a total of 85 female undergraduate students without excessive depressive symptoms). The first type of induction was designed to specifically induce rumination in a verbal modality whereas the second one was designed to induce rumination in a visual modality. Following the motor simulation view of inner speech production, we hypothesised that the verbal rumination induction should result in a greater increase in speech-muscle activity than the non-verbal rumination induction. We also hypothesised that relaxation focused on the orofacial area should be more efficient in reducing rumination (when experienced in a verbal modality) than relaxation focused on a non-orofacial area. Our results do not corroborate these hypotheses, as both rumination inductions resulted in a similar increase in peripheral muscular activity in comparison to baseline levels. Moreover, the two relaxation types were similarly efficient in reducing rumination, regardless of the induction type. We discuss these results in relation to the inner speech literature and suggest that because rumination is a habitual and automatic form of emotion regulation, it might be a particularly (strongly) internalised and condensed form of inner speech. The pre-registered protocol, preprint, data, and reproducible code and figures are available at https://osf.io/c9pag/.


Subjects
Cognition, Speech, Face, Female, Humans, Students
8.
PLoS One ; 15(5): e0233282, 2020.
Article in English | MEDLINE | ID: mdl-32459800

ABSTRACT

Despite a long history of scrutiny in experimental psychology, it remains controversial whether wilful inner speech (covert speech) production is accompanied by specific activity in speech muscles. We present the results of a preregistered experiment looking at the electromyographic correlates of both overt speech and inner speech production of two phonetic classes of nonwords. An automatic classification approach was undertaken to discriminate between two articulatory features contained in nonwords uttered in both overt and covert speech. Although this approach led to reasonable accuracy rates during overt speech production, it failed to discriminate inner speech phonetic content based on surface electromyography signals. However, exploratory analyses conducted at the individual level revealed that, in two participants, it seemed possible to distinguish between covertly produced rounded and spread nonwords. We discuss these results in relation to the existing literature and suggest alternative ways of testing the engagement of the speech motor system during wilful inner speech production.
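To make the classification idea concrete, here is a minimal Python sketch of discriminating two articulatory classes from muscle-activity features with a nearest-centroid rule. The channels, feature values, and class separation are all simulated for illustration; the paper's actual pipeline and features are not specified here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-trial RMS-amplitude features from two speech-muscle channels
# for two articulatory classes ("rounded" vs "spread"); values are hypothetical.
rounded = rng.normal([1.0, 0.4], 0.1, size=(20, 2))  # channel 1 dominant
spread = rng.normal([0.4, 1.0], 0.1, size=(20, 2))   # channel 2 dominant

X = np.vstack([rounded, spread])
y = np.array([0] * 20 + [1] * 20)

# Nearest-centroid rule: assign each trial to the closest class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = distances.argmin(axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

This reports training accuracy on well-separated simulated classes; in practice cross-validation is needed, and the near-silent EMG of covert speech makes the real problem far harder than this toy setup suggests.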


Subjects
Electromyography, Skeletal Muscle/physiology, Phonetics, Thinking/physiology, Brain/physiology, Female, Humans, Automated Pattern Recognition, Speech/physiology, Young Adult
9.
Front Psychol ; 10: 2019, 2019.
Article in English | MEDLINE | ID: mdl-31620039

ABSTRACT

Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed inner speech forms have been described that are thought to be deprived of acoustic, phonological and even syntactic qualities. At the other extreme, expanded forms display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as that of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or it can arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality, to examine the validity of the neuroanatomical correlates posited in ConDialInt. Condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation. Switching from first-person to third-person perspective resulted in activations in the precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.

10.
J Speech Lang Hear Res ; 62(5): 1225-1242, 2019 05 21.
Article in English | MEDLINE | ID: mdl-31082309

ABSTRACT

Purpose: Bayesian multilevel models are increasingly used to overcome the limitations of frequentist approaches in the analysis of complex structured data. This tutorial introduces Bayesian multilevel modeling for the specific analysis of speech data, using the brms package developed in R. Method: In this tutorial, we provide a practical introduction to Bayesian multilevel modeling by reanalyzing a phonetic data set containing formant (F1 and F2) values for 5 vowels of standard Indonesian (ISO 639-3:ind), as spoken by 8 speakers (4 females and 4 males), with several repetitions of each vowel. Results: We first give an introductory overview of the Bayesian framework and multilevel modeling. We then show how Bayesian multilevel models can be fitted using the probabilistic programming language Stan and the R package brms, which provides an intuitive formula syntax. Conclusions: Through this tutorial, we demonstrate some of the advantages of the Bayesian framework for statistical modeling and provide a detailed case study, with complete source code for full reproducibility of the analyses (https://osf.io/dpzcb/). Supplemental Material: https://doi.org/10.23641/asha.7973822.
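The tutorial itself works in R with brms and Stan, but the central idea of a multilevel model, partial pooling of speaker-specific estimates toward the group mean, can be sketched in a few lines of numpy-only Python using a method-of-moments shrinkage estimator. The speaker means, noise levels, and sample sizes below are simulated, not the tutorial's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical F1 values (Hz) for one vowel: 8 speakers x 10 repetitions,
# mimicking the speaker-grouped structure of the tutorial's data set.
true_speaker_means = rng.normal(500, 40, size=8)
data = rng.normal(true_speaker_means[:, None], 30, size=(8, 10))

grand_mean = data.mean()
speaker_means = data.mean(axis=1)

# Partial pooling: shrink each speaker's raw mean toward the grand mean by a
# factor weighing between-speaker variance against within-speaker noise --
# the same behavior a Bayesian multilevel model produces via its group-level prior.
within_var = data.var(axis=1, ddof=1).mean() / data.shape[1]  # var of each speaker mean
between_var = max(speaker_means.var(ddof=1) - within_var, 0.0)
shrink = between_var / (between_var + within_var)
pooled_means = grand_mean + shrink * (speaker_means - grand_mean)

# Each shrunken estimate lies between its raw speaker mean and the grand mean.
print(shrink, speaker_means[0], pooled_means[0], grand_mean)
```

Noisy speakers are pulled more strongly toward the grand mean; with abundant, clean data the shrinkage factor approaches 1 and the pooled estimates approach the raw means, which is exactly the "compromise" behavior brms fits by full Bayesian inference.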


Subjects
Language, Phonation, Speech, Bayes Theorem, Female, Humans, Indonesia, Male, Multilevel Analysis, Phonetics, Sex Characteristics
11.
Dev Sci ; 22(6): e12830, 2019 11.
Article in English | MEDLINE | ID: mdl-30908771

ABSTRACT

The influence of motor knowledge on speech perception is well established, but the functional role of the motor system is still poorly understood. The present study explores the hypothesis that speech production abilities may help infants discover phonetic categories in the speech stream, in spite of coarticulation effects. To this aim, we examined the influence of babbling abilities on consonant categorization in 6- and 9-month-old infants. Using an intersensory matching procedure, we investigated the infants' capacity to associate auditory information about a consonant in various vowel contexts with visual information about the same consonant, and to map auditory and visual information onto a common phoneme representation. Moreover, a parental questionnaire evaluated the infants' consonantal repertoire. In a first experiment using /b/-/d/ consonants, we found that infants who displayed babbling abilities and produced the /b/ and/or the /d/ consonants in repetitive sequences were able to correctly perform intersensory matching, while non-babblers were not. In a second experiment using the /v/-/z/ pair, which is as visually contrasted as the /b/-/d/ pair but which is usually not produced at the tested ages, no significant matching was observed, for any group of infants, babbling or not. These results demonstrate, for the first time, that the emergence of babbling could play a role in the extraction of vowel-independent representations for consonant place of articulation. They have important implications for speech perception theories, as they highlight the role of sensorimotor interactions in the development of phoneme representations during the first year of life.


Subjects
Language Development, Phonetics, Speech Perception/physiology, Child Language, Sensory Feedback, Female, Humans, Infant, Language, Male
12.
Clin Linguist Phon ; 32(7): 595-621, 2018.
Article in English | MEDLINE | ID: mdl-29148845

ABSTRACT

The rehabilitation of speech disorders benefits from providing visual information which may improve speech motor plans in patients. We tested the proof of concept of a rehabilitation method (Sensori-Motor Fusion, SMF; Ultraspeech player) in one post-stroke patient presenting chronic non-fluent aphasia. SMF allows visualisation by the patient of target tongue and lips movements using high-speed ultrasound and video imaging. This can improve the patient's awareness of his/her own lingual and labial movements, which can, in turn, improve the representation of articulatory movements and increase the ability to coordinate and combine articulatory gestures. The auditory and oro-sensory feedback received by the patient as a result of his/her own pronunciation can be integrated with the target articulatory movements they watch. Thus, this method is founded on sensorimotor integration during speech. The SMF effect on this patient was assessed through qualitative comparison of language scores and quantitative analysis of acoustic parameters measured in a speech production task, before and after rehabilitation. We also investigated cerebral patterns of language reorganisation for rhyme detection and syllable repetition, to evaluate the influence of SMF on phonological-phonetic processes. Our results showed that SMF had a beneficial effect on this patient who qualitatively improved in naming, reading, word repetition and rhyme judgment tasks. Quantitative measurements of acoustic parameters indicate that the patient's production of vowels and syllables also improved. Compared with pre-SMF, the fMRI data in the post-SMF session revealed the activation of cerebral regions related to articulatory, auditory and somatosensory processes, which were expected to be recruited by SMF. We discuss neurocognitive and linguistic mechanisms which may explain speech improvement after SMF, as well as the advantages of using this speech rehabilitation method.


Subjects
Broca Aphasia/therapy, Language, Neuronal Plasticity, Speech Therapy/methods, Speech/physiology, Sensory Feedback/physiology, Female, Humans, Lip, Magnetic Resonance Imaging, Tongue
13.
Biol Psychol ; 127: 53-63, 2017 07.
Article in English | MEDLINE | ID: mdl-28465047

ABSTRACT

Rumination is predominantly experienced in the form of repetitive verbal thoughts. Verbal rumination is a particular case of inner speech. According to the Motor Simulation view, inner speech is a kind of motor action, recruiting the speech motor system. In this framework, we predicted an increase in speech muscle activity during rumination as compared to rest. We also predicted increased forehead activity, associated with anxiety during rumination. We measured electromyographic activity over the orbicularis oris superior and inferior, frontalis and flexor carpi radialis muscles. Results showed increased lip and forehead activity after rumination induction compared to an initial relaxed state, together with increased self-reported levels of rumination. Moreover, our data suggest that orofacial relaxation is more effective in reducing rumination than non-orofacial relaxation. Altogether, these results support the hypothesis that verbal rumination involves the speech motor system, and provide a promising psychophysiological index to assess the presence of verbal rumination.


Subjects
Electromyography, Facial Muscles/physiology, Cognitive Rumination/physiology, Speech/physiology, Anxiety/physiopathology, Female, Forehead/physiology, Humans, Lip/physiology, Young Adult
14.
Clin Linguist Phon ; 31(7-9): 598-611, 2017.
Article in English | MEDLINE | ID: mdl-28362227

ABSTRACT

Studies of speech production in French-speaking cochlear-implanted (CI) children are very scarce. Yet, difficulties in speech production have been shown to impact the intelligibility of these children. The goal of this study is to understand the effect of long-term use of a cochlear implant on speech production, and more precisely on the coordination of laryngeal-oral gestures in stop production. The participants were all monolingual French children: 13 CI children aged 6;6 to 10;7 and 20 age-matched normally hearing (NH) children. We compared /p/, /t/, /k/, /b/, /d/ and /g/ in word-initial consonant-vowel sequences, produced in isolation in two different tasks, and we studied the effects of CI use, vowel context, task and age factors (i.e. chronological age, age at implantation and duration of implant use). Statistical analyses show a difference in voicing production between groups for voiceless consonants (shorter Voice Onset Times for CI children), with significance reached only for /k/, but no difference for voiced consonants. Our study indicates that in the long run, use of a CI seems to have limited effects on the acquisition of the oro-laryngeal coordination needed to produce voicing, except for specific difficulties located on velars. In a follow-up study, further acoustic analyses of vowel and fricative production by the same children reveal more difficulties, which suggests that cochlear implantation impacts frequency-based features (second formant of vowels and spectral moments of fricatives) more than durational cues (voicing).


Subjects
Acoustic Stimulation, Cochlear Implants, Speech Discrimination Tests, Voice, Child, Cochlear Implantation, Female, France, Humans, Language, Male, Phonetics
15.
Br J Psychol ; 108(1): 31-33, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28059459

ABSTRACT

This review of the literature on the emergence of language describes two opposing views of phonological development, the sound-based versus the whole-word-based accounts. An integrative model is proposed which claims that learning sublexical speech sounds and producing wordlike vocalizations are in fact parallel processes that feed each other during language development. We argue that this model might find unexpected support from the face processing literature.


Subjects
Language Development, Learning, Phonetics, Speech Perception, Humans
16.
Infancy ; 20(6): 661-674, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26561475

ABSTRACT

One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.

17.
Schizophr Bull ; 41(1): 259-67, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24553150

ABSTRACT

BACKGROUND: Task-based functional neuroimaging studies of schizophrenia have not yet replicated the increased coordinated hyperactivity in speech-related brain regions that is reported in symptom-capture and resting-state studies of hallucinations. This may be due to suboptimal selection of cognitive tasks. METHODS: In the current study, we used a task that allowed experimental manipulation of control over verbal material and compared brain activity between 23 schizophrenia patients (10 hallucinators, 13 nonhallucinators), 22 psychiatric (bipolar) controls, and 27 healthy controls. Two conditions were presented, one involving inner verbal thought (in which control over verbal material was required) and another involving speech perception (SP; in which control over verbal material was not required). RESULTS: A functional connectivity analysis resulted in a left-dominant temporal-frontal network that included speech-related auditory and motor regions and showed hypercoupling in past-week hallucinating schizophrenia patients (relative to nonhallucinating patients) during SP only. CONCLUSIONS: These findings replicate our previous work showing generalized speech-related functional network hypercoupling in schizophrenia during inner verbal thought and SP, but extend it by suggesting that hypercoupling is related to past-week hallucination severity scores during SP only, when control over verbal material is not required. This result opens the possibility that practicing control over inner verbal thought processes may decrease the likelihood or severity of hallucinations.


Subjects
Frontal Lobe/physiopathology, Functional Laterality/physiology, Hallucinations/physiopathology, Neural Pathways/physiopathology, Schizophrenia/physiopathology, Schizophrenic Psychology, Speech Perception/physiology, Temporal Lobe/physiopathology, Adult, Bipolar Disorder/physiopathology, Brain/physiopathology, Brain Mapping, Case-Control Studies, Female, Functional Neuroimaging, Hallucinations/etiology, Hallucinations/psychology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Schizophrenia/complications, Young Adult
18.
Infant Behav Dev ; 37(4): 644-51, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25238663

ABSTRACT

The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio-visual fluent speech in 12-month-old infants. German-learning infants' audio-visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit a matching performance for the native, nor for the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.


Subjects
Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Female, Humans, Infant, Language, Male, Photic Stimulation, Visual Perception/physiology
19.
PLoS One ; 9(2): e89275, 2014.
Article in English | MEDLINE | ID: mdl-24586651

ABSTRACT

The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.


Subjects
Association Learning, Auditory Perception/physiology, Discrimination Learning, Language, Speech/physiology, Visual Perception/physiology, Acoustic Stimulation, Child Development, Female, France, Germany, Humans, Infant, Language Development, Male
20.
J Speech Lang Hear Res ; 56(6): S1882-93, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24687444

ABSTRACT

PURPOSE: Auditory verbal hallucinations (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a predictive control model, in which individuals implement verbal self-monitoring. The authors examined lip muscle activity during AVHs in patients with schizophrenia to check whether inner speech occurred. METHOD: Lip muscle activity was recorded during covert AVHs (without articulation) and rest. Surface electromyography (EMG) was used on 11 patients with schizophrenia. RESULTS: Results showed an increase in EMG activity in the orbicularis oris inferior muscle during covert AVHs relative to rest. This increase was not due to general muscular tension because there was no increase of muscular activity in the forearm muscle. CONCLUSION: This evidence that AVHs might be self-generated inner speech is discussed in the framework of a predictive control model. Further work is needed to better describe how inner speech is controlled and monitored, and to characterize the nature of inner-speech-monitoring dysfunction. This will lead to a better understanding of how AVHs occur.


Subjects
Facial Muscles/physiology, Hallucinations/physiopathology, Lip/physiology, Neurological Models, Schizophrenia/physiopathology, Speech/physiology, Adult, Electromyography, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Speech Perception/physiology, Young Adult