1.
J Neurophysiol ; 128(5): 1106-1116, 2022 11 01.
Article En | MEDLINE | ID: mdl-36130171

Coordination between speech acoustics and manual gestures has been conceived as "not biologically mandated" (McClave E. J Psycholinguist Res 27(1): 69-89, 1998). However, recent work suggests a biomechanical entanglement between the upper limbs and the respiratory-vocal system (Pouw W, de Jonge-Hoekstra D, Harrison SJ, Paxton A, Dixon JA. Ann NY Acad Sci 1491(1): 89-105, 2021). Pouw et al. found that for movements with a high physical impulse, speech acoustics co-occur with the physical impulses of upper limb movements. They interpret this result in terms of biomechanical coupling between arm motion and speech via the breathing system. This coupling could support the synchrony observed between speech prosody and arm gestures during communication. The present study investigates whether the effect of physical impulse on speech acoustics extends to leg motion, assumed to be controlled independently from oral communication. The study involved 25 native speakers of German who recalled short stories while biking with their arms or their legs. These conditions were compared with a static condition in which participants could not move their arms. Our analyses are similar to those of Pouw et al. (2021). Results reveal that intensity peaks in the acoustic signal co-occur with the times of peak acceleration of the legs' biking movements. However, this was not observed when biking with the arms, which involved lower acceleration peaks. In contrast to intensity, F0 was not affected in either the arm or the leg condition.
These results suggest that 1) the biomechanical entanglement between the respiratory-vocal system and the lower limbs may also impact speech and 2) the physical impulse may have to reach a threshold to impact speech acoustics.

NEW & NOTEWORTHY The link between speech and limb motion is an interdisciplinary challenge and a core issue in motor control and language research. Our research aims to disentangle the potential biomechanical links between the lower limbs and the speech apparatus by investigating the effect of leg movements on speech acoustics.


Leg , Speech , Movement , Arm , Upper Extremity
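The core measurement in this study, whether acoustic intensity peaks co-occur in time with limb acceleration peaks, can be sketched as follows. This is a hypothetical illustration, not the authors' actual pipeline: it assumes an intensity envelope and an acceleration trace already sampled at a common rate `fs`, and simply counts the fraction of acceleration peaks that have an intensity peak within a small temporal window.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_cooccurrence(intensity, accel, fs, window_s=0.05):
    """Fraction of acceleration peaks that have an intensity peak
    within +/- window_s seconds (illustrative sketch, not the
    published analysis).

    intensity, accel : 1-D arrays sampled at rate fs (Hz).
    """
    # Detect prominent peaks in each signal; the prominence threshold
    # (one standard deviation) is an arbitrary illustrative choice.
    i_peaks, _ = find_peaks(intensity, prominence=np.std(intensity))
    a_peaks, _ = find_peaks(accel, prominence=np.std(accel))
    if len(a_peaks) == 0:
        return 0.0
    window = window_s * fs  # window expressed in samples
    hits = sum(np.any(np.abs(i_peaks - p) <= window) for p in a_peaks)
    return hits / len(a_peaks)
```

With perfectly aligned signals the fraction approaches 1; with signals offset by more than the window it approaches 0, which is the contrast the leg-versus-arm comparison exploits.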
2.
Dev Sci ; 25(1): e13154, 2022 01.
Article En | MEDLINE | ID: mdl-34251076

Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, as compared to prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers was audio-visually recorded during a semi-spontaneous but controlled production task designed to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying them (head gesture type and alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to greater syllable duration in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.


Gestures , Speech Perception , Adult , Child , Child, Preschool , Cues , Humans , Language , Language Development , Speech
3.
Ann N Y Acad Sci ; 1505(1): 142-155, 2021 12.
Article En | MEDLINE | ID: mdl-34418103

Breathing is variable but also highly individual. Since the 1980s, evidence of a "ventilatory personality" has been observed in various physiological studies. This original term refers to within-speaker consistency in breathing characteristics across days or even years. Speech breathing is a specific way of controlling ventilation while supporting speech planning and phonation constraints. It is highly variable between speakers but also within the same speaker, depending on utterance properties, bodily actions, and the context of an interaction. Can we nonetheless observe consistency over time in speakers' breathing profiles despite these variations? We addressed this question by analyzing the breathing profiles of 25 native speakers of German performing a narrative task on two days under different limb movement conditions. The individuality of breathing profiles across conditions and days was assessed by adopting methods from physiological studies that investigated a ventilatory personality. Our results suggest that speaker-specific breathing profiles in a narrative task are maintained over days and stay consistent despite light physical activity. These results are discussed with a focus on better understanding what speech breathing individuality is, how it can be assessed, and the research perspectives that this concept opens up.


Exercise Test/trends , Extremities/physiology , Movement/physiology , Psychomotor Performance/physiology , Respiratory Mechanics/physiology , Speech/physiology , Adult , Biomechanical Phenomena/physiology , Exercise Test/methods , Female , Humans , Male , Speech Acoustics , Young Adult
4.
Front Psychol ; 10: 2019, 2019.
Article En | MEDLINE | ID: mdl-31620039

Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed forms of inner speech have been described that are thought to be deprived of acoustic, phonological, and even syntactic qualities. Expanded forms, at the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as the voices of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation, or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on the neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality to examine the validity of the neuroanatomical correlates posited in ConDialInt. Condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation.
Switching from first-person to third-person perspective resulted in activations in precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.

5.
J Speech Lang Hear Res ; 61(4): 957-972, 2018 04 17.
Article En | MEDLINE | ID: mdl-29635399

Purpose: This work evaluates whether seeing the speaker's face could improve the speech intelligibility of adults with Down syndrome (DS). This is not straightforward because DS induces a number of anatomical and motor anomalies affecting the orofacial zone. Method: A speech-in-noise perception test was used to evaluate the intelligibility of 16 consonants (Cs) produced in a vowel-consonant-vowel context (Vo = /a/) by 4 speakers with DS and 4 control speakers. Forty-eight naïve participants were asked to identify the stimuli in 3 modalities: auditory (A), visual (V), and auditory-visual (AV). The probability of correct responses was analyzed, as well as AV gain, confusions, and transmitted information as a function of modality and phonetic features. Results: The probability of correct response follows the trend AV > A > V, with smaller values for the DS speakers than for the control speakers in A and AV but not in V. This trend depended on the consonant: the V information particularly improved the transmission of place of articulation and, to a lesser extent, of manner, whereas voicing remained specifically altered in DS. Conclusions: The results suggest that the V information is intact in the speech of people with DS and improves the perception of some phonetic features of Cs in a similar way as for control speakers. This result has implications for further studies, rehabilitation protocols, and specific training of caregivers. Supplemental Material: https://doi.org/10.23641/asha.6002267.


Down Syndrome/psychology , Facial Recognition , Phonetics , Speech Perception , Adult , Female , Humans , Male , Speech Intelligibility , Young Adult
6.
Schizophr Bull ; 41(1): 259-67, 2015 Jan.
Article En | MEDLINE | ID: mdl-24553150

BACKGROUND: Task-based functional neuroimaging studies of schizophrenia have not yet replicated the increased coordinated hyperactivity in speech-related brain regions that is reported in symptom-capture and resting-state studies of hallucinations. This may be due to suboptimal selection of cognitive tasks. METHODS: In the current study, we used a task that allowed experimental manipulation of control over verbal material and compared brain activity among 23 schizophrenia patients (10 hallucinators, 13 nonhallucinators), 22 psychiatric controls (bipolar disorder), and 27 healthy controls. Two conditions were presented, one involving inner verbal thought (in which control over verbal material was required) and another involving speech perception (SP; in which control over verbal material was not required). RESULTS: A functional connectivity analysis yielded a left-dominant temporal-frontal network that included speech-related auditory and motor regions and showed hypercoupling in past-week hallucinating schizophrenia patients (relative to nonhallucinating patients) during SP only. CONCLUSIONS: These findings replicate our previous work showing generalized speech-related functional network hypercoupling in schizophrenia during inner verbal thought and SP, but extend it by suggesting that hypercoupling is related to past-week hallucination severity scores during SP only, when control over verbal material is not required. This result opens the possibility that practicing control over inner verbal thought processes may decrease the likelihood or severity of hallucinations.


Frontal Lobe/physiopathology , Functional Laterality/physiology , Hallucinations/physiopathology , Neural Pathways/physiopathology , Schizophrenia/physiopathology , Schizophrenic Psychology , Speech Perception/physiology , Temporal Lobe/physiopathology , Adult , Bipolar Disorder/physiopathology , Brain/physiopathology , Brain Mapping , Case-Control Studies , Female , Functional Neuroimaging , Hallucinations/etiology , Hallucinations/psychology , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Schizophrenia/complications , Young Adult
7.
Hum Brain Mapp ; 34(10): 2574-91, 2013 Oct.
Article En | MEDLINE | ID: mdl-22488985

This functional magnetic resonance imaging (fMRI) study aimed at examining the cerebral regions involved in the auditory perception of prosodic focus using a natural focus detection task. Two conditions testing the processing of simple utterances in French were explored, narrow-focused versus broad-focused. Participants performed a correction detection task. The utterances in both conditions had exactly the same segmental, lexical, and syntactic contents, and only differed in their prosodic realization. The comparison between the two conditions therefore allowed us to examine processes strictly associated with prosodic focus processing. To assess the specific effect of pitch on hemispheric specialization, a parametric analysis was conducted using a parameter reflecting pitch variations specifically related to focus. The comparison between the two conditions reveals that brain regions recruited during the detection of contrastive prosodic focus can be described as a right-hemisphere dominant dual network consisting of (a) ventral regions which include the right posterosuperior temporal and bilateral middle temporal gyri and (b) dorsal regions including the bilateral inferior frontal, inferior parietal and left superior parietal gyri. Our results argue for a dual stream model of focus perception compatible with the asymmetric sampling in time hypothesis. They suggest that the detection of prosodic focus involves an interplay between the right and left hemispheres, in which the computation of slowly changing prosodic cues in the right hemisphere dynamically feeds an internal model concurrently used by the left hemisphere, which carries out computations over shorter temporal windows.


Brain Mapping/methods , Cerebral Cortex/physiology , Language , Magnetic Resonance Imaging , Speech Perception/physiology , Adult , Cues , Dominance, Cerebral/physiology , Female , Humans , Male , Models, Neurological , Models, Psychological , Nerve Net/physiology , Phonation , Pitch Discrimination/physiology , Pitch Perception/physiology , Young Adult
8.
J Speech Lang Hear Res ; 56(6): S1882-93, 2013 Dec.
Article En | MEDLINE | ID: mdl-24687444

PURPOSE: Auditory verbal hallucinations (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause patients' verbal thoughts to be perceived as external voices. The account is based on a predictive control model in which individuals implement verbal self-monitoring. The authors examined lip muscle activity during AVHs in patients with schizophrenia to check whether inner speech occurred. METHOD: Lip muscle activity was recorded during covert AVHs (without articulation) and at rest, using surface electromyography (EMG) in 11 patients with schizophrenia. RESULTS: Results showed an increase in EMG activity in the orbicularis oris inferior muscle during covert AVHs relative to rest. This increase was not due to general muscular tension, because there was no increase in muscular activity in the forearm muscle. CONCLUSION: This evidence that AVHs might be self-generated inner speech is discussed in the framework of a predictive control model. Further work is needed to better describe how inner speech is controlled and monitored and to characterize the nature of inner-speech monitoring dysfunction. This will lead to a better understanding of how AVHs occur.


Facial Muscles/physiology , Hallucinations/physiopathology , Lip/physiology , Models, Neurological , Schizophrenia/physiopathology , Speech/physiology , Adult , Electromyography , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Speech Perception/physiology , Young Adult
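The EMG comparison in this study reduces to contrasting signal amplitude between the hallucination and rest conditions. A minimal sketch, assuming per-trial arrays of EMG samples for each condition; the function names and the use of RMS amplitude are illustrative assumptions, not the paper's reported pipeline:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of one EMG trial."""
    return float(np.sqrt(np.mean(np.square(x, dtype=float))))

def emg_increase(avh_trials, rest_trials):
    """Mean per-trial RMS difference (AVH minus rest).

    A positive value suggests higher muscle activity during covert
    hallucination episodes than at rest (illustrative sketch only).
    """
    diffs = [rms(a) - rms(r) for a, r in zip(avh_trials, rest_trials)]
    return float(np.mean(diffs))
```

Computing the same index for a control muscle (here, the forearm) and finding no increase is what rules out general muscular tension as an explanation.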
9.
Psychiatry Res ; 202(2): 110-7, 2012 May 31.
Article En | MEDLINE | ID: mdl-22703623

An important aspect of schizophrenia symptomatology is inner-outer confusion, or blurring of ego boundaries, which is linked to symptoms such as hallucinations and Schneiderian delusions. Dysfunction in the cognitive processes involved in the generation of private thoughts may contribute to blurring of the ego boundaries through increased activation in functional networks including speech- and voice-selective cortical regions. In the present study, the neural underpinnings of silent verbal thought generation and speech perception were investigated using functional magnetic resonance imaging (fMRI). Functional connectivity analysis was performed using constrained principal component analysis for fMRI (fMRI-CPCA). Group differences were observable on two functional networks: one reflecting hyperactivity in speech- and voice-selective cortical regions (e.g., bilateral superior temporal gyri (STG)) during both speech perception and silent verbal thought generation, and another involving hyperactivity in a multiple demands (i.e., task-positive) network that included Wernicke's area, during silent verbal thought generation. This set of preliminary results suggests that hyperintensity of functional networks involving voice-selective cortical regions may contribute to the blurring of ego boundaries characteristic of schizophrenia.


Brain Mapping , Cerebral Cortex/pathology , Hallucinations/pathology , Schizophrenia/pathology , Schizophrenic Psychology , Speech Perception/physiology , Adult , Analysis of Variance , Cerebral Cortex/blood supply , Female , Hallucinations/etiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Nerve Net/blood supply , Nerve Net/pathology , Oxygen/blood , Principal Component Analysis , Schizophrenia/complications , Time Factors , Vocabulary , Voice , Young Adult
10.
Lang Speech ; 52(Pt 2-3): 177-206, 2009.
Article En | MEDLINE | ID: mdl-19624029

Prosodic contrastive focus is used to attract the listener's attention to a specific part of the utterance. Although mostly conceived of as auditory/acoustic, it also has visible correlates which have been shown to be perceived. This study aimed at analyzing the auditory-visual perception of prosodic focus by developing a paradigm enabling the measurement of an auditory-visual advantage (avoiding the ceiling effect) and by examining the interaction between audition and vision. A first experiment proved the efficiency of a whispered speech paradigm for measuring an auditory-visual advantage in the perception of prosodic features. A second experiment used this paradigm to examine and characterize the auditory-visual perceptual processes, combining performance assessment (focus detection score) with reaction time measurements; it confirmed and extended the results of the first experiment. This study showed that adding vision to audition for the perception of prosodic focus can not only improve focus detection but also reduce reaction times. A further analysis suggested that audition and vision are actually integrated for the perception of prosodic focus. Visual-only perception appeared to be facilitated for whispered speech, suggesting an enhancement of visual cues in whispering. Moreover, the potential influence of the presence of facial markers on perception is discussed.


Auditory Perception , Face , Speech Perception , Visual Perception , Adult , Analysis of Variance , Cues , Female , Humans , Male , Middle Aged , Psycholinguistics , Reaction Time , Speech , Speech Acoustics , Surveys and Questionnaires , Young Adult
...