Results 1 - 14 of 14
1.
J Neurophysiol ; 128(5): 1106-1116, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36130171

ABSTRACT

Coordination between speech acoustics and manual gestures has been conceived as "not biologically mandated" (McClave E. J Psycholinguist Res 27(1): 69-89, 1998). However, recent work suggests a biomechanical entanglement between the upper limbs and the respiratory-vocal system (Pouw W, de Jonge-Hoekstra D, Harrison SJ, Paxton A, Dixon JA. Ann NY Acad Sci 1491(1): 89-105, 2021). Pouw et al. found that, for movements with a high physical impulse, speech acoustics co-occur with the physical impulses of upper limb movements. They interpret this result in terms of biomechanical coupling between arm motion and speech via the breathing system. This coupling could support the synchrony observed between speech prosody and arm gestures during communication. The present study investigates whether the effect of physical impulse on speech acoustics extends to leg motion, which is assumed to be controlled independently from oral communication. The study involved 25 native speakers of German who recalled short stories while biking with their arms or their legs. These conditions were compared with a static condition in which participants could not move their arms. Our analyses are similar to those of Pouw et al. (Pouw W, de Jonge-Hoekstra D, Harrison SJ, Paxton A, Dixon JA. Ann NY Acad Sci 1491(1): 89-105, 2021). Results reveal that intensity peaks in the acoustic signal co-occur with the peak acceleration of the legs' biking movements. However, this was not observed when biking with the arms, which corresponded to lower acceleration peaks. In contrast to intensity, F0 was not affected in either the arm or the leg condition. These results suggest that 1) the biomechanical entanglements between the respiratory-vocal system and the lower limbs may also impact speech, and 2) the physical impulse may have to reach a threshold to impact speech acoustics. NEW & NOTEWORTHY The link between speech and limb motion is an interdisciplinary challenge and a core issue in motor control and language research. Our research aims to disentangle the potential biomechanical links between the lower limbs and the speech apparatus by investigating the effect of leg movements on speech acoustics.
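The co-occurrence analysis summarized above can be illustrated with a short sketch: detect peaks in a smoothed amplitude envelope of the speech signal and in the limb acceleration trace, then count how many intensity peaks fall within a small temporal window of an acceleration peak. This is a hypothetical illustration, not the authors' actual pipeline; the sampling rate, smoothing, window width, and synthetic signals are all assumptions.

```python
# Hypothetical sketch: do acoustic intensity peaks co-occur with limb acceleration peaks?
import numpy as np
from scipy.signal import find_peaks

def cooccurrence_rate(audio, acceleration, fs, window_s=0.2):
    """Fraction of intensity peaks lying within +/- window_s of an acceleration peak."""
    envelope = np.abs(audio)                                   # crude amplitude envelope
    kernel = np.ones(int(0.05 * fs)) / int(0.05 * fs)
    smooth = np.convolve(envelope, kernel, mode="same")        # 50 ms moving average
    intensity_peaks, _ = find_peaks(smooth, distance=int(0.2 * fs))
    acceleration_peaks, _ = find_peaks(acceleration, distance=int(0.2 * fs))
    if intensity_peaks.size == 0 or acceleration_peaks.size == 0:
        return 0.0
    window = window_s * fs
    hits = sum(np.min(np.abs(acceleration_peaks - p)) <= window for p in intensity_peaks)
    return hits / intensity_peaks.size

# Synthetic example only (no claim about the study's recordings): amplitude-modulated
# noise as "speech" and a sinusoid as "limb acceleration", roughly in phase.
fs = 1000
t = np.arange(0, 10, 1 / fs)
acceleration = np.sin(2 * np.pi * 1.5 * t)
audio = (1 + np.sin(2 * np.pi * 1.5 * t + 0.2)) * np.random.default_rng(0).normal(size=t.size) * 0.1
print(f"co-occurrence rate: {cooccurrence_rate(audio, acceleration, fs):.2f}")
```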


Subject(s)
Leg , Speech , Movement , Arm , Upper Extremity
2.
Ann N Y Acad Sci ; 1505(1): 142-155, 2021 12.
Article in English | MEDLINE | ID: mdl-34418103

ABSTRACT

Breathing is variable but also highly individual. Since the 1980s, evidence of a ventilatory personality has been observed in different physiological studies. This original term refers to within-speaker consistency in breathing characteristics across days or even years. Speech breathing is a specific way to control ventilation while supporting speech planning and phonation constraints. It is highly variable between speakers but also within the same speaker, depending on utterance properties, bodily actions, and the context of an interaction. Can we nevertheless observe consistency over time in speakers' breathing profiles despite these variations? We addressed this question by analyzing the breathing profiles of 25 native speakers of German performing a narrative task on 2 days under different limb movement conditions. The individuality of breathing profiles over conditions and days was assessed by adopting methods used in physiological studies that investigated ventilatory personality. Our results suggest that speaker-specific breathing profiles in a narrative task are maintained over days and that they stay consistent despite light physical activity. These results are discussed with a focus on better understanding what speech breathing individuality is, how it can be assessed, and the types of research perspectives that this concept opens up.
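As a rough illustration of how within-speaker consistency across days can be quantified (the abstract does not specify the exact method, so the parameter, column names, and synthetic data below are assumptions), one can correlate a per-speaker breathing parameter measured on day 1 with the same parameter on day 2:

```python
# Hypothetical sketch: test-retest consistency of a per-speaker breathing parameter.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
speaker_trait = rng.normal(4.0, 0.8, size=25)        # stable per-speaker cycle duration (s)
data = pd.DataFrame({
    "speaker": np.repeat(np.arange(25), 2),
    "day": np.tile([1, 2], 25),
    "cycle_duration": np.repeat(speaker_trait, 2) + rng.normal(0, 0.2, size=50),
})

# One row per speaker and day; a high day-to-day correlation suggests an
# individual, stable breathing profile.
wide = data.pivot(index="speaker", columns="day", values="cycle_duration")
print(f"day 1 vs day 2 correlation: {wide[1].corr(wide[2]):.2f}")
```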


Subject(s)
Exercise Test/trends , Extremities/physiology , Movement/physiology , Psychomotor Performance/physiology , Respiratory Mechanics/physiology , Speech/physiology , Adult , Biomechanical Phenomena/physiology , Exercise Test/methods , Female , Humans , Male , Speech Acoustics , Young Adult
3.
Cognition ; 192: 103973, 2019 11.
Article in English | MEDLINE | ID: mdl-31252327

ABSTRACT

Reading acquisition is strongly intertwined with phoneme awareness, which relies on implicit phoneme representations. We asked whether phoneme representations emerge before literacy. We recruited two groups of children, 4- to 5-year-old preschoolers (N = 29) and 7- to 8-year-old schoolchildren (N = 24), whose phonological awareness was evaluated, and one adult control group (N = 17). We altered speakers' auditory feedback in real time to elicit persisting pronunciation changes, referred to as auditory-motor adaptation or learning. Assessing the transfer of learning at the phoneme level enabled us to investigate the developmental time course of phoneme representations. Significant transfer at the phoneme level occurred in preschoolers as well as in schoolchildren and adults. In addition, we found a relationship between auditory-motor adaptation and phonological awareness in both groups of children. Overall, these results suggest that phoneme representations emerge before literacy acquisition and that these sensorimotor representations may set the ground for phonological awareness.


Subject(s)
Language Development , Phonetics , Transfer, Psychology , Adaptation, Physiological , Child , Child, Preschool , Humans , Language Tests , Literacy , Speech , Speech Perception
4.
Exp Brain Res ; 236(12): 3191-3201, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30191261

ABSTRACT

Direct touch finger interaction on a smartphone or a tablet is now ubiquitous. However, the latency inherent in digital computation produces an average feedback delay of ~75 ms between the action of the hand and its visible effect on digital content. This delay has been shown to affect users' performance, but it is unclear whether users adapt to it and whether it influences skill learning. Previous work studied adaptation to feedback delays, but only for longer delays, with a hidden hand or indirect devices. This paper addresses adaptation to touchscreen delay in two empirical studies involving the tracking of a target moving along an elliptical path. Participants were trained on the task either at the minimal delay the system allows (~9 ms) or at a longer delay equivalent to the latency of commercial touch devices (75 ms). After 10 training sessions over a minimum of 2 weeks (Experiment 1), participants adapted to the delay. They also displayed long-term retention 7 weeks after the last training session. This adaptation generalized to a similar tracking path (e.g., an infinity symbol). We also observed generalization of learning from the longer-delay to the minimal-delay condition (Experiment 2). The delay thus does not prevent learning of the tracking skill, which suggests that delay adaptation and tracking skill could be two separate components of learning.
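The delay manipulation described above can be sketched in a few lines: the visible cursor is simply the hand trajectory buffered by a fixed number of frames, and performance is summarized as the root-mean-square distance between cursor and target. This is an illustrative simulation under assumed parameters (frame rate, noise level, simple hand model), not the experimental software.

```python
# Hypothetical sketch: tracking an elliptical target with delayed visual feedback.
import numpy as np

fs = 120                                           # assumed display/sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
target = np.column_stack([np.cos(2 * np.pi * 0.4 * t),
                          0.6 * np.sin(2 * np.pi * 0.4 * t)])   # elliptical path

def rms_error_with_delay(target, delay_ms, fs, noise=0.02):
    """Hand follows the target with noise; the visible cursor lags the hand by delay_ms."""
    lag = max(int(round(delay_ms / 1000 * fs)), 1)
    hand = target + np.random.default_rng(1).normal(0, noise, target.shape)
    cursor = np.roll(hand, lag, axis=0)            # crude stand-in for display buffering
    cursor[:lag] = hand[0]                         # before the first delayed frame, show start
    return np.sqrt(np.mean(np.sum((cursor - target) ** 2, axis=1)))

for delay in (9, 75):                              # minimal system delay vs touchscreen latency
    print(f"{delay:>2} ms delay -> RMS cursor-target error {rms_error_with_delay(target, delay, fs):.3f}")
```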


Subject(s)
Adaptation, Psychological/physiology , Feedback, Sensory/physiology , Touch/physiology , Vision, Ocular/physiology , Adult , Algorithms , Female , Fingers/physiology , Generalization, Psychological , Humans , Learning , Male , Middle Aged , Motor Skills/physiology , Psychomotor Performance/physiology , Reaction Time
5.
J Speech Lang Hear Res ; 61(7): 1613-1625, 2018 07 13.
Article in English | MEDLINE | ID: mdl-29931285

ABSTRACT

Purpose: Words, syllables, and phonemes have each been regarded as basic encoding units of speech production in various psycholinguistic models. The present article investigates the role of each unit in the interface with speech articulation, using a paradigm from motor control research. Method: Seventy-six native speakers of French were trained to change their production of /be/ in response to an auditory feedback perturbation (auditory-motor learning). We then assessed the magnitude of learning transfer from /be/ to the syllables of 2 pseudowords (/bepe/ and /pebe/) and 1 real word (/bebe/), as well as the aftereffect on the same utterance (/be/), with a between-subjects design. This made it possible to contrast the amplitude of transfer at the levels of the utterance, the syllable, and the phoneme, depending on the position in the word. Linear mixed models allowed us to study the amplitude as well as the dynamics of the transfer and the aftereffect over trials. Results: Transfer from the training utterance /be/ was observed for all vowels of the test utterances but was larger to the syllable /be/ than to the syllable /pe/ in word-initial position and larger to the first syllable than to the second syllable of the utterance. Conclusions: Our study suggests that words, syllables, and phonemes may all contribute to the definition of speech motor commands. In addition, the observation of a serial order effect raises new questions related to the connection between psycholinguistic models and speech motor control approaches.
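A minimal sketch of the statistical approach mentioned above (a linear mixed model of transfer amplitude over trials, with participants as a random factor) is given below. The variable names, the synthetic data, and the exact model formula are assumptions for illustration, not the authors' analysis script.

```python
# Hypothetical sketch: mixed model of transfer amplitude over trials and syllable identity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for subject in range(20):
    base = rng.normal(0.4, 0.1)                    # subject-specific transfer level
    for trial in range(30):
        for syllable in ("be", "pe"):
            transfer = (base + (0.1 if syllable == "be" else 0.0)
                        - 0.005 * trial + rng.normal(0, 0.05))
            rows.append({"subject": subject, "trial": trial,
                         "syllable": syllable, "transfer": transfer})
data = pd.DataFrame(rows)

# Random intercept per participant; fixed effects of trial, syllable, and their interaction.
model = smf.mixedlm("transfer ~ trial * syllable", data, groups=data["subject"])
print(model.fit().summary())
```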


Subject(s)
Phonetics , Speech/physiology , Transfer, Psychology/physiology , Verbal Learning/physiology , Adult , Female , France , Humans , Language , Male , Psycholinguistics , Speech Production Measurement
6.
J Speech Lang Hear Res ; 61(4): 957-972, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29635399

ABSTRACT

Purpose: This work evaluates whether seeing the speaker's face could improve the speech intelligibility of adults with Down syndrome (DS). This is not straightforward, because DS induces a number of anatomical and motor anomalies affecting the orofacial zone. Method: A speech-in-noise perception test was used to evaluate the intelligibility of 16 consonants (Cs) produced in a vowel-consonant-vowel context (Vo = /a/) by 4 speakers with DS and 4 control speakers. Forty-eight naïve participants were asked to identify the stimuli in 3 modalities: auditory (A), visual (V), and auditory-visual (AV). The probability of correct responses was analyzed, as well as AV gain, confusions, and transmitted information, as a function of modality and phonetic features. Results: The probability of correct response follows the trend AV > A > V, with smaller values for the DS than the control speakers in A and AV but not in V. This trend depended on the consonant: the V information particularly improved the transmission of place of articulation and, to a lesser extent, of manner, whereas voicing remained specifically altered in DS. Conclusions: The results suggest that the V information is intact in the speech of people with DS and improves the perception of some phonetic features of Cs in a similar way as for control speakers. This result has implications for further studies, rehabilitation protocols, and specific training of caregivers. Supplemental Material: https://doi.org/10.23641/asha.6002267.
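The "transmitted information" measure mentioned in the abstract is typically computed as the mutual information between stimulus and response categories in a confusion matrix (in the Miller and Nicely tradition). The sketch below shows that computation on a made-up 3x3 matrix; the counts are purely illustrative, not study data.

```python
# Hypothetical sketch: transmitted (mutual) information from a stimulus-response confusion matrix.
import numpy as np

def transmitted_information(confusion):
    """Mutual information in bits between stimulus (rows) and response (columns)."""
    p = confusion / confusion.sum()
    p_stim = p.sum(axis=1, keepdims=True)          # stimulus marginal
    p_resp = p.sum(axis=0, keepdims=True)          # response marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (p_stim * p_resp))
    return float(np.nansum(terms))                 # zero-count cells contribute nothing

confusion = np.array([[40,  5,  5],                # illustrative counts only
                      [ 6, 38,  6],
                      [ 4,  7, 39]])
print(f"transmitted information: {transmitted_information(confusion):.2f} bits")
```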


Subject(s)
Down Syndrome/psychology , Facial Recognition , Phonetics , Speech Perception , Adult , Female , Humans , Male , Speech Intelligibility , Young Adult
7.
Front Psychol ; 8: 1689, 2017.
Article in English | MEDLINE | ID: mdl-29062287

ABSTRACT

Manual gestures can facilitate problem solving as well as language and conceptual learning. Both seeing and making gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to the learner's body. Two groups of participants were asked to watch a video lecture on forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect, however, was significant only in long-term assessments. This suggests that (1) the integration of motor actions and knowledge may require sleep, and (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.

8.
Philos Trans R Soc Lond B Biol Sci ; 369(1658): 20130399, 2014 Dec 19.
Article in English | MEDLINE | ID: mdl-25385777

ABSTRACT

Physiological rhythms are sensitive to social interactions and could contribute to defining social rhythms. Nevertheless, our knowledge of the involvement of breathing in conversational turn exchanges remains limited. In this paper, we addressed the idea that breathing may contribute to timing and coordination between dialogue partners. The relationships between turns and breathing were analysed in unconstrained face-to-face conversations involving female speakers. No overall relationship between breathing and turn-taking rates was observed, as breathing rate was specific to the subjects' activity in dialogue (listening versus taking the turn versus holding the turn). A general interpersonal coordination of breathing over the whole conversation was not evident. However, specific coordinative patterns were observed in shorter time windows when participants engaged in taking turns. The type of turn-taking had an effect on the corresponding breathing coordination. Most smooth and interrupted turns were taken just after an inhalation, with specific profiles of alignment to the partner's breathing. Unsuccessful attempts to take the turn were initiated late in the exhalation phase and with no clear interpersonal coordination. Finally, breathing profiles at turn-taking were different from those at turn-holding. The results support the idea that breathing is actively involved in turn-taking and turn-holding.
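One way to make the turn-timing finding concrete is to express each turn onset as a position within the current breathing cycle (during inhalation, or as the fraction of the exhalation phase already elapsed). The sketch below does this on synthetic cycle and turn times; it is an assumed, simplified stand-in for the study's annotation and analysis scheme.

```python
# Hypothetical sketch: locate turn onsets within the breathing cycle.
# Each cycle is (inhalation_start, exhalation_start, cycle_end) in seconds; times are synthetic.
cycles = [(0.0, 1.2, 4.0), (4.0, 5.0, 8.5), (8.5, 9.6, 12.0)]
turn_onsets = [1.4, 7.9, 9.7]                      # illustrative turn-start times

for onset in turn_onsets:
    for inh_start, exh_start, cycle_end in cycles:
        if inh_start <= onset < cycle_end:
            if onset < exh_start:
                print(f"turn at {onset:.1f} s: during inhalation")
            else:
                fraction = (onset - exh_start) / (cycle_end - exh_start)
                print(f"turn at {onset:.1f} s: {100 * fraction:.0f}% into exhalation")
            break
```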


Subject(s)
Interpersonal Relations , Periodicity , Respiratory Mechanics/physiology , Verbal Behavior/physiology , Adult , Biomechanical Phenomena , Female , Germany , Humans , Linear Models , Speech Perception/physiology , Speech Production Measurement , Time Factors
9.
J Neurosci ; 34(31): 10339-46, 2014 Jul 30.
Article in English | MEDLINE | ID: mdl-25080594

ABSTRACT

Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback, and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that the changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes in the perceptual classification of speech sounds.


Subject(s)
Auditory Pathways/physiology , Feedback, Sensory/physiology , Motor Activity/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Psychoacoustics , Speech Production Measurement , Vocabulary , Young Adult
10.
Front Psychol ; 4: 906, 2013.
Article in English | MEDLINE | ID: mdl-24367344

ABSTRACT

The current paper extends previous work on breathing during speech perception and provides supplementary material regarding the hypothesis that adaptation of breathing during perception "could be a basis for understanding and imitating actions performed by other people" (Paccalin and Jeannerod, 2000). The experiments were designed to test how differences in reader breathing, whether due to speaker-specific characteristics or induced by changes in loudness level or speech rate, influence listener breathing. Two readers (a male and a female) were pre-recorded while reading short texts with normal and then loud speech (both readers) or slow speech (female only). These recordings were then played back to 48 female listeners. The movements of the rib cage and abdomen were analyzed for both the readers and the listeners. Breathing profiles were characterized by the movement expansion due to inhalation and the duration of the breathing cycle. We found that loudness and speech rate each affected the readers' breathing in different ways. Listener breathing differed when listening to the male versus the female reader and across the different speech modes. However, differences in listener breathing were not systematically in the same direction as reader differences. Listener breathing was strongly sensitive to the order of presentation of speech modes and displayed some adaptation over the time course of the experiment in some conditions. In contrast to the specific alignments of breathing previously observed in face-to-face dialog, no clear evidence for listener-reader alignment in breathing was found in this purely auditory speech perception task. The results and methods are relevant to the question of the involvement of physiological adaptations in speech perception and to the basic mechanisms of listener-speaker coupling.
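The two breathing-profile parameters named above (movement expansion due to inhalation and breathing-cycle duration) can be extracted from a respiratory trace by locating troughs (end of exhalation) and peaks (end of inhalation). The sketch below does this on a synthetic rib cage signal; the sampling rate, peak-detection settings, and signal are assumptions.

```python
# Hypothetical sketch: per-cycle inhalation expansion and cycle duration from a respiratory trace.
import numpy as np
from scipy.signal import find_peaks

fs = 50                                            # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
ribcage = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(3).normal(size=t.size)

peaks, _ = find_peaks(ribcage, distance=int(2 * fs))     # ends of inhalation
troughs, _ = find_peaks(-ribcage, distance=int(2 * fs))  # ends of exhalation

cycles = []
for start, end in zip(troughs[:-1], troughs[1:]):        # one cycle = trough to trough
    in_cycle = peaks[(peaks > start) & (peaks < end)]
    if in_cycle.size:
        cycles.append({"duration_s": (end - start) / fs,
                       "expansion": ribcage[in_cycle[0]] - ribcage[start]})

print(f"{len(cycles)} cycles, "
      f"mean duration {np.mean([c['duration_s'] for c in cycles]):.2f} s, "
      f"mean expansion {np.mean([c['expansion'] for c in cycles]):.2f} (arbitrary units)")
```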

11.
J Neurophysiol ; 107(6): 1711-7, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22190628

ABSTRACT

Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning.


Subject(s)
Generalization, Psychological , Speech , Transfer, Psychology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Speech Acoustics , Speech Perception
12.
J Neurosci ; 31(7): 2657-62, 2011 Feb 16.
Article in English | MEDLINE | ID: mdl-21325534

ABSTRACT

The brain easily generates the movement that is needed in a given situation. Yet surprisingly, the results of experimental studies suggest that it is difficult to acquire more than one skill at a time. To do so, it has generally been necessary to link the required movement to arbitrary cues. In the present study, we show that speech motor learning provides an informative model for the acquisition of multiple sensorimotor skills. During training, subjects were required to repeat aloud individual words in random order while auditory feedback was altered in real-time in different ways for the different words. We found that subjects can quite readily and simultaneously modify their speech movements to correct for these different auditory transformations. This multiple learning occurs effortlessly without explicit cues and without any apparent awareness of the perturbation. The ability to simultaneously learn several different auditory-motor transformations is consistent with the idea that, in speech motor learning, the brain acquires instance-specific memories. The results support the hypothesis that speech motor learning is fundamentally local.


Subject(s)
Feedback, Sensory/physiology , Learning/physiology , Motor Skills/physiology , Movement/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Cues , Female , Humans , Male , Vocabulary , Young Adult
13.
J Speech Lang Hear Res ; 51(6): 1507-21, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18695015

ABSTRACT

PURPOSE: This article investigates jaw-finger coordination in a task involving pointing to a target while naming it with a word stressed on the first syllable (ˈCVCV, e.g., /ˈpapa/) versus on the second syllable (CVˈCV, e.g., /paˈpa/). According to the authors' working hypothesis, the pointing apex (gesture extremum) would be synchronized with the apex of the jaw-opening gesture corresponding to the stressed syllable. METHOD: Jaw and finger motions were recorded using Optotrak (Northern Digital, Waterloo, Ontario, Canada). The effects of stress position on jaw-finger coordination were tested across different target positions (near vs. far) and different consonants in the target word (/t/ vs. /p/). Twenty native speakers of Brazilian Portuguese participated in the experiment (all conditions). RESULTS: The jaw response starts earlier, and the finger-target alignment period is longer, for ˈCVCV words than for CVˈCV ones. The apex of the jaw-opening gesture for the stressed syllable appears synchronized with the onset of the finger-target alignment period (corresponding to the pointing apex) for ˈCVCV words and with the offset of that period for CVˈCV words. CONCLUSIONS: For both stress conditions, the stressed syllable occurs within the finger-target alignment period because of tight finger-jaw coordination. This result is interpreted as evidence for an anchoring of the speech deictic site (the part of speech that "shows") in the pointing gesture.


Subject(s)
Fingers/physiology , Jaw/physiology , Movement/physiology , Psychomotor Performance/physiology , Speech/physiology , Adolescent , Adult , Female , Humans , Male , Phonetics , Time Factors
14.
J Acoust Soc Am ; 121(6): 3740-54, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17552724

ABSTRACT

This paper investigates the coordination between the jaw, the tongue tip, and the lower lip during repetition with rate increase of labial-to-coronal (LaCo) consonant-vowel-consonant-vowel disyllables (e.g., /pata/) and coronal-to-labial (CoLa) ones (e.g., /tapa/) by French speakers. For the two types of disyllables: (1) the speeding process induces a shift from two jaw cycles per disyllable to a single cycle; (2) this shift modifies the coordination between the jaw and the constrictors; and (3) it comes with a progression toward either a LaCo attractor (e.g., /pata/ or /tapa/ → /patá/ → /ptá/) or a CoLa one (e.g., /pata/ or /tapa/ → /tapá/ → /tpá/). Yet, (4) the LaCo attractor is clearly favored regardless of the initial sequencing. These results are interpreted as evidence that a LaCo CVCV disyllable could be a more stable coordinative pattern for the lip-tongue-jaw motor system than a CoLa one. They are discussed in relation to the so-called LC effect, that is, the preference for LaCo associations over CoLa ones in CV.CV disyllables both in the world's languages and in infants' first words.


Subject(s)
Lip/physiology , Speech/physiology , Tongue/physiology , France , Humans , Jaw/anatomy & histology , Jaw/physiology , Language , Models, Biological , Software , Sound , Speech Articulation Tests , Tongue/anatomy & histology