Results 1 - 20 of 48
1.
Neuroimage; 117: 375-85, 2015 Aug 15.
Article in English | MEDLINE | ID: mdl-26044859

ABSTRACT

The present study aimed at determining whether the processing of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and, if so, whether this integration supports the existence of a common control mechanism. Experiment 1 aimed at determining whether and how gesture is integrated with word. Participants performed a semantic priming paradigm with a lexical decision task, pronouncing a target word that was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word meaning. Prime duration (100, 250, or 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectra and mean lip velocity increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to meaningful gestures was shorter in the congruent than in the incongruent condition. Experiment 2 aimed at determining whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the prime word was meaningful and congruent, as compared to a meaningless prime. The increase was, however, present for every prime word duration. Experiment 3 aimed at determining whether comprehension of the symbolic prime gesture makes use of motor simulation. Transcranial magnetic stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. Motor evoked potentials of the first dorsal interosseous increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any hand motor simulation in the comprehension of the prime word. Thus, the same type of integration with the target word was present for both prime gestures and prime words. It probably followed comprehension of the signal, which relied on motor simulation for gestures and on direct access to semantics for words.


Subjects
Motor Evoked Potentials/physiology, Gestures, Motor Cortex/physiology, Speech Production Measurement/methods, Transcranial Magnetic Stimulation/methods, Verbal Behavior/physiology, Adult, Female, Humans, Male, Psychomotor Performance/physiology, Repetition Priming/physiology, Semantics, Time Factors, Young Adult
2.
Brain Topogr; 28(4): 591-605, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25124860

ABSTRACT

What happens if you see a person pronouncing the word "go" after having gestured "stop"? Unlike iconic gestures, which must be accompanied by verbal language in order to be unambiguously understood, symbolic gestures are so conventionalized that they can be effortlessly understood in the absence of speech. Previous studies proposed that gesture and speech belong to a unique communication system. From an electrophysiological perspective, N400 modulation has been considered the main variable indexing the interplay between two stimuli. However, while many studies have tested this effect between iconic gestures and speech, little is known about the ability of an emblem to modulate the neural response to subsequently presented words. Using high-density EEG, the present study aimed at evaluating the presence of an N400 effect, and its spatiotemporal dynamics in terms of cortical activations, when emblems primed the observation of words. Participants were presented with symbolic gestures followed by a semantically congruent or incongruent verb. An N400 modulation was detected, with larger negativity when gesture and word were incongruent. Source localization during the N400 time window revealed the activation of different portions of the temporal cortex according to gesture-word congruence. Our data provide further evidence of how the observation of an emblem influences verbal language perception, and of how this interplay is mainly instantiated by different portions of the temporal cortex.


Subjects
Cerebral Cortex/physiology, Comprehension/physiology, Gestures, Semantics, Adult, Electroencephalography, Evoked Potentials, Female, Humans, Male, Temporal Lobe/physiology, Visual Perception/physiology, Young Adult
3.
Eur J Neurosci; 39(5): 841-51, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24289090

ABSTRACT

Request and emblematic gestures, although both communicative, differ in terms of social valence. Indeed, only the former are used to initiate/maintain/terminate an actual interaction. If such a difference exists, a relevant social cue, i.e. eye contact, should have a different impact on the neural underpinnings of the two types of gesture. We measured blood oxygen level-dependent signals, using functional magnetic resonance imaging, while participants watched videos of an actor, either blindfolded or not, performing emblems, request gestures, or meaningless control movements. A left-lateralized network was more activated by both types of communicative gestures than by meaningless movements, regardless of the accessibility of the actor's eyes. Strikingly, when eye contact was taken into account as a factor, a right-lateralized network was more strongly activated by emblematic gestures performed by the non-blindfolded actor than by those performed by the blindfolded actor. Such modulation possibly reflects the integration of information conveyed by the eyes with the representation of emblems. Conversely, a wider right-lateralized network was more strongly activated by request gestures performed by the blindfolded than by the non-blindfolded actor. This probably reflects the effect of a conflict between the observed action and its associated contextual information, in which relevant social cues are missing.


Subjects
Brain Mapping, Brain/physiology, Cues (Psychology), Gestures, Interpersonal Relations, Adult, Female, Functional Laterality/physiology, Hand, Humans, Computer-Assisted Image Interpretation, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Young Adult
4.
Exp Brain Res; 232(7): 2431-8, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24748482

ABSTRACT

The present experiment aimed at verifying whether the spatial alignment effect modifies the kinematic parameters of pantomimed reaching-grasping of cups located at reachable and unreachable distances. The cup's handle could be oriented either to the right or to the left, thus inducing a grasp movement that could be either congruent or incongruent with the pantomime. Incongruence/congruence induced an increase/decrease in maximal finger aperture, which was observed when the cup was located near, but not far from, the body. This effect probably depended on the influence of the size of the cup body on pantomime control when, in the incongruent condition, the cup body was closer to the grasping hand than the handle was. Cup distance (near vs. far) influenced the pantomime even though the pantomime was always executed in the same peripersonal space. Specifically, temporal parameters of the arm and hand, as well as movement amplitudes, were affected by actual cup distance. The results indicate that, when executing a reach-to-grasp pantomime, the affordance related to the use of the object was instantiated (and, in particular, the spatial alignment effect became effective), but only when the object could actually be reached. Cup distance (an extrinsic object property) influenced the affordance independently of the possibility of actually reaching the target.


Subjects
Hand Strength, Psychomotor Performance/physiology, Space Perception/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Fingers/innervation, Humans, Male, Movement/physiology, Photic Stimulation, Young Adult
5.
Cogn Process; 15(1): 85-92, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24113915

ABSTRACT

Does the comprehension of both action-related and abstract verbs rely on motor simulation? In a behavioral experiment using a semantic task, response times to hand-action-related verbs were shorter than those to abstract verbs, and both decreased with repeated presentation. In a transcranial magnetic stimulation (TMS) experiment, single-pulse stimulation was randomly delivered over the hand motor area of the left primary motor cortex to measure corticospinal excitability 300 or 500 ms after verb presentation. Two blocks of trials were run. In each block, the same verbs were randomly presented. In the first block, stimulation induced an increase in motor evoked potentials only when TMS was applied 300 ms after presentation of an action-related verb. In the second block, no modulation of the motor cortex was found as a function of verb type or stimulation delay. These results confirm that motor simulation can be used to understand action verbs, but not abstract verbs. Moreover, they suggest that, with repetition, the semantic processing of action verbs no longer requires activation of the primary motor cortex.


Subjects
Comprehension/physiology, Motor Evoked Potentials/physiology, Reaction Time/physiology, Semantics, Transcranial Magnetic Stimulation, Acoustic Stimulation, Adult, Analysis of Variance, Electromyography, Female, Humans, Male, Time Factors, Young Adult
6.
Exp Brain Res; 218(4): 539-49, 2012 May.
Article in English | MEDLINE | ID: mdl-22411580

ABSTRACT

The present study aimed at determining whether the observation of two functionally compatible artefacts, that is, artefacts that can jointly contribute to achieving a specific function, automatically activates a motor programme of interaction between the two objects. To this purpose, an interference paradigm was used in which an artefact (a bottle filled with orange juice), the target of a reaching-grasping and lifting sequence, was presented alone or with a non-target object (distractor) of the same or a different semantic category, which was functionally compatible with the target or not. In experiment 1, the bottle was presented alone, with an artefact distractor (a sphere), or with a natural distractor (an apple). In experiment 2, the bottle was presented with either the apple or a glass (an artefact) filled with orange juice, whereas in experiment 3, either an empty or a filled glass was presented. In control experiment 4, we compared the kinematics of reaching-grasping and pouring with those of reaching-grasping and lifting. The kinematics of reach, grasp, and lift were affected by distractor presentation. However, no difference was observed between the two distractors belonging to different semantic categories. In contrast, the presence of the empty rather than the filled glass affected the kinematics of the actual grasp. This suggests that actual functional compatibility between the target (the bottle) and the distractor (the empty glass) was necessary to automatically activate a programme of interaction (i.e. pouring) between the two artefacts. This programme affected the programme actually executed (i.e. lifting). The results of the present study indicate that, in addition to affordances related to intrinsic object properties, "working affordances" related to a specific use of an artefact with another object can be activated on the basis of functional compatibility.


Subjects
Executive Function/physiology, Hand Strength/physiology, Hand, Movement/physiology, Psychomotor Performance/physiology, Adult, Analysis of Variance, Attention/physiology, Biomechanical Phenomena, Decision Making/physiology, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
7.
Exp Brain Res; 203(4): 637-46, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20445966

ABSTRACT

The present study aimed at verifying whether and why sequences of actions directed to oneself are facilitated compared with action sequences directed to conspecifics. In experiment 1, participants reached to grasp a piece of food and brought it either to their own mouth for self-feeding or to the mouth of a conspecific for feeding. In control conditions, they executed the same sequence to place the piece of food into a mouth-like aperture in a flat container placed over either their own mouth or the mouth of a conspecific. Kinematic analysis showed that the actions of reaching and bringing were faster when directed to the participant's own body, especially for self-feeding. The data support the hypothesis that reaching to grasp and bringing to one's own body, and in particular to one's own mouth for self-feeding, form an automatic sequence, because it results from more frequent execution and from coordination between different effectors of one's own body, such as arm and mouth. In contrast, the same sequence directed toward a conspecific is not automatic and requires more accuracy, probably because it is guided by social intentions. This hypothesis was supported by the results of control experiment 2, in which we compared the kinematics of reaching to grasp and placing the piece of food into the mouth of a conspecific (i.e. feeding) with those of reaching to grasp and placing the same piece of food into a mouth-like aperture in a human body shape (i.e. placing). Indeed, the entire sequence was slowed down during feeding compared with placing.


Subjects
Eating, Interpersonal Relations, Movement/physiology, Psychomotor Performance/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Fingers/innervation, Hand Strength/physiology, Humans, Male, Mouth, Reaction Time/physiology, Time Factors, Wrist/innervation, Young Adult
8.
Exp Brain Res; 196(3): 403-12, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19484464

ABSTRACT

Is the motor system involved in language processing? In order to clarify this issue, we carried out three behavioral experiments using go/no-go and choice paradigms. In all experiments, we used a semantic decision task with early delivery of the go signal (i.e. while the language material was still being processed). Italian verbs expressing hand actions, foot actions, or abstract content served as stimuli. Participants executed intransitive (Experiment 1) or transitive (Experiment 2) actions with their right hand in response to the acoustic presentation of action-related verbs and refrained from responding to abstract verbs. The kinematics of the actions were slowed by hand-action-related verbs compared with foot-action-related verbs. In Experiment 3, hand-related and foot-related verbs were presented. Participants responded to hand-related and foot-related verbs with their hand and foot, respectively (compatible condition), and in another block of trials with their foot and hand, respectively (incompatible condition). In the compatible condition, the onset of the action was faster, whereas the kinematics of the action were slower. The present findings suggest complete activation of verb-related motor programs during language processing. The data are discussed in support of the hypothesis that this complete activation is a necessary requisite for understanding the exact meaning of action words, because the goal and consequences of the actions are represented.


Subjects
Comprehension/physiology, Judgment/physiology, Movement/physiology, Semantics, Verbal Behavior/physiology, Acoustic Stimulation, Adult, Female, Foot/physiology, Hand/physiology, Humans, Male, Neuropsychological Tests, Psychomotor Performance/physiology, Reaction Time/physiology, Young Adult
9.
Neurosci Biobehav Rev; 32(3): 423-37, 2008.
Article in English | MEDLINE | ID: mdl-17976722

ABSTRACT

Models of human vision propose a division of labor between vision-for-action (identified with the V1-PPT dorsal stream) and vision-for-perception (the V1-IT ventral stream). The idea has been successful in explaining a host of neuropsychological and behavioral data, but has remained controversial in predicting that visually guided actions should be immune to visual illusions. Here we evaluate this prediction by reanalyzing 33 independent studies of rapid pointing involving the Müller-Lyer or related illusions. We find that illusion effects vary widely across studies, from around zero to values comparable to perceptual effects. After examining several candidate factors both between and within participants, we show that almost 80% of this variability is explained well by two general concepts. The first is that the illusion has little effect when pointing is programmed from viewing the target rather than from memory. The second is that the illusion effect is weakened when participants learn to selectively attend to target locations over repeated trials. These results are largely in accord with the vision-for-action vs. vision-for-perception distinction. However, they also suggest a potential involvement of learning and attentional processes during motor preparation. Whether these are specific to visuomotor mechanisms or shared with vision-for-perception remains to be established.


Subjects
Illusions/physiology, Memory/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Spatial Behavior/physiology, Attention/physiology, Field Dependence-Independence, Humans, Neurological Models, Reaction Time/physiology
10.
J Physiol Paris; 102(1-3): 21-30, 2008.
Article in English | MEDLINE | ID: mdl-18440209

ABSTRACT

In the present review, we summarize evidence that the control of spoken language shares the system involved in the control of arm gestures. Studies of primate premotor cortex have revealed the so-called mirror system, as well as a system of dual commands to hand and mouth. These systems may have evolved initially in the context of ingestion, and later formed a platform for combined manual and vocal communication. In humans, manual gestures are integrated with speech production when they accompany speech. Lip kinematics and parameters of voice spectra during speech production are influenced by executing or observing transitive actions (i.e. actions guided by an object). Manual actions also play an important role in language acquisition in children, from the babbling stage onwards. Behavioural data reported here even show a reciprocal influence between words and symbolic gestures, and studies employing neuroimaging and repetitive transcranial magnetic stimulation (rTMS) suggest that the system governing both speech and gesture is located in Broca's area.


Subjects
Hand, Language Development, Manual Communication, Speech, Animals, Brain Mapping, Gestures, Humans
11.
Brain Res; 1218: 166-80, 2008 Jul 07.
Article in English | MEDLINE | ID: mdl-18514170

ABSTRACT

The present study aimed to determine whether the observation of different grasps of the same object elicits automatic imitation of the kinematics of those grasps, and whether this process influences the estimation of intrinsic target properties. In experiments 1 and 2, participants reached and grasped differently sized spheres after observing the same objects being grasped with two different types of grasp (power and precision grasp) and, consequently, with different hand kinematics. The observed grasp kinematics were imitated, especially when vision of the target and of the acting hand was precluded. In experiments 3, 4, and 5, participants matched the diameter of the spheres, either perceived or imagined, by opening their thumb and index finger (i.e. the fingers used to grasp the objects) after observation of the two types of grasp. Finger opening was larger after observation of a power grasp than of a precision grasp, consistent with the notion that power and precision grasps are preferentially used to grasp large and small objects, respectively. However, the effect was weak for the small object, probably because participants also imitated the final position of the thumb and index finger, which were closer to each other in the power grasp. Finally, the participants for whom the effect was stronger reported having perceived more object sizes than were actually presented. The results suggest that imitation evoked by a mirror system is involved in planning how to interact with an object and in the estimation of the properties extracted for sensory-motor integration.


Subjects
Hand Strength/physiology, Photic Stimulation/methods, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Fingers/physiology, Humans, Male, Touch/physiology
12.
Exp Brain Res; 184(4): 599-603, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18183374

ABSTRACT

Upper limb gestures, as well as transitive actions (i.e. actions performed on an object), affect speech when either executed or observed. Broca's area seems to be involved in the integration of the two motor representations of arm and mouth (Bernardis and Gentilucci, Neuropsychologia, 44:178-190, 2006; Gentilucci et al., Eur J Neurosci, 19:190-202, 2004a; Neuropsychologia, 42:1554-1567, 2004b; J Cogn Neurosci, 18:1059-1074, 2006). These data are relevant to the hypothesis that language evolved from manual gestures and was gradually transformed into speech by means of a system of dual motor commands to hand and mouth (Gentilucci and Corballis, Neurosci Biobehav Rev, 30:949-960, 2006). The present study aimed to verify whether this system of integration between gestures (and transitive actions) and speech is also involved in the language development of infants. Vocalizations of infants aged between 11 and 13 months were recorded both during manipulation of objects of different sizes and during request arm gestures towards the same objects presented by the experimenter. Frequencies in the voice spectra increased when the infants manipulated or gestured towards large objects, compared with the same activities directed to small objects. These data suggest that intrinsic properties of an object that evoke commands for manual interaction are used to identify that object and to communicate.


Subjects
Frontal Lobe/growth & development, Frontal Lobe/physiology, Gestures, Speech Perception/physiology, Visual Perception/physiology, Female, Humans, Infant, Language Development, Male, Speech
13.
Cortex; 100: 95-110, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29079343

ABSTRACT

Sensorimotor and affective brain systems are known to be involved in language processing. However, it is still debated whether this involvement is a crucial step of semantic processing or whether it depends on the specific context or strategy adopted to solve the task at hand. The present electroencephalographic (EEG) study aimed at investigating which brain circuits are engaged when processing written verbs. By aligning event-related potentials (ERPs) both to verb onset and to the motor response indexing completion of a semantic categorization task, we were able to dissociate the stimulus-related and response-related cognitive components at play. EEG source reconstruction showed that, while the recruitment of sensorimotor fronto-parietal circuits was time-locked to action verb onset, a left temporo-parietal circuit was time-locked to task completion. Crucially, comparing the time courses of these bottom-up and top-down cognitive components showed that frontal motor involvement precedes the task-related temporo-parietal activity. The present findings suggest that the recruitment of fronto-parietal sensorimotor circuits is independent of the specific strategy adopted to solve a semantic task and that, given this temporal hierarchy, it may provide crucial information to the brain circuits involved in the categorization task. Finally, we discuss how the present results may contribute to the clinical literature on patients affected by disorders that specifically impair the motor system.


Subjects
Brain/physiology, Psychomotor Performance/physiology, Speech Perception/physiology, Verbal Behavior/physiology, Adolescent, Adult, Evoked Potentials/physiology, Female, Humans, Male, Movement/physiology, Reaction Time, Semantics, Young Adult
14.
Neuropsychologia; 114: 243-250, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29729959

ABSTRACT

BACKGROUND: Strong embodiment theories claim that the representation of action language is grounded in the sensorimotor system, which would be crucial to semantic understanding. However, there is considerable disagreement in the literature about the neural mechanisms involved in abstract (symbolic) language comprehension. OBJECTIVE: In the present study, we investigated the role of motor context in the semantic processing of abstract language. We hypothesized that motor cortex excitability during abstract word comprehension could be modulated by the prior presentation of a stimulus associating congruent motor content (i.e., a semantically related gesture) with the word. METHODS AND RESULTS: We administered a semantic priming paradigm in which gesture postures (primes) were followed by semantically congruent verbal stimuli (targets; meaningful or meaningless words). Transcranial magnetic stimulation was delivered to the left motor cortex 100, 250, or 500 ms after the presentation of each target. Motor evoked potentials of the hand muscle significantly increased for meaningful compared with meaningless words, but only in the earlier phase of semantic processing (100 and 250 ms from target onset). CONCLUSION: The results suggest that the gestural motor representation was integrated with the corresponding word meaning in order to accomplish (and facilitate) the lexical task. We conclude that motor context is crucial in revealing motor system involvement during the semantic processing of abstract language.


Subjects
Comprehension, Motor Evoked Potentials/physiology, Motor Cortex/physiology, Semantics, Transcranial Magnetic Stimulation/methods, Visual Perception/physiology, Adult, Analysis of Variance, Female, Gestures, Humans, Male, Photic Stimulation, Psychomotor Performance, Reaction Time/physiology, Time Factors, Young Adult
15.
Neuropsychologia; 45(3): 608-15, 2007 Feb 01.
Article in English | MEDLINE | ID: mdl-16698051

ABSTRACT

Does listening to and observing the speaking interlocutor influence phoneme production? In two experiments, female participants were required to recognize and then repeat the string of phonemes /aba/ presented by actors visually, acoustically, or audiovisually. In experiment 1, a male actor presented the string of phonemes, and the participants' lip kinematics and voice spectra were compared with those of a reading control condition. In experiment 2, female and male actors presented the string of phonemes, and the lip kinematics and voice spectra of the participants' responses to the male actors were compared with those to the female actors (control condition). In both experiments, the lip kinematics in the visual presentations and the voice spectra in the acoustic presentations changed relative to the control conditions, approaching the male actors' values, which differed from those of the female participants and actors. The variation in lip kinematics also induced changes in voice formants, but only in the visual presentation. The data suggest that features of both the lip kinematics and the voice spectra tend to be automatically imitated when repeating a string of phonemes presented by a visible and/or audible speaking interlocutor. The use of imitation, in place of the usual lip kinematics and vocal features, suggests an automatic and unconscious tendency of the perceiver to interact closely with the interlocutor. This is in accordance with the idea that resonant circuits are activated by the activity of the mirror system, which relates observation to execution of arm and mouth gestures.


Subjects
Imitative Behavior/physiology, Phonetics, Speech, Verbal Behavior, Adult, Biomechanical Phenomena, Female, Humans, Lipreading, Male, Photic Stimulation/methods, Spectrum Analysis, Speech Production Measurement/methods, Time Factors, Visual Perception/physiology, Voice
16.
Front Hum Neurosci; 11: 565, 2017.
Article in English | MEDLINE | ID: mdl-29204114

ABSTRACT

During social interaction, actions and words may be expressed in different ways, for example gently or rudely. A handshake can be gentle or vigorous and, similarly, tone of voice can be pleasant or rude. These aspects of social communication have been named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, it is not yet clear whether the vitality forms expressed by an agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, we carried out a kinematic study to assess whether and how the visual and auditory properties of vitality forms expressed by others influence the motor responses of participants. In particular, participants were presented with video clips showing a male and a female actor performing a "giving request" (give me) or a "taking request" (take it) in visual, auditory, or mixed (visual and auditory) modalities. Most importantly, requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that the vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality by which they were conveyed.

17.
Front Psychol; 8: 2339, 2017.
Article in English | MEDLINE | ID: mdl-29403408

ABSTRACT

It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results showed dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. In contrast, during execution, the perception of a smile was associated with facilitation of the incongruent movement, i.e., lip protrusion, in terms of shorter duration and higher velocity. The same effect occurred in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, that is, the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements, at different levels: while congruent motor features facilitate a general motor response, motor execution can be speeded by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to Basic Emotion Theory and the embodied cognition framework.

18.
Neurosci Biobehav Rev; 30(7): 949-60, 2006.
Article in English | MEDLINE | ID: mdl-16620983

ABSTRACT

There are a number of reasons to suppose that language evolved from manual gestures. We review evidence that the transition from primarily manual to primarily vocal language was a gradual process, and that it is best understood if speech itself is regarded as a gestural rather than an acoustic system, an idea captured by the motor theory of speech perception and by articulatory phonology. Studies of primate premotor cortex, and in particular of the so-called "mirror system", suggest a dual hand/mouth command system that may have evolved initially in the context of ingestion, and later formed a platform for combined manual and vocal communication. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing hand movements, and manual actions also play an important role in the development of speech, from the babbling stage onwards. The final stage, at which speech became relatively autonomous, may have occurred late in hominid evolution, perhaps with a mutation of the FOXP2 gene around 100,000 years ago.


Subjects
Gestures, Language Development, Language, Speech/physiology, Animals, Humans, Verbal Behavior
19.
Neuropsychologia; 44(2): 178-90, 2006.
Article in English | MEDLINE | ID: mdl-16005477

ABSTRACT

Humans speak and produce symbolic gestures. Do these two forms of communication interact, and how? First, we tested whether the two communication signals influence each other when emitted simultaneously. Participants either pronounced words, executed symbolic gestures, or emitted the two communication signals simultaneously. Relative to the unimodal conditions, multimodal voice spectra were enhanced by gestures, whereas multimodal gesture parameters were reduced by words. In other words, gesture reinforced word, whereas word inhibited gesture. In contrast, aimless arm movements and pseudo-words had no comparable effects. Next, we tested whether observing word pronunciation during gesture execution affected verbal responses in the same way as emitting the two signals. Participants responded verbally to spoken words, to gestures, or to the simultaneous presentation of the two signals. We observed the same reinforcement in the voice spectra as during simultaneous emission. These results suggest that spoken word and symbolic gesture are coded as a single signal by a unique communication system. This signal represents the intention to engage in closer interaction with a hypothetical interlocutor, and it may have a meaning different from that conveyed when word and gesture are encoded singly.


Subjects
Functional Laterality/physiology, Gestures, Manual Communication, Speech/physiology, Verbal Behavior/physiology, Adult, Analysis of Variance, Humans, Male, Reference Values
20.
Front Psychol; 7: 672, 2016.
Article in English | MEDLINE | ID: mdl-27242586

ABSTRACT

AIM: Do the emotional content and meaning of sentences affect the kinematics of subsequent motor sequences? MATERIAL AND METHODS: Participants observed video clips of an actor pronouncing sentences expressing positive or negative emotions and meanings (related to happiness or anger in Experiment 1 and to food admiration or food disgust in Experiment 2). Then, they reached to grasp a sugar lump and placed it on the actor's mouth. Participants acted in response to sentences whose content could convey (1) emotion (i.e., facial expression and prosody) and meaning, (2) meaning alone, or (3) emotion alone. Within each condition, the kinematic effects of sentences expressing positive and negative emotions were compared. RESULTS: In Experiment 1, the kinematics did not vary between positive and negative sentences either when the content was expressed by both emotion and meaning or when it was expressed by meaning alone. In contrast, when emotion alone was conveyed, sentences with positive valence made the approach to the conspecific faster. In Experiment 2, the valence of the emotions (positive for food admiration and negative for food disgust), conveyed by either emotion or meaning, similarly affected the kinematics of both grasp and reach, independently of the modality. DISCUSSION: The lack of an effect of meaning in Experiment 1 could be due to the weak relevance of the sentence meaning with respect to the motor sequence goal (feeding). Experiment 2 demonstrated that this was indeed the case, because when the meaning and the consequent emotion were related to the sequence goal, they affected the kinematics. In contrast, emotion alone activated approach toward or avoidance of the actor according to its positive or negative valence. The data suggest a behavioral dissociation between the effects of emotion and meaning.
