Results 1 - 20 of 82
1.
J Exp Psychol Gen; 153(7): 1904-1919, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38842887

ABSTRACT

The ecology of human communication is face-to-face. In these contexts, speakers dynamically modify their communication across vocal (e.g., speaking rate) and gestural (e.g., co-speech gestures related in meaning to the content of speech) channels while speaking. What is the function of these adjustments? Here we ask whether speakers make these adjustments dynamically to increase communicative success and decrease cognitive effort while speaking. We assess whether speakers modulate word durations and produce iconic gestures (i.e., gestures that imagistically evoke properties of referents) depending on the predictability of each word they utter. Predictability is operationalized as surprisal and computed from computational language models trained on corpora of child-directed or adult-directed language. Using data from a novel corpus (the Ecological Language Corpus) of naturalistic adult-child (children aged 3-4) and adult-adult interactions, we show that surprisal predicts speakers' multimodal adjustments and that some of these effects are modulated by whether the comprehender is a child or an adult. Thus, communicative efficiency applies generally across vocal and gestural communicative channels and is not limited to structural properties of language or to the vocal modality. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
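
As a rough illustration of the predictability measure described above, the sketch below computes per-word surprisal (in bits) from a causal language model. This is a minimal sketch, not the paper's pipeline: the study trained models on child-directed and adult-directed corpora, whereas here an off-the-shelf GPT-2 from the Hugging Face transformers library is assumed purely for illustration.

```python
# Minimal sketch: per-token surprisal from a causal language model.
# A generic pretrained GPT-2 is assumed here for illustration only.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(text: str):
    """Return (token, surprisal-in-bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    out = []
    for pos in range(1, ids.shape[1]):
        token_id = ids[0, pos].item()
        # surprisal(w) = -log2 P(w | preceding context)
        nats = -log_probs[0, pos - 1, token_id].item()
        out.append((tokenizer.decode([token_id]), nats / math.log(2)))
    return out

print(token_surprisals("the dog chased the ball"))
```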


Subjects
Gestures; Humans; Adult; Female; Male; Child, Preschool; Speech/physiology; Language; Communication
2.
Child Dev; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563146

ABSTRACT

Most language use is displaced, referring to past, future, or hypothetical events, posing the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues that imagistically evoke properties of absent referents support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects being talked about were either familiar or unfamiliar to the child, and either physically present or displaced. Analysis of the range of vocal, manual, and looking behaviors caregivers produced suggests that they used iconic cues especially in displaced contexts and for unfamiliar objects, relying on other cues when objects were present.

3.
Cogn Sci; 47(11): e13382, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38010057

ABSTRACT

Iconicity refers to a resemblance between word form and meaning. Previous work has shown that iconic words are learned earlier and processed faster. Here, we examined whether iconic words are recognized better in a recognition memory task. We also manipulated the level at which items were encoded (with a focus on either their meaning or their form) in order to gain insight into the mechanism by which iconicity might affect memory. In comparison with non-iconic words, iconic words were associated with a higher false alarm rate, a lower d' score, and a lower response criterion in Experiment 1. We did not observe any interaction between iconicity and encoding condition. To test the generalizability of these findings, we examined effects of iconicity in a recognition memory megastudy across 3880 items. After controlling for a variety of lexical and semantic variables, iconicity predicted more hits and false alarms, and a lower response criterion, in this dataset. In Experiment 2, we examined whether these effects were due to increased feelings of familiarity for iconic items by including a familiar-versus-recollect decision. This experiment replicated the overall results of Experiment 1 and found that participants were more likely to categorize words that they had seen before as familiar (vs. recollected) if the words were iconic. Together, these results demonstrate that iconicity has an effect on memory. We discuss implications for theories of iconicity.
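
For readers unfamiliar with the signal-detection terms above, here is a minimal sketch of how hit and false-alarm counts yield d' (sensitivity) and the response criterion c (bias). The 0.5-count correction is a standard way to keep z-scores finite; the counts are invented for illustration.

```python
# Minimal sketch of the signal-detection measures reported above.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    h = (hits + 0.5) / (hits + misses + 1)                              # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)  # corrected FA rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))  # lower c = more liberal responding
    return d_prime, criterion

# More false alarms (as found for iconic words) lowers both d' and c:
print(dprime_and_criterion(hits=40, misses=10, false_alarms=20, correct_rejections=30))
print(dprime_and_criterion(hits=40, misses=10, false_alarms=10, correct_rejections=40))
```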


Subjects
Learning; Semantics; Humans; Recognition, Psychology; Mental Recall; Emotions
4.
Sci Rep; 13(1): 20824, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38012193

ABSTRACT

In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that can aid linguistic processing. However, L2 comprehenders may also make less use of multimodal cues because linguistic processing is more demanding for them than for L1 comprehenders, leaving fewer resources for processing the multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether L1 and L2 comprehenders differ. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue than L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues (meaningful gestures that support semantic interpretation and mouth movements that enhance the acoustic signal) while using multimodal cues to a lesser extent than L1 comprehenders overall.


Subjects
Comprehension; Cues (Psychology); Humans; Male; Female; Comprehension/physiology; Electroencephalography; Evoked Potentials/physiology; Language
5.
J Cogn; 6(1): 63, 2023.
Article in English | MEDLINE | ID: mdl-37841673

ABSTRACT

Theories of embodied cognition postulate that perceptual, sensorimotor, and affective properties of concepts support language learning and processing. In this paper, we argue that language acquisition, as well as processing, is situated in addition to being embodied. In particular, first, it is the situated nature of initial language development that allows the developing system to become embodied. Second, the situated nature of language use changes across development and adulthood. We provide evidence from empirical studies for embodied effects of perception, action, and valence as they apply to both embodied cognition and situated cognition across developmental stages. Although the evidence is limited, we urge researchers to consider differentiating embodied cognition within situated contexts, in order to better understand how these separate mechanisms interact for learning to occur. This delineation also brings further clarity to the study of classroom-based applications and to the role of embodied and situated cognition in the study of developmental disorders. We argue that theories of language acquisition need to account for the complex situated context of real-world learning by completing a "circular notion": observing experimental paradigms in real-world settings and taking these observations to later refine lab-based experiments.

6.
Behav Res Methods; 2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37604959

ABSTRACT

Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been to manipulate their presence (e.g., by blurring the area of a speaker's lips) or to ask how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond the informativeness of single phonemes is challenging due to coarticulation and language variation (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2276 English words, varying in length, frequency, and age of acquisition, using the phonological distance between a word and participants' speechreading guesses. The results showed that the MaFI norms capture the dynamic nature of mouth and facial movements per word well: words containing phonemes with roundness and frontness features, as well as visemes characterized by lower-lip tuck, lip rounding, and lip closure, are visually more informative. We also showed that the more of these features a word contains, the more informative it is based on mouth and facial movements. Finally, we demonstrated that the MaFI norms generalize across different varieties of English. The norms are freely accessible via the Open Science Framework (https://osf.io/mna8j/) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).
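
The scoring idea behind MaFI, phonological distance between a target word and a speechreading guess, can be sketched as a length-normalized Levenshtein distance over phoneme sequences. This is a toy under stated assumptions: the published norms use proper phonological transcriptions and aggregate over many guessers, and the phoneme strings below are illustrative only.

```python
# Minimal sketch: normalized phoneme-level edit distance between a target
# word and a participant's speechreading guess (0 = identical, 1 = maximal).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def phonological_distance(target_phonemes, guess_phonemes):
    """Edit distance scaled by the longer phoneme sequence."""
    return levenshtein(target_phonemes, guess_phonemes) / max(
        len(target_phonemes), len(guess_phonemes))

# e.g., target /b ae t/ speechread as /p ae t/:
print(phonological_distance(["b", "ae", "t"], ["p", "ae", "t"]))  # 0.33...
```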

7.
Cortex; 165: 86-100, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37271014

ABSTRACT

Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 neurotypical matched control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette), with the audio either clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than for PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to the superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.


Subjects
Aphasia; Speech Perception; Humans; Speech; Comprehension; Aphasia/etiology; Aphasia/pathology; Temporal Lobe/pathology
8.
Psychon Bull Rev; 30(4): 1521-1529, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36520277

ABSTRACT

In contrast to the principle of arbitrariness, recent work has shown that language can iconically depict the referents being talked about. One such example is the maluma/takete effect: an association between certain phonemes (e.g., those in maluma) and round shapes, and between other phonemes (e.g., those in takete) and spiky shapes. An open question has been whether this association is crossmodal (arising from phonemes' sound or kinesthetics) or unimodal (arising from phonemes' visual appearance). In the latter case, individuals may associate a person's rounded lips, as they pronounce the /u/ in maluma, with round shapes. We examined this hypothesis by having participants pair nonwords with shapes in either an audio-only condition (they only heard the nonwords) or an audiovisual condition (they both heard the nonwords and saw them articulated). We found no evidence that seeing nonwords articulated enhanced the maluma/takete effect; in fact, there was evidence that in some cases it decreased the effect. This was confirmed with a Bayesian analysis. These results rule out one plausible explanation of the maluma/takete effect, namely that it is an instance of visual matching. We discuss the alternative possibility that it involves crossmodal associations.
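
A minimal sketch of the kind of Bayesian check reported above (quantifying support for the absence of an audiovisual enhancement), here via pingouin's default JZS Bayes-factor t-test. The data are synthetic placeholders, not the study's, and the two-condition comparison is a simplification of the published design.

```python
# Minimal sketch: Bayes factor for an audio-only vs. audiovisual comparison.
# Synthetic data for illustration only; BF10 < 1/3 is often read as
# moderate support for the null (no audiovisual enhancement).
import numpy as np
import pingouin as pg

rng = np.random.default_rng(2)
audio_only = rng.normal(0.70, 0.10, 30)   # proportion of expected pairings
audiovisual = rng.normal(0.68, 0.10, 30)

res = pg.ttest(audio_only, audiovisual)
print(res[["T", "p-val", "BF10"]])
```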


Subjects
Hearing; Language; Humans; Bayes Theorem; Sound
9.
Dev Sci; 26(4): e13357, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36464779

ABSTRACT

Child-directed language can support language learning, but how? We addressed two questions: (1) how do caregivers prosodically modulate their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); (2) do such modulations affect children's learning of unknown words and their vocabulary development? We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children, aged 3-4 years) both when the toys were present and when they were absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch, and intensity) of caregivers' productions of 6529 toy labels. We found that unknown labels were spoken with a significantly slower speaking rate and wider pitch and intensity ranges than known labels, especially at first mention, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used a slower speaking rate and a larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with a higher mean pitch when toys were present than when they were absent. Crucially, caregivers' mean pitch for unknown words, and the degree of mean pitch modulation for unknown relative to known words (pitch ratio), predicted children's immediate word learning and their vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.
RESEARCH HIGHLIGHTS:
- In naturalistic interactions, caregivers use a slower speaking rate and wider pitch and intensity ranges when introducing new labels to 3-4-year-old children, especially at first mention.
- Compared to when toys are present, caregivers speak more slowly and with a larger intensity range to mark the first mentions of toys that are physically absent.
- Mean pitch marking word familiarity predicts children's immediate word learning and future vocabulary size.
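
A minimal sketch of the pitch-ratio measure described above: each caregiver's mean pitch over unknown-word tokens relative to their mean pitch over known-word tokens. The column names and values are assumptions for illustration, not the ECOLANG coding scheme.

```python
# Minimal sketch: per-caregiver pitch ratio (unknown-word mean F0 divided by
# known-word mean F0); values > 1 mean unknown words get a higher pitch.
import pandas as pd

def pitch_ratio(tokens: pd.DataFrame) -> pd.Series:
    """tokens needs columns: caregiver, familiarity ('known'/'unknown'), mean_f0_hz."""
    by = tokens.groupby(["caregiver", "familiarity"])["mean_f0_hz"].mean().unstack()
    return by["unknown"] / by["known"]

tokens = pd.DataFrame({
    "caregiver": ["c1"] * 4 + ["c2"] * 4,
    "familiarity": ["known", "known", "unknown", "unknown"] * 2,
    "mean_f0_hz": [210, 205, 250, 240, 190, 200, 215, 225],
})
print(pitch_ratio(tokens))
```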


Subjects
Language Development; Vocabulary; Humans; Child; Child, Preschool; Child Language; Verbal Learning; Language; Speech
10.
Philos Trans R Soc Lond B Biol Sci; 378(1870): 20210357, 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36571126

ABSTRACT

Learning in humans is highly embedded in social interaction: from the very early stages of our lives, we form memories and acquire knowledge about the world from and with others. Yet, within cognitive science and neuroscience, human learning is mainly studied in isolation. Past research on learning has focused either exclusively on the learner or (less often) on the teacher, with the primary aim of determining developmental trajectories and/or effective teaching techniques. Social interaction has rarely been explicitly taken as a variable of interest, despite being the medium through which learning occurs, especially in development but also in adulthood. Here, we review behavioural and neuroimaging research on social human learning, focusing specifically on cognitive models of how we acquire semantic knowledge from and with others, and including both developmental and adult work. We then identify potential cognitive mechanisms that support social learning, and their neural correlates. The aim is to outline key new directions for experiments investigating how knowledge is acquired in its ecological niche, i.e., socially, within the framework of the two-person neuroscience approach. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.


Subjects
Social Interaction; Social Learning; Adult; Humans; Semantics
11.
Psychon Bull Rev; 29(2): 600-612, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34671936

ABSTRACT

Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word we also measured the informativeness of the mouth movements in a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise-vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent), suggesting an enhancement when both cues are present relative to just one. We also observed a trend whereby more informative mouth movements sped up word recognition across clarity conditions, but only when gestures were absent. We conclude that listeners use, and dynamically weight, the informativeness of gestures and mouth movements available during face-to-face communication.


Subjects
Gestures; Speech Perception; Comprehension; Humans; Lipreading; Speech
12.
Sci Rep; 11(1): 22587, 2021 Nov 19.
Article in English | MEDLINE | ID: mdl-34799624

ABSTRACT

Concrete conceptual knowledge is supported by a distributed neural network representing different semantic features according to the neuroanatomy of sensory and motor systems. If and how this framework applies to abstract knowledge is currently debated. Here we investigated the specific brain correlates of different abstract categories. After a systematic a priori selection of brain regions involved in semantic cognition (responsible, respectively, for semantic representations and for cognitive control), we used an fMRI-adaptation paradigm with a passive reading task in order to modulate the neural response to abstract (emotions, cognitions, attitudes, human actions) and concrete (biological entities, artefacts) categories. Different portions of the left anterior temporal lobe responded selectively to abstract and concrete concepts. Emotions and attitudes adapted the left middle temporal gyrus, whereas concrete items adapted the left fusiform gyrus. Our results suggest that, similarly to concrete concepts, some categories of abstract knowledge have specific brain correlates corresponding to the prevalent semantic dimensions involved in their representation.


Subjects
Brain/diagnostic imaging; Language; Magnetic Resonance Imaging/methods; Adult; Brain Mapping; Cognition; Concept Formation; Female; Humans; Italy; Knowledge; Male; Reading; Reproducibility of Results; Semantics; Temporal Lobe/physiology; Young Adult
13.
Curr Biol; 31(21): 4853-4859.e3, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34525343

ABSTRACT

Human learning is highly social [1-3]. Advances in technology have increasingly moved learning online, and the recent coronavirus disease 2019 (COVID-19) pandemic has accelerated this trend. Online learning can vary in terms of how "socially" the material is presented (e.g., live or recorded), but there are limited data on which is most effective, with the majority of studies conducted on children [4-8] and inconclusive results on adults [9, 10]. Here, we examine how young adults (aged 18-35) learn information about unknown objects, systematically varying the social contingency (live versus recorded lecture) and social richness (viewing the teacher's face, hands, or slides) of the learning episodes. Recall was tested immediately and after 1 week. Experiment 1 (n = 24) showed better learning for live presentation and a full view of the teacher (hands and face). Experiment 2 (n = 27; pre-registered) replicated the live-presentation advantage. Both experiments showed an interaction between social contingency and social richness: the presence of social cues affected learning differently depending on whether teaching was interactive or not. Live social interaction with a full view of the teacher's face provided the optimal setting for learning new factual information. However, during observational learning, social cues may be more cognitively demanding [11] and/or distracting [12-14], resulting in less learning from rich social information if there is no interactivity. We suggest that being part of a genuine social interaction catalyzes learning, possibly via mechanisms of joint attention [15], common ground [16], or (inter-)active discussion, and as such, interactive learning benefits from rich social settings [17, 18].


Subjects
Education, Distance; Social Interaction; Adolescent; Adult; Attention; Humans; Mental Recall; Young Adult
14.
J Cogn; 4(1): 38, 2021.
Article in English | MEDLINE | ID: mdl-34514309

ABSTRACT

In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness, mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning. Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in speech (for spoken languages) or in the manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited in that it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gestures, eye gaze, etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use, and we discuss their function. We then argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.

16.
Proc Biol Sci; 288(1955): 20210500, 2021 Jul 28.
Article in English | MEDLINE | ID: mdl-34284631

ABSTRACT

The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures, and mouth movements. Yet, this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies we presented video clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified the multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (the N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, brain responses were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of the other cues. Thus, the results show that multimodal cues are integral to comprehension; our theories must therefore move beyond the limited focus on speech and linguistic processing.
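
The analysis logic described above, regressing word-level N400 amplitude on the informativeness of each cue plus their interactions, with participants as a grouping factor, can be sketched with statsmodels. The variable names and synthetic data are assumptions for illustration; the published models are more elaborate.

```python
# Minimal sketch: mixed-effects regression of single-word N400 amplitude on
# multimodal cue informativeness, with by-participant random intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "subject": rng.integers(1, 21, n),  # 20 hypothetical participants
    "prosody": rng.normal(size=n),      # cue informativeness per word
    "gesture": rng.normal(size=n),
    "mouth": rng.normal(size=n),
})
# Synthetic outcome: more informative cues -> reduced N400 (toy effect sizes)
df["n400"] = -0.5 * df["prosody"] - 0.3 * df["gesture"] + rng.normal(size=n)

model = smf.mixedlm("n400 ~ prosody * gesture * mouth", df, groups=df["subject"])
result = model.fit()
print(result.summary())  # interaction terms test whether one cue's impact
                         # depends on the informativeness of the others
```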


Subjects
Comprehension; Speech Perception; Electroencephalography; Evoked Potentials; Female; Gestures; Humans; Language; Male; Speech
17.
J Exp Psychol Gen; 150(11): 2293-2308, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34978840

ABSTRACT

Iconicity is the property whereby signs (vocal or manual) resemble their referents. Iconic signs are easy to relate to the world, facilitating learning and processing. In this study, we examined whether the benefits of iconicity would lead to its emergence and maintenance in language. We focused on shape iconicity (the association between rounded objects and round-sounding words like "bouba" and between spiky objects and spiky-sounding words like "kiki") and motion iconicity (the association between longer words and longer events). In Experiment 1, participants generated novel labels for round versus spiky shapes and for long versus short movements (Experiment 1a: text, Experiment 1b: speech). Labels for each kind of stimulus differed in ways consistent with previous studies of iconicity, suggesting that iconicity emerges even in a completely unconstrained task. In Experiment 2 (Experiment 2a: text, Experiment 2b: speech), we simulated language change in the laboratory (as iterated learning) and found that both forms of iconicity were introduced and maintained across generations of language users. Thus, we demonstrate the emergence of iconicity in spoken languages, and we argue that these results reflect a pressure for language systems to be referential, which favors iconic forms in the cultural evolution of language (at least up to the point where it is balanced by other pressures, e.g., discriminability). This can explain why we find iconicity across natural languages and may have implications for debates on language origins. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
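
A toy sketch of the iterated-learning paradigm used in Experiment 2: a label is passed along a chain of learners, each copying it with noise, and an iconicity score is tracked across generations. The phoneme sets, the congruency-biased copying noise (a stand-in for a learnability advantage of iconic forms), and the scoring below are illustrative assumptions, not the paper's procedure.

```python
# Toy iterated-learning chain: biased transmission lets shape-congruent
# phonemes accumulate across generations.
import random

ROUND, SPIKY = "bmuol", "ktie"  # assumed round-/spiky-sounding phoneme pools

def iconicity(label, shape):
    """Fraction of phonemes congruent with the shape (1.0 = fully iconic)."""
    pool = ROUND if shape == "round" else SPIKY
    return sum(ch in pool for ch in label) / len(label)

def reproduce(label, shape, noise=0.2):
    """One generation: copy the label with phoneme-level errors; congruent
    phonemes mutate at half the rate (the toy learnability bias)."""
    out = []
    for ch in label:
        congruent = ch in (ROUND if shape == "round" else SPIKY)
        if random.random() < (noise / 2 if congruent else noise):
            ch = random.choice(ROUND + SPIKY)
        out.append(ch)
    return "".join(out)

random.seed(1)
shape = "round"
label = "".join(random.choice(ROUND + SPIKY) for _ in range(6))  # generation 0
for gen in range(10):
    print(gen, label, round(iconicity(label, shape), 2))
    label = reproduce(label, shape)
```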


Subjects
Cultural Evolution; Language; Humans; Language Development; Learning; Speech
18.
Neuropsychologia; 150: 107703, 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33307100

ABSTRACT

We investigated the neural basis of newly learned words in Spanish as a mother tongue (L1) and English as a second language (L2). Participants acquired new names for real but unfamiliar concepts in both languages over the course of two days. On day 3, they completed a semantic categorization task during fMRI scanning. The results revealed largely overlapping brain regions for newly learned words in Spanish and English. However, Spanish showed a heightened BOLD response within prefrontal cortex (PFC), due to increased competition from existing lexical representations. In contrast, English displayed higher activity than Spanish within primary auditory cortex, suggesting increased phonological processing due to more irregular phonological-orthographic mappings. Overall, these results suggest that novel words are learned similarly in Spanish L1 and English L2 and are represented in largely overlapping brain regions, while differing in terms of cognitive control and phonological processing.


Subjects
Language; Multilingualism; Brain Mapping; Humans; Semantics; Verbal Learning
19.
Dev Sci; 24(3): e13066, 2021 May.
Article in English | MEDLINE | ID: mdl-33231339

ABSTRACT

A key question in developmental research concerns how children learn associations between words and meanings in their early language development. Given a vast array of possible referents, how does the child know what a word refers to? We contend that onomatopoeia (e.g., knock, meow), where a word's sound evokes the sound properties associated with its meaning, are particularly useful in children's early vocabulary development, offering a link between word and sensory experience not present in arbitrary forms. We suggest that, because onomatopoeia evoke imagery of the referent, children can draw on sensory experience to easily link onomatopoeic words to meaning, both when the referent is present and when it is absent. We use two sources of data: naturalistic observations of English-speaking caregiver-child interactions from 14 to 54 months, to establish whether these words are present early in caregivers' speech to children, and experimental data to test whether English-speaking children can learn from onomatopoeia when it is present. Our results demonstrate that onomatopoeia: (a) are most prevalent in early child-directed language and in children's early productions, (b) are learnt more easily by children than non-iconic forms, and (c) are used by caregivers in contexts where they can support communication and facilitate word learning.


Subjects
Language Development; Symbolism; Child; Humans; Language; Verbal Learning; Vocabulary
20.
Cortex; 133: 309-327, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33161278

ABSTRACT

Hand gestures, imagistically related to the content of speech, are ubiquitous in face-to-face communication. Here we used lesion-symptom mapping to investigate how people with aphasia (PWA) process speech accompanied by gestures. Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only, congruent speech-gesture, and incongruent speech-gesture. Participants' task was to indicate, in different blocks, whether the picture and the word matched (speech task), or whether the picture and the gesture matched (gesture task). Multivariate lesion analysis with Support Vector Regression Lesion-Symptom Mapping (SVR-LSM) showed that the benefit from congruent speech-gesture was associated with (1) lesioned voxels in anterior fronto-temporal regions, including inferior frontal gyrus (IFG), and sparing of posterior temporal cortex and lateral temporal-occipital regions (pTC/LTO) for the speech task, and (2) conversely, lesions to pTC/LTO and sparing of anterior regions for the gesture task. The two tasks did not share overlapping voxels. Costs from incongruent speech-gesture pairings were associated with lesioned voxels in these same anterior (for the speech task) and posterior (for the gesture task) regions, but, crucially, also with shared voxels in superior temporal gyrus (STG) and middle temporal gyrus (MTG), including the anterior temporal lobe. These results suggest that IFG and pTC/LTO contribute to extracting semantic information from speech and gesture, respectively; however, they are not causally involved in integrating information from the two modalities. In contrast, regions in anterior STG/MTG are associated with performance in both tasks and may thus be critical to speech-gesture integration. These conclusions are further supported by associations between performance in the experimental tasks and performance in tests assessing lexical-semantic processing and gesture recognition.


Subjects
Comprehension; Stroke; Brain Mapping; Gestures; Humans; Magnetic Resonance Imaging; Speech; Stroke/complications; Stroke/diagnostic imaging; Temporal Lobe