Results 1 - 20 of 61
1.
Cereb Cortex ; 30(11): 5821-5829, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32537630

ABSTRACT

How do humans compute approximate number? According to one influential theory, approximate number representations arise in the intraparietal sulcus and are amodal, meaning that they arise independent of any sensory modality. Alternatively, approximate number may be computed initially within sensory systems. Here we tested for sensitivity to approximate number in the visual system using steady-state visual evoked potentials. We recorded electroencephalography from humans while they viewed dot clouds presented at 30 Hz, which alternated in numerosity (ranging from 10 to 20 dots) at 15 Hz. At this rate, each dot cloud backward masked the previous one, disrupting top-down feedback to visual cortex and preventing conscious awareness of the dot clouds' numerosities. Spectral amplitude at 15 Hz measured over the occipital lobe (Oz) correlated positively with the numerical ratio of the stimuli, even when nonnumerical stimulus attributes were controlled, indicating that subjects' visual systems were differentiating dot clouds on the basis of their numerical ratios. Crucially, subjects were unable to discriminate the numerosities of the dot clouds consciously, indicating that the backward masking of the stimuli disrupted reentrant feedback to visual cortex. Approximate number appears to be computed within the visual system, independently of higher-order areas, such as the intraparietal sulcus.
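The frequency-tagging logic of this design can be sketched in a few lines: compute the amplitude spectrum of the recording and read it out at the tagged frequency. This is a minimal illustration on synthetic data, not the authors' pipeline; the sampling rate and signal parameters are assumptions.

```python
import numpy as np

fs = 512                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of recording

# Synthetic "EEG" at Oz: a 15 Hz response embedded in noise, standing in
# for a response at the numerosity-alternation frequency described above.
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 15 * t) + rng.normal(0, 1, t.size)

# Amplitude spectrum via FFT; a frequency-tagged response appears as a
# peak at exactly the stimulation frequency.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size

amp_15hz = amp[np.argmin(np.abs(freqs - 15.0))]
print(f"amplitude at 15 Hz: {amp_15hz:.2f}")
```

In the actual experiment, this 15 Hz readout would then be compared across conditions that differ in numerical ratio.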


Subjects
Visual Evoked Potentials/physiology; Mathematical Concepts; Visual Cortex/physiology; Adult; Consciousness/physiology; Electroencephalography; Female; Humans; Male; Photic Stimulation; Visual Perception/physiology
3.
Proc Natl Acad Sci U S A ; 114(24): 6352-6357, 2017 06 13.
Article in English | MEDLINE | ID: mdl-28559320

ABSTRACT

Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (~8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
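The coherence analysis described above can be sketched as magnitude-squared coherence between an EEG channel and a stimulus time series. A minimal sketch on synthetic stand-in signals; the sampling rate, signal shapes, and window length are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                       # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # one minute of paired signals
rng = np.random.default_rng(1)

# Synthetic stand-ins: a slow (~1 Hz) visual-change signal and an "EEG"
# channel that partially tracks its rhythm, plus independent noise.
visual_change = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.normal(size=t.size)
eeg = 0.8 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(size=t.size)

# Welch-averaged magnitude-squared coherence, the statistic used to
# quantify cortical tracking of a stimulus as a function of frequency.
freqs, coh = coherence(eeg, visual_change, fs=fs, nperseg=4 * fs)

coh_1hz = coh[np.argmin(np.abs(freqs - 1.0))]
print(f"coherence at 1 Hz: {coh_1hz:.2f}")
```

Because the two signals share only the slow rhythm, coherence is elevated near 1 Hz and falls to chance level at unrelated frequencies.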


Subjects
Sign Language; Visual Cortex/physiology; Adult; Cerebral Cortex/physiology; Electroencephalography; Electrophysiological Phenomena; Female; Humans; Language Tests; Male; Photic Stimulation; Video Recording; Visual Perception/physiology; Young Adult
4.
Behav Res Methods ; 52(4): 1744-1767, 2020 08.
Article in English | MEDLINE | ID: mdl-32185639

ABSTRACT

Many studies use manual action verbs to test whether people use neural systems for controlling manual actions to understand language about those actions. Yet, few of these studies empirically establish how people use their hands to perform the actions described by those verbs, relying instead on explicit self-report measures. Here, participants pantomimed the manual actions described by a large set of Dutch (N = 251) and English (N = 250) verbs, allowing us to approximate the extent to which people use each of their hands to perform these actions. After the pantomime task, participants also provided explicit ratings of each of these actions. The results from the pantomime task showed that most manual actions cannot be described accurately as either "unimanual" or "bimanual." With a few exceptions, unimanual action verbs do not describe actions that are performed with only one hand, and bimanual verbs do not describe actions that are performed by using both hands equally. Instead, individual actions vary continuously in the extent to which people use their non-dominant hand to perform them, and in the extent to which people consistently prefer one hand or the other to perform them. Finally, by comparing participants' implicit behavior to their explicit ratings, we found that participants' self-report showed only limited correspondence with their observed motor behavior. We provide all of our measures in both raw and summary format, offering researchers a precision tool for constructing stimulus sets for experiments on embodied cognition.


Subjects
Language; Psychomotor Performance; Cognition; Comprehension; Hand; Humans
5.
Psychol Sci ; 25(6): 1256-61, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24899170

ABSTRACT

People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be "high" or "low" (i.e., height-pitch association), whereas in other languages, pitches are described as "thin" or "thick" (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people's nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants' sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.


Subjects
Pitch Perception/physiology; Attention/physiology; Cross-Cultural Comparison; Female; Humans; Infant; Language; Language Development; Male; Psychophysics/methods; Speech Perception/physiology
6.
Psychol Sci ; 25(9): 1682-90, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25052830

ABSTRACT

In Arabic, as in many languages, the future is "ahead" and the past is "behind." Yet in the research reported here, we showed that Arabic speakers tend to conceptualize the future as behind and the past as ahead of them, despite using spoken metaphors that suggest the opposite. We propose a new account of how space-time mappings become activated in individuals' minds and entrenched in their cultures, the temporal-focus hypothesis: People should conceptualize either the future or the past as in front of them to the extent that their culture (or subculture) is future oriented or past oriented. Results support the temporal-focus hypothesis, demonstrating that the space-time mappings in people's minds are conditioned by their cultural attitudes toward time, that they depend on attentional focus, and that they can vary independently of the space-time mappings enshrined in language.


Subjects
Arabs/ethnology; Culture; Language; Metaphor; Time; White People/ethnology; Adolescent; Adult; Aged; Aged, 80 and over; Cross-Cultural Comparison; Female; Forecasting; Humans; Male; Middle Aged; Morocco/ethnology; Spain; Young Adult
7.
Cognition ; 250: 105855, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38865912

ABSTRACT

People are more likely to gesture when their speech is disfluent. Why? According to an influential proposal, speakers gesture when they are disfluent because gesturing helps them to produce speech. Here, we test an alternative proposal: People may gesture when their speech is disfluent because gestures serve as a pragmatic signal, telling the listener that the speaker is having problems with speaking. To distinguish between these proposals, we tested the relationship between gestures and speech disfluencies when listeners could see speakers' gestures and when they were prevented from seeing their gestures. If gesturing helps speakers to produce words, then the relationship between gesture and disfluency should persist regardless of whether gestures can be seen. Alternatively, if gestures during disfluent speech are pragmatically motivated, then the tendency to gesture more when speech is disfluent should disappear when the speaker's gestures are invisible to the listener. Results showed that speakers were more likely to gesture when their speech was disfluent, but only when the listener could see their gestures and not when the listener was prevented from seeing them, supporting a pragmatic account of the relationship between gestures and disfluencies. People tend to gesture more when speaking is difficult, not because gesturing facilitates speech production, but rather because gestures comment on the speaker's difficulty presenting an utterance to the listener.


Subjects
Gestures; Speech; Humans; Speech/physiology; Female; Male; Adult; Young Adult; Speech Perception/physiology
8.
Neuropsychologia ; 196: 108832, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38395339

ABSTRACT

Embodied cognition theories predict a functional involvement of sensorimotor processes in language understanding. In a preregistered experiment, we tested this idea by investigating whether interfering with primary motor cortex (M1) activation can change how people construe meaning from action language. Participants were presented with sentences describing actions (e.g., "turning off the light") and asked to choose between two interpretations of their meaning, one more concrete (e.g., "flipping a switch") and another more abstract (e.g., "going to sleep"). Prior to this task, participants' M1 was disrupted using repetitive transcranial magnetic stimulation (rTMS). The results yielded strong evidence against the idea that M1-rTMS affects meaning construction (BF01 > 30). Additional analyses and control experiments suggest that the absence of an effect cannot be accounted for by failure to inhibit M1, lack of construct validity of the task, or lack of power to detect a small effect. In sum, these results do not support a causal role for primary motor cortex in building meaning from action language.


Subjects
Motor Cortex; Transcranial Magnetic Stimulation; Humans; Transcranial Magnetic Stimulation/methods; Motor Cortex/physiology; Language; Cognition
9.
Psychol Sci ; 24(5): 613-21, 2013 May.
Article in English | MEDLINE | ID: mdl-23538914

ABSTRACT

Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (nazok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers' performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.


Subjects
Linguistics/methods; Metaphor; Music/psychology; Pitch Discrimination/physiology; Psychophysics/methods; Cross-Cultural Comparison; Humans; Iran; Language; Netherlands; Verbal Learning/physiology
10.
Cogn Sci ; 47(1): e13239, 2023 01.
Article in English | MEDLINE | ID: mdl-36633912

ABSTRACT

In addition to the many easily observable differences between people, there are also differences in people's subjective experiences that are harder to observe and which, as a consequence, remain hidden. For example, people vary widely in how much visual imagery they experience. But those who cannot see in their mind's eye tend to assume that everyone is like them, and those who can assume that everyone else can as well. We argue that a study of such hidden phenomenal differences has much to teach cognitive science. Uncovering and describing this variation (a search for unknown unknowns) may help predict otherwise puzzling differences in human behavior. The very existence of certain differences can also act as a stress test for some cognitive theories. Finally, studying hidden phenomenal differences is the first step toward understanding what kinds of environments may mask or unmask links between phenomenal experience and observable behavior.


Subjects
Cognition; Visual Perception; Humans; Cognitive Science
11.
J Cogn Neurosci ; 24(11): 2237-47, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22849399

ABSTRACT

Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word "grasp" elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59-70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416-423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and that reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance "It is hot here!" in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement.
The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.


Subjects
Acoustic Stimulation/methods; Motor Cortex/physiology; Nerve Net/physiology; Photic Stimulation/methods; Psychomotor Performance/physiology; Theory of Mind/physiology; Adolescent; Female; Humans; Magnetic Resonance Imaging/methods; Young Adult
12.
Front Psychol ; 13: 1019957, 2022.
Article in English | MEDLINE | ID: mdl-36483703

ABSTRACT

People use space (e.g., left-right, up-down) to think about a variety of non-spatial concepts like time, number, similarity, and emotional valence. These spatial metaphors can be used to inform the design of user interfaces, which visualize many of these concepts in space. Traditionally, researchers have relied on patterns in language to discover habits of metaphorical thinking. However, advances in cognitive science have revealed that many spatial metaphors remain unspoken, shaping people's preferences, memories, and actions independent of language - and even in contradiction to language. Here we argue that cognitive science can impact our everyday lives by informing the design of physical and digital objects via the spatial metaphors in people's minds. We propose a simple principle for predicting which spatial metaphors organize people's non-spatial concepts based on the structure of their linguistic, cultural, and bodily experiences. By leveraging the latent metaphorical structure of people's minds, we can design objects and interfaces that help people think.

13.
Cogn Sci ; 46(2): e13108, 2022 02.
Article in English | MEDLINE | ID: mdl-35174896

ABSTRACT

According to proponents of the generalized magnitude system proposal (GMS), SNARC-like effects index spatial mappings of magnitude and provide crucial evidence for the existence of a GMS. Casasanto and Pitt (2019) have argued that these effects, instead, reflect mappings of ordinality, which people compute on the basis of differences among stimuli that vary either qualitatively (e.g., musical pitches) or quantitatively (e.g., dots of different sizes). In response to our paper, Prpic et al. (2021) argued that both magnitude and ordinality play a role in SNARC-like effects. Here, we address each of their arguments and conclude that magnitude is relevant to these effects only insofar as it serves as a basis for ordinality. For this reason and others, SNARC or SNARC-like effects cannot provide evidence for the putative generalized magnitude system.


Subjects
Space Perception; Humans; Space Perception/physiology
14.
J Exp Psychol Gen ; 151(6): 1252-1271, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34855443

ABSTRACT

Why do people gesture when they speak? According to one influential proposal, the Lexical Retrieval Hypothesis (LRH), gestures serve a cognitive function in speakers' minds by helping them find the right spatial words. Do gestures also help speakers find the right words when they talk about abstract concepts that are spatialized metaphorically? If so, then preventing people from gesturing should increase the rate of disfluencies during speech about both literal and metaphorical space. Here, we sought to conceptually replicate the finding that preventing speakers from gesturing increases disfluencies in speech with literal spatial content (e.g., the rocket went up), which has been interpreted as evidence for the LRH, and to extend this pattern to speech with metaphorical spatial content (e.g., my grades went up). Across three measures of speech disfluency (disfluency rate, speech rate, and rate of nonjuncture filled pauses), we found no difference in disfluency between speakers who were allowed to gesture freely and speakers who were not allowed to gesture, for any category of speech (literal spatial content, metaphorical spatial content, and no spatial content). This large dataset (7,969 phrases containing 2,075 disfluencies) provided no support for the idea that gestures help speakers find the right words, even for speech with literal spatial content. Upon reexamining studies cited as evidence for the LRH and related proposals over the past 5 decades, we conclude that there is, in fact, no reliable evidence that preventing gestures impairs speaking. Together, these findings challenge long-held beliefs about why people gesture when they speak.
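One of the disfluency measures named above, the rate of filled pauses, reduces to a count normalized by speech length. The sketch below illustrates the arithmetic; the transcript and the filler inventory are invented for illustration, not the authors' coding scheme.

```python
import re

# Hypothetical transcript and an assumed inventory of filled pauses.
transcript = "the rocket went um up and uh my grades went up you know"
fillers = {"um", "uh", "er"}

# Tokenize, count filled pauses, and normalize per 100 words.
words = re.findall(r"[a-z']+", transcript.lower())
n_fillers = sum(w in fillers for w in words)
rate_per_100_words = 100 * n_fillers / len(words)
print(f"{n_fillers} filled pauses in {len(words)} words "
      f"-> {rate_per_100_words:.1f} per 100 words")
```

A per-word (or per-phrase) normalization like this is what allows disfluency rates to be compared between gesture-allowed and gesture-prevented conditions that differ in total speech output.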


Subjects
Gestures; Speech; Humans; Metaphor
15.
Psychol Sci ; 22(4): 419-22, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21389336

ABSTRACT

Right- and left-handers implicitly associate positive ideas like "goodness" and "honesty" more strongly with their dominant side of space, the side on which they can act more fluently, and negative ideas more strongly with their nondominant side. Here we show that right-handers' tendency to associate "good" with "right" and "bad" with "left" can be reversed as a result of both long- and short-term changes in motor fluency. Among patients who were right-handed prior to unilateral stroke, those with disabled left hands associated "good" with "right," but those with disabled right hands associated "good" with "left," as natural left-handers do. A similar pattern was found in healthy right-handers whose right or left hand was temporarily handicapped in the laboratory. Even a few minutes of acting more fluently with the left hand can change right-handers' implicit associations between space and emotional valence, causing a reversal of their usual judgments. Motor experience plays a causal role in shaping abstract thought.


Subjects
Concept Formation; Functional Laterality; Motor Skills; Hemiplegia/physiopathology; Hemiplegia/psychology; Humans; Neuronal Plasticity; Paresis/physiopathology; Paresis/psychology; Stroke/physiopathology; Stroke/psychology
16.
Psychol Sci ; 22(7): 849-54, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21705521

ABSTRACT

Does language comprehension depend, in part, on neural systems for action? In previous studies, motor areas of the brain were activated when people read or listened to action verbs, but it remains unclear whether such activation is functionally relevant for comprehension. In the experiments reported here, we used off-line theta-burst transcranial magnetic stimulation to investigate whether a causal relationship exists between activity in premotor cortex and action-language understanding. Right-handed participants completed a lexical decision task, in which they read verbs describing manual actions typically performed with the dominant hand (e.g., "to throw," "to write") and verbs describing nonmanual actions (e.g., "to earn," "to wander"). Responses to manual-action verbs (but not to nonmanual-action verbs) were faster after stimulation of the hand area in left premotor cortex than after stimulation of the hand area in right premotor cortex. These results suggest that premotor cortex has a functional role in action-language understanding.


Subjects
Comprehension/physiology; Language; Motor Cortex/physiology; Transcranial Magnetic Stimulation; Adult; Female; Functional Laterality/physiology; Humans; Male; Movement/physiology; Reaction Time; Young Adult
17.
Sci Adv ; 7(33), 2021 Aug.
Article in English | MEDLINE | ID: mdl-34380617

ABSTRACT

In industrialized groups, adults implicitly map numbers, time, and size onto space according to cultural practices like reading and counting (e.g., from left to right). Here, we tested the mental mappings of the Tsimane', an indigenous population with few such cultural practices. Tsimane' adults spatially arranged number, size, and time stimuli according to their relative magnitudes but showed no directional bias for any domain on any spatial axis; different mappings went in different directions, even in the same participant. These findings challenge claims that people have an innate left-to-right mapping of numbers and that these mappings arise from a domain-general magnitude system. Rather, the direction-specific mappings found in industrialized cultures may originate from direction-agnostic mappings that reflect the correlational structure of the natural world.

18.
eNeuro ; 8(4), 2021.
Article in English | MEDLINE | ID: mdl-34341067

ABSTRACT

How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.


Subjects
Electroencephalography; Speech; Attention; Brain; Humans; Periodicity
19.
J Cogn Neurosci ; 22(10): 2387-400, 2010 Oct.
Article in English | MEDLINE | ID: mdl-19925195

ABSTRACT

According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of "mental simulation" should be distinguished from "mental imagery" in embodied theories of language.


Subjects
Brain Mapping; Cerebral Cortex/physiology; Comprehension/physiology; Imagination/physiology; Language; Verbal Behavior/physiology; Adult; Cerebral Cortex/blood supply; Evoked Potentials, Motor/physiology; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Male; Movement/physiology; Oxygen/blood; Photic Stimulation/methods; Psychomotor Performance; Reaction Time/physiology
20.
Psychol Sci ; 21(1): 67-74, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20424025

ABSTRACT

According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one's own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.


Subjects
Echo-Planar Imaging; Frontal Lobe/physiology; Functional Laterality/physiology; Image Processing, Computer-Assisted; Imagination/physiology; Magnetic Resonance Imaging; Motor Cortex/physiology; Psychomotor Performance/physiology; Reading; Semantics; Unconscious (Psychology); Adult; Brain Mapping; Decision Making/physiology; Female; Humans; Male; Reaction Time/physiology; Young Adult