Results 1 - 20 of 20
1.
Nat Rev Neurosci ; 20(7): 425-434, 2019 07.
Article in English | MEDLINE | ID: mdl-30918365

ABSTRACT

There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Animals, Humans
2.
Brain ; 142(3): 808-822, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30698656

ABSTRACT

Conversation is an important and ubiquitous social behaviour. Individuals with autism spectrum disorder (autism) without intellectual disability often have normal structural language abilities but deficits in social aspects of communication such as pragmatics, prosody, and eye contact. Previous studies of resting-state activity suggest that intrinsic connections among neural circuits involved in social processing are disrupted in autism, but to date no neuroimaging study has examined neural activity during the most commonplace yet challenging social task: spontaneous conversation. Here we used functional MRI to scan autistic males (n = 19) without intellectual disability and age- and IQ-matched typically developing control subjects (n = 20) while they engaged in a total of 193 face-to-face interactions. Participants completed two kinds of tasks: conversation, which had high social demand, and repetition, which had low social demand. Autistic individuals showed abnormally increased task-driven interregional temporal correlation relative to controls, especially among social processing regions and during high social demand. Furthermore, these increased correlations were associated with parent ratings of participants' social impairments. These results were then compared with previously acquired resting-state data (56 autism, 62 control subjects). While some interregional correlation levels varied by task or rest context, others were strikingly similar across both task and rest, namely increased correlation among the thalamus, dorsal and ventral striatum, somatomotor, temporal and prefrontal cortex in the autistic individuals relative to the control groups. These results suggest a basic distinction: autistic cortico-cortical interactions vary by context, tending to increase relative to controls during task and decrease during rest. In contrast, striato- and thalamocortical relationships with socially engaged brain regions are increased in both task and rest, and may be core to the condition of autism.


Subjects
Autism Spectrum Disorder/physiopathology, Interpersonal Relations, Verbal Behavior/physiology, Adolescent, Adult, Autistic Disorder/physiopathology, Brain/physiopathology, Brain Mapping/methods, Communication, Humans, Magnetic Resonance Imaging/methods, Male, Neural Pathways/physiopathology, Prefrontal Cortex/physiopathology, Rest, Social Behavior, Social Skills, Young Adult
3.
J Neurosci ; 36(17): 4669-80, 2016 04 27.
Article in English | MEDLINE | ID: mdl-27122026

ABSTRACT

UNLABELLED: Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the couple spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and also whether the voice the participant heard was "live" (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and "live," we observed a lack of the suppression of auditory cortex that is commonly seen as a neural correlate of speech production. Instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other-, rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool. SIGNIFICANCE STATEMENT: Synchronized human behaviors, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI brain imaging to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech seen in chants, prayers, and pledges).
Results showed that joint speech recruits additional right hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing.


Subjects
Brain/physiology, Social Perception, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male
4.
Psychon Bull Rev ; 30(1): 373-382, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35915382

ABSTRACT

Segmental speech units such as phonemes are described as multidimensional categories whose perception involves contributions from multiple acoustic input dimensions, and the relative perceptual weights of these dimensions respond dynamically to context. For example, when speech is altered to create an "accent" in which two acoustic dimensions are correlated in a manner opposite that of long-term experience, the dimension that carries less perceptual weight is down-weighted to contribute less in category decisions. It remains unclear, however, whether this short-term reweighting extends to perception of suprasegmental features that span multiple phonemes, syllables, or words, in part because it has remained debatable whether suprasegmental features are perceived categorically. Here, we investigated the relative contribution of two acoustic dimensions to word emphasis. Participants categorized instances of a two-word phrase pronounced with typical covariation of fundamental frequency (F0) and duration, and in the context of an artificial "accent" in which F0 and duration (established in prior research on English speech as "primary" and "secondary" dimensions, respectively) covaried atypically. When categorizing "accented" speech, listeners rapidly down-weighted the secondary dimension (duration). This result indicates that listeners continually track short-term regularities across speech input and dynamically adjust the weight of acoustic evidence for suprasegmental decisions. Thus, dimension-based statistical learning appears to be a widespread phenomenon in speech perception extending to both segmental and suprasegmental categorization.
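The cue-reweighting logic described above can be sketched computationally. The snippet below is a toy simulation, not the study's analysis: the listener parameters and the correlation-based weighting scheme are illustrative assumptions. It estimates relative perceptual weights for F0 and duration from simulated binary categorization responses, and shows the duration weight dropping when the "accent" makes that cue uninformative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2AFC categorization: stimuli vary along two acoustic
# dimensions, F0 (primary cue) and duration (secondary cue).
n = 400
f0 = rng.uniform(-1, 1, n)   # standardized pitch excursion
dur = rng.uniform(-1, 1, n)  # standardized duration difference

def simulate_listener(w_f0, w_dur, noise=0.5):
    """Binary 'first word emphasized' responses for a listener who
    weights the two cues by w_f0 and w_dur (hypothetical values)."""
    evidence = w_f0 * f0 + w_dur * dur + rng.normal(0, noise, n)
    return (evidence > 0).astype(float)

def cue_weights(resp):
    """Relative cue weights as normalized point-biserial correlations
    between each dimension and the binary response."""
    w = np.array([abs(np.corrcoef(f0, resp)[0, 1]),
                  abs(np.corrcoef(dur, resp)[0, 1])])
    return w / w.sum()

typical = cue_weights(simulate_listener(w_f0=2.0, w_dur=1.0))
accented = cue_weights(simulate_listener(w_f0=2.0, w_dur=0.2))

# After exposure to the "accent", the relative duration weight drops.
print(typical, accented)
```

The weights sum to 1 by construction, so down-weighting the secondary dimension necessarily shifts reliance toward the primary one, mirroring the reweighting pattern reported above.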


Subjects
Speech Acoustics, Speech Perception, Humans, Phonetics, Speech, Language
5.
Sci Rep ; 13(1): 5303, 2023 03 31.
Article in English | MEDLINE | ID: mdl-37002277

ABSTRACT

It is well-established that individuals with autism exhibit atypical functional brain connectivity. However, the role this plays in naturalistic social settings has remained unclear. Atypical patterns may reflect core deficits or may instead compensate for deficits and promote adaptive behavior. Distinguishing these possibilities requires measuring the 'typicality' of spontaneous behavior and determining how connectivity relates to it. Thirty-nine male participants (19 autism, 20 typically-developed) engaged in 115 spontaneous conversations with an experimenter during fMRI scanning. A classifier algorithm was trained to distinguish participants by diagnosis based on 81 semantic, affective and linguistic dimensions derived from their use of language. The algorithm's graded likelihood of a participant's group membership (autism vs. typically-developed) was used as a measure of task performance and compared with functional connectivity levels. The algorithm accurately classified participants and its scores correlated with clinician-observed autism signs (ADOS-2). In support of a compensatory role, greater functional connectivity between right inferior frontal cortex and left-hemisphere social communication regions correlated with more typical language behavior, but only for the autism group. We conclude that right inferior frontal functional connectivity increases in autism during communication reflect a neural compensation strategy that can be quantified and tested even without an a priori behavioral standard.
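The classifier-based "typicality" measure described above lends itself to a compact sketch. The code below is a hypothetical stand-in for the study's pipeline (simulated features, plain gradient-descent logistic regression, in-sample accuracy with no cross-validation): each participant receives a graded probability of group membership, which could then be correlated with connectivity measures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each participant is a feature vector of language-use dimensions.
# Train a linear classifier to separate groups, then use its graded
# probability (not just the label) as a continuous behavioral measure.
n_per_group, n_features = 20, 5
autism = rng.normal(0.8, 1.0, (n_per_group, n_features))
control = rng.normal(0.0, 1.0, (n_per_group, n_features))
X = np.vstack([autism, control])
y = np.r_[np.ones(n_per_group), np.zeros(n_per_group)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression (no external ML library).
w, b = np.zeros(n_features), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

scores = sigmoid(X @ w + b)  # graded likelihood of group membership
accuracy = np.mean((scores > 0.5) == y)
print(accuracy)
```

The continuous `scores` vector is what makes the approach useful even without an a priori behavioral standard: it quantifies how "group-typical" each participant's language behavior is, on a 0-1 scale.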


Subjects
Autism Spectrum Disorder, Autistic Disorder, Humans, Male, Autistic Disorder/diagnostic imaging, Brain Mapping/methods, Brain/diagnostic imaging, Communication, Magnetic Resonance Imaging/methods, Hydrolases, Neural Pathways/diagnostic imaging
6.
J Exp Psychol Learn Mem Cogn ; 49(12): 1943-1955, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38127498

ABSTRACT

Languages differ in the importance of acoustic dimensions for speech categorization. This poses a potential challenge for second language (L2) learners, and the extent to which adult L2 learners can acquire new perceptual strategies for speech categorization remains unclear. This study investigated the effects of extensive English L2 immersion on speech perception strategies and dimension-selective attention ability in native Mandarin speakers. Experienced first language (L1) Mandarin speakers (length of U.K. residence > 3 years) demonstrated more native-like weighting of cues to L2 suprasegmental categorization relative to inexperienced Mandarin speakers (length of residence < 1 year), weighting duration more highly. However, both the experienced and the inexperienced Mandarin speakers continued to weight duration less highly and pitch more highly during musical beat categorization, and struggled to ignore pitch and selectively attend to amplitude in speech, relative to native English speakers. These results suggest that adult L2 experience can lead to retuning of perceptual strategies in specific contexts, but global acoustic salience is more resistant to change. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Language, Speech Perception, Adult, Humans, Speech, Cues, Attention, Phonetics
7.
J Exp Psychol Gen ; 152(12): 3476-3489, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37616075

ABSTRACT

Sensorimotor integration during speech has been investigated by altering the sound of a speaker's voice in real time; in response, the speaker learns to change their production of speech sounds in order to compensate (adaptation). This line of research has however been predominantly limited to very simple speaking contexts, typically involving (a) repetitive production of single words and (b) production of speech while alone, without the usual exposure to other voices. This study investigated adaptation to a real-time perturbation of the first and second formants during production of sentences either in synchrony with a prerecorded voice (synchronous speech group) or alone (solo speech group). Experiment 1 (n = 30) found no significant difference in the average magnitude of compensatory formant changes between the groups; however, synchronous speech resulted in increased between-individual variability in such formant changes. Participants also showed acoustic-phonetic convergence to the voice they were synchronizing with prior to introduction of the feedback alteration. Furthermore, the extent to which the changes required for convergence agreed with those required for adaptation was positively correlated with the magnitude of subsequent adaptation. Experiment 2 tested an additional group with a metronome-timed speech task (n = 15) and found a similar pattern of increased between-participant variability in formant changes. These findings demonstrate that speech motor adaptation can be measured robustly at the group level during performance of more complex speaking tasks; however, further work is needed to resolve whether self-voice adaptation and other-voice convergence reflect additive or interactive effects during sensorimotor control of speech. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Speech Perception, Voice, Humans, Speech/physiology, Speech Perception/physiology, Voice/physiology, Phonetics, Learning
8.
Front Neurosci ; 16: 1076374, 2022.
Article in English | MEDLINE | ID: mdl-36590301

ABSTRACT

Sound is processed along anatomically and functionally distinct streams in the brains of both human and non-human primates. We have previously proposed a general auditory processing framework in which these distinct streams are associated with different computational characteristics. In this paper, we consider how recent work supports our framework.

9.
Cognition ; 206: 104481, 2021 01.
Article in English | MEDLINE | ID: mdl-33075568

ABSTRACT

Speech and music are highly redundant communication systems, with multiple acoustic cues signaling the existence of perceptual categories. This redundancy makes these systems robust to the influence of noise, but necessitates the development of perceptual strategies: listeners need to decide how much importance to place on each source of information. Prior empirical work and modeling has suggested that cue weights primarily reflect within-task statistical learning, as listeners assess the reliability with which different acoustic dimensions signal a category and modify their weights accordingly. Here we present evidence that perceptual experience can lead to changes in cue weighting that extend across tasks and across domains, suggesting that perceptual strategies reflect both global biases and local (i.e. task-specific) learning. In two experiments, native speakers of Mandarin (N = 45)-where pitch is a crucial cue to word identity-placed more importance on pitch and less importance on other dimensions compared to native speakers of non-tonal languages English (N = 45) and Spanish (N = 27), during the perception of both English speech and musical beats. In a third experiment, we further show that Mandarin speakers are better able to attend to pitch and ignore irrelevant variation in other dimensions in speech compared to English and Spanish speakers, and even struggle to ignore pitch when asked to attend to other dimensions. Thus, an individual's idiosyncratic auditory perceptual strategy reflects a complex mixture of congenital predispositions, task-specific learning, and biases instilled by extensive experience in making use of important dimensions in their native language.


Subjects
Language, Speech Perception, Cues, Humans, Pitch Perception, Reproducibility of Results, Speech
10.
J Exp Psychol Hum Percept Perform ; 47(12): 1681-1697, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881953

ABSTRACT

In the speech-to-song illusion, certain spoken phrases are perceived as sung after repetition. One possible explanation for this increase in musicality is that, as phrases are repeated, lexical activation dies off, enabling listeners to focus on the melodic and rhythmic characteristics of stimuli and assess them for the presence of musical structure. Here we tested the idea that perception of the illusion requires implicit assessment of melodic and rhythmic structure by presenting individuals with phrases that tend to be perceived as song when repeated, as well as phrases that tend to continue to be perceived as speech when repeated, measuring the strength of the illusion as the rating difference between these two stimulus categories after repetition. Illusion strength varied widely and stably between listeners, with large individual differences and high split-half reliability, suggesting that not all listeners are equally able to detect musical structure in speech. Although variability in illusion strength was unrelated to degree of musical training, participants who perceived the illusion more strongly were proficient in several musical skills, including beat perception, tonality perception, and selective attention to pitch. These findings support models of the speech-to-song illusion in which experience of the illusion is based on detection of musical characteristics latent in spoken phrases. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
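The split-half reliability reported above for illusion strength can be illustrated with a short simulation. In this sketch (simulated ratings; the trait and noise parameters are invented for illustration), each listener's illusion strength is computed from two random halves of the items, the halves are correlated across listeners, and the Spearman-Brown formula corrects the half-length correlation up to the reliability of the full-length measure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: illusion strength is a stable individual trait; each of
# n_items trials gives a noisy observation of it per listener.
n_subj, n_items = 60, 40
true_strength = rng.normal(2.0, 1.0, n_subj)
item_scores = true_strength[:, None] + rng.normal(0, 1.5, (n_subj, n_items))

# Randomly split items into two halves and score each half per listener.
order = rng.permutation(n_items)
half_a, half_b = order[: n_items // 2], order[n_items // 2:]
score_a = item_scores[:, half_a].mean(axis=1)
score_b = item_scores[:, half_b].mean(axis=1)

r_half = np.corrcoef(score_a, score_b)[0, 1]
# Spearman-Brown correction: reliability of the full-length measure.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```

A high corrected correlation, as in this simulation, is what licenses the abstract's claim that illusion strength varies "widely and stably" between listeners: the between-listener differences are much larger than the trial-to-trial noise.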


Subjects
Illusions, Music, Speech Perception, Aptitude, Humans, Individuality, Pitch Perception, Reproducibility of Results, Speech
11.
Wellcome Open Res ; 5: 4, 2020.
Article in English | MEDLINE | ID: mdl-35282675

ABSTRACT

Prosody can be defined as the rhythm and intonation patterns spanning words, phrases and sentences. Accurate perception of prosody is an important component of many aspects of language processing, such as parsing grammatical structures, recognizing words, and determining where emphasis may be placed. Prosody perception is important for language acquisition and can be impaired in language-related developmental disorders. However, existing assessments of prosodic perception suffer from some shortcomings. These include being unsuitable for use with typically developing adults due to ceiling effects, or failing to allow the investigator to distinguish the unique contributions of individual acoustic features such as pitch and temporal cues. Here we present the Multi-Dimensional Battery of Prosody Perception (MBOPP), a novel tool for the assessment of prosody perception. It consists of two subtests: Linguistic Focus, which measures the ability to hear emphasis or sentential stress, and Phrase Boundaries, which measures the ability to hear where in a compound sentence one phrase ends and another begins. Perception of individual acoustic dimensions (Pitch and Time) can be examined separately, and test difficulty can be precisely calibrated by the experimenter because stimuli were created using a continuous voice morph space. We present validation analyses from a sample of 57 individuals and discuss how the battery might be deployed to examine perception of prosody in various populations.

12.
Elife ; 9, 2020 08 07.
Article in English | MEDLINE | ID: mdl-32762842

ABSTRACT

Individuals with congenital amusia have a lifelong history of unreliable pitch processing. Accordingly, they downweight pitch cues during speech perception and instead rely on other dimensions such as duration. We investigated the neural basis for this strategy. During fMRI, individuals with amusia (N = 15) and controls (N = 15) read sentences where a comma indicated a grammatical phrase boundary. They then heard two sentences spoken that differed only in pitch and/or duration cues and selected the best match for the written sentence. Prominent reductions in functional connectivity were detected in the amusia group between left prefrontal language-related regions and right hemisphere pitch-related regions, which reflected the between-group differences in cue weights in the same groups of listeners. Connectivity differences between these regions were not present during a control task. Our results indicate that the reliability of perceptual dimensions is linked with functional connectivity between frontal and perceptual regions and suggest a compensatory mechanism.


Spoken language is colored by fluctuations in pitch and rhythm. Rather than speaking in a flat monotone, we allow our sentences to rise and fall. We vary the length of syllables, drawing out some and shortening others. These fluctuations, known as prosody, add emotion to speech and denote punctuation. In written language, we use a comma or a period to signal a boundary between phrases. In speech, we use changes in pitch (how deep or sharp a voice sounds) or in the length of syllables.

Having more than one type of cue that can signal emotion or transitions between sentences has a number of advantages. It means that people can understand each other even when factors such as background noise obscure one set of cues. It also means that people with impaired sound perception can still understand speech. Those with a condition called congenital amusia, for example, struggle to perceive pitch, but they can compensate for this difficulty by placing greater emphasis on other aspects of speech.

Jasmin et al. showed how the brain achieves this by comparing the brain activity of people with and without amusia. Participants were asked to read sentences on a screen where a comma indicated a boundary between two phrases. They then heard two spoken sentences and had to choose the one that matched the written sentence. The spoken sentences used changes in pitch and/or syllable duration to signal the position of the comma. This provided listeners with the information needed to distinguish between "after John runs the race, ..." and "after John runs, the race...", for example.

When two brain regions communicate, they tend to increase their activity at around the same time; the regions are then said to show functional connectivity. Jasmin et al. found that, compared to healthy volunteers, people with amusia showed less functional connectivity between left hemisphere regions that process language and right hemisphere regions that process pitch. In other words, because pitch is a less reliable source of information for people with amusia, they recruit pitch-related brain regions less when processing speech.

These results add to our understanding of how brains compensate for impaired perception. This may be useful for understanding the neural basis of compensation in other clinical conditions. It could also help us design bespoke hearing aids or other communication devices, such as computer programs that convert text into speech. Such programs could tailor the pitch and rhythm characteristics of the speech they produce to suit the perception of individual users.


Subjects
Auditory Perceptual Disorders/physiopathology, Speech Perception/physiology, Adult, Aged, Female, Humans, Magnetic Resonance Imaging, Middle Aged, United Kingdom
13.
J Exp Psychol Gen ; 149(5): 914-934, 2020 May.
Article in English | MEDLINE | ID: mdl-31589067

ABSTRACT

Perception involves integration of multiple dimensions that often serve overlapping, redundant functions, for example, pitch, duration, and amplitude in speech. Individuals tend to prioritize these dimensions differently (stable, individualized perceptual strategies), but the reason for this has remained unclear. Here we show that perceptual strategies relate to perceptual abilities. In a speech cue weighting experiment (trial N = 990), we first demonstrate that individuals with a severe deficit for pitch perception (congenital amusics; N = 11) categorize linguistic stimuli similarly to controls (N = 11) when the main distinguishing cue is duration, which they perceive normally. In contrast, in a prosodic task where pitch cues are the main distinguishing factor, we show that amusics place less importance on pitch and instead rely more on duration cues-even when pitch differences in the stimuli are large enough for amusics to discern. In a second experiment testing musical and prosodic phrase interpretation (N = 16 amusics; 15 controls), we found that relying on duration allowed amusics to overcome their pitch deficits to perceive speech and music successfully. We conclude that auditory signals, because of their redundant nature, are robust to impairments for specific dimensions, and that optimal speech and music perception strategies depend not only on invariant acoustic dimensions (the physical signal), but on perceptual dimensions whose precision varies across individuals. Computational models of speech perception (indeed, all types of perception involving redundant cues e.g., vision and touch) should therefore aim to account for the precision of perceptual dimensions and characterize individuals as well as groups. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subjects
Auditory Perceptual Disorders/physiopathology, Music, Pitch Perception/physiology, Speech Perception/physiology, Acoustic Stimulation, Aged, Cues, Female, Humans, Male, Middle Aged
14.
Article in English | MEDLINE | ID: mdl-30414457

ABSTRACT

Autism spectrum disorder (ASD) is characterized by profound impairments in social abilities and by restricted interests and repetitive behaviors. Much work in the past decade has been dedicated to understanding the brain bases of ASD, and in the context of resting-state functional connectivity fMRI in high-functioning adolescents and adults, the field has established a set of reliable findings: decreased cortico-cortical interactions among brain regions thought to be engaged in social processing, along with a simultaneous increase in thalamo-cortical and striato-cortical interactions. However, few studies have attempted to manipulate these altered patterns, leading to the question of whether such patterns are actually causally involved in producing the corresponding behavioral impairments. We discuss a few such recent attempts in the domains of fMRI neurofeedback and overt social interaction during scanning, and we conclude that the evidence of causal involvement is somewhat mixed. We highlight the potential role of the thalamus and striatum in ASD and emphasize the need for studies that directly compare scanning during multiple cognitive states in addition to the resting state.


Subjects
Autism Spectrum Disorder/diagnostic imaging, Autism Spectrum Disorder/physiopathology, Brain/diagnostic imaging, Brain/physiopathology, Autism Spectrum Disorder/psychology, Humans, Magnetic Resonance Imaging, Neural Pathways/diagnostic imaging, Neural Pathways/physiopathology, Rest, Social Behavior
15.
Cortex ; 93: 146-154, 2017 08.
Article in English | MEDLINE | ID: mdl-28654816

ABSTRACT

Some people who attempt to learn a second language in adulthood meet with greater success than others. The causes driving these individual differences in second language learning skill continue to be debated. In particular, it remains controversial whether robust auditory perception can provide an advantage for non-native speech perception. Here, we tested English speech perception in native Japanese speakers using frequency-following responses, the evoked gamma-band response, and behavioral measurements. Participants whose neural responses featured less timing jitter from trial to trial performed better on perception of English consonants than participants with more variable neural timing. Moreover, this neural metric predicted consonant perception to a greater extent than did age of arrival and length of residence in the UK, and neural jitter predicted independent variance in consonant perception after these demographic variables were accounted for. Thus, difficulties with auditory perception may be one source of problems learning second languages in adulthood.
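The incremental-prediction claim above (neural jitter predicts perception even after demographic variables are accounted for) amounts to a partial correlation. The sketch below uses simulated data with invented effect sizes, not the study's measurements; the logic is to residualize both the perception scores and the jitter metric on the demographic covariates, then correlate the residuals:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sample: demographics plus a trial-to-trial neural jitter
# metric; perception is driven mainly by jitter (negative relationship).
n = 80
age_of_arrival = rng.uniform(18, 35, n)
residence_years = rng.uniform(0.5, 10, n)
jitter = rng.normal(0, 1, n)
perception = -0.6 * jitter + 0.05 * residence_years + rng.normal(0, 0.5, n)

def residualize(y, covariates):
    """Residuals of y after ordinary least squares on the covariates."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

covs = [age_of_arrival, residence_years]
r_partial = np.corrcoef(residualize(perception, covs),
                        residualize(jitter, covs))[0, 1]
print(round(r_partial, 2))
```

A substantial negative partial correlation here corresponds to the abstract's "independent variance" claim: jitter carries predictive information that age of arrival and length of residence do not.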


Subjects
Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Language, Male, Phonetics, Young Adult
16.
Neuropsychologia ; 100: 51-63, 2017 06.
Article in English | MEDLINE | ID: mdl-28400328

ABSTRACT

Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze. Further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.


Subjects
Attention/physiology, Communication, Functional Laterality/physiology, Speech Perception/physiology, Speech/physiology, Temporal Lobe/physiology, Adolescent, Adult, Brain Mapping, Female, Fixation, Ocular/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Temporal Lobe/diagnostic imaging, Young Adult
17.
Neuropsychologia ; 89: 217-224, 2016 08.
Article in English | MEDLINE | ID: mdl-27329686

ABSTRACT

The anterior region of the left superior temporal gyrus/superior temporal sulcus (aSTG/STS) has been implicated in two very different cognitive functions: sentence processing and social-emotional processing. However, the vast majority of the sentence stimuli in previous reports have been of a social or social-emotional nature suggesting that sentence processing may be confounded with semantic content. To evaluate this possibility we had subjects read word lists that differed in phrase/constituent size (single words, 3-word phrases, 6-word sentences) and semantic content (social-emotional, social, and inanimate objects) while scanned in a 7T environment. This allowed us to investigate if the aSTG/STS responded to increasing constituent structure (with increased activity as a function of constituent size) with or without regard to a specific domain of concepts, i.e., social and/or social-emotional content. Activity in the left aSTG/STS was found to increase with constituent size. This region was also modulated by content, however, such that social-emotional concepts were preferred over social and object stimuli. Reading also induced content type effects in domain-specific semantic regions. Those preferring social-emotional content included aSTG/STS, inferior frontal gyrus, posterior STS, lateral fusiform, ventromedial prefrontal cortex, and amygdala, regions included in the "social brain", while those preferring object content included parahippocampal gyrus, retrosplenial cortex, and caudate, regions involved in object processing. These results suggest that semantic content affects higher-level linguistic processing and should be taken into account in future studies.


Subjects
Bias, Emotions/physiology, Semantics, Social Behavior, Temporal Lobe/physiology, Adult, Analysis of Variance, Brain Mapping, Female, Heart Rate/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Reading, Recognition, Psychology, Temporal Lobe/diagnostic imaging, Young Adult
18.
Psychon Bull Rev ; 19(3): 499-504, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22391999

ABSTRACT

The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
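The side-of-keyboard measure described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' analysis code; the left/right split below is the standard touch-typing division of the QWERTY letter rows, which is the assumption made here.

```python
# Left- and right-hand letter zones on a standard QWERTY layout
# (standard touch-typing split; an assumption of this sketch).
LEFT = set("qwertasdfgzxcvb")
RIGHT = set("yuiophjklnm")

def right_side_advantage(word: str) -> int:
    """Count of right-hand letters minus left-hand letters in a word.

    A positive value means the word is typed mostly with the right hand,
    which the study found to be associated, on average, with more
    positive valence ratings (the "QWERTY effect").
    """
    letters = [c for c in word.lower() if c.isalpha()]
    return sum(c in RIGHT for c in letters) - sum(c in LEFT for c in letters)
```

For example, `right_side_advantage("pool")` is 4 (all four letters are right-side), while `right_side_advantage("sad")` is -3 (all left-side).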


Subjects
Emotions/physiology, Language, Psycholinguistics/methods, Writing, Cross-Cultural Comparison, Humans, Psychological Tests, Semantics
19.
PLoS One ; 5(7): e11805, 2010 Jul 28.
Article in English | MEDLINE | ID: mdl-20676371

ABSTRACT

BACKGROUND: According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences. METHODOLOGY AND PRINCIPAL FINDINGS: We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates. CONCLUSIONS: Speakers associate positive messages more strongly with dominant hand gestures and negative messages with non-dominant hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, results support and extend the body-specificity hypothesis. Furthermore, results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co-occurring speech.
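The gesture-valence association described above amounts to a 2x2 contingency analysis: gesturing hand (left/right) crossed with the valence of the co-occurring clause (positive/negative). A minimal sketch of that measure follows; the observation counts are made up for demonstration and are not data from the study.

```python
from collections import Counter

# Each observation pairs the gesturing hand with the valence of the
# co-occurring spoken clause (fabricated example data, for illustration).
observations = [
    ("left", "positive"), ("left", "positive"), ("left", "negative"),
    ("right", "negative"), ("right", "negative"), ("right", "positive"),
]
counts = Counter(observations)

def odds_ratio(counts: Counter) -> float:
    """Odds of a positive clause with a left-hand vs right-hand gesture.

    A value above 1 means left-hand gestures co-occur with positive
    clauses more often than right-hand gestures do, as the study
    reports for the two left-handed candidates.
    """
    a = counts[("left", "positive")]
    b = counts[("left", "negative")]
    c = counts[("right", "positive")]
    d = counts[("right", "negative")]
    return (a * d) / (b * c)
```

With the example counts above, the odds ratio is 4.0, i.e., a left-hand gesture is four times as likely (in odds terms) to accompany a positive clause as a right-hand one.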


Subjects
Gestures, Speech, Humans