Results 1 - 20 of 21
1.
Behav Res Methods ; 56(4): 3737-3756, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38459221

ABSTRACT

Timing and rhythm abilities are complex and multidimensional skills that are highly widespread in the general population. This complexity can be partly captured by the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery, consisting of four perceptual and five sensorimotor tests (finger-tapping), has been used in healthy adults and in clinical populations (e.g., Parkinson's disease, ADHD, developmental dyslexia, stuttering), and shows sensitivity to individual differences and impairment. However, major limitations for the generalized use of this tool are the lack of reliable and standardized norms and of a version of the battery that can be used outside the lab. To overcome these limitations, we put forward a new version of BAASTA on a tablet device capable of ensuring lab-equivalent measurements of timing and rhythm abilities. We present normative data obtained with this version of BAASTA from over 100 healthy adults between the ages of 18 and 87 years in a test-retest protocol. Moreover, we propose a new composite score to summarize beat-based rhythm capacities, the Beat Tracking Index (BTI), with close to excellent test-retest reliability. BTI derives from two BAASTA tests (beat alignment, paced tapping), and offers a swift and practical way of measuring rhythmic abilities when research imposes strong time constraints. This mobile BAASTA implementation is more inclusive and far-reaching, while opening new possibilities for reliable remote testing of rhythmic abilities by leveraging accessible and cost-efficient technologies.
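The Beat Tracking Index described above combines scores from two BAASTA tests (beat alignment, paced tapping). The abstract does not reproduce the published formula; the sketch below assumes a simple mean of z-scores against normative means and SDs, with all norm values and scores invented for illustration:

```python
def beat_tracking_index(beat_alignment, paced_tapping, norms):
    """Illustrative composite of two test scores: z-score each against a
    normative mean/SD, then average (NOT the published BTI formula)."""
    z_align = (beat_alignment - norms["align_mean"]) / norms["align_sd"]
    z_tap = (paced_tapping - norms["tap_mean"]) / norms["tap_sd"]
    return (z_align + z_tap) / 2

# hypothetical norms and participant scores
norms = {"align_mean": 0.80, "align_sd": 0.10, "tap_mean": 0.70, "tap_sd": 0.15}
bti = beat_tracking_index(0.90, 0.85, norms)  # 1.0 (one SD above the norm on both)
```

A z-score average has the convenient property that the composite stays on an interpretable "SDs from the normative mean" scale regardless of the two tests' raw units.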


Subjects
Auditory Perception , Humans , Adult , Male , Middle Aged , Female , Aged , Young Adult , Auditory Perception/physiology , Adolescent , Reproducibility of Results , Aged, 80 and over , Psychomotor Performance/physiology , Time Perception/physiology , Mobile Applications
2.
Cortex ; 149: 148-164, 2022 04.
Article in English | MEDLINE | ID: mdl-35231722

ABSTRACT

When we hear an emotional voice, does this alter how the brain perceives and evaluates a subsequent face? Here, we tested this question by comparing event-related potentials evoked by angry, sad, and happy faces following vocal expressions which varied in form (speech-embedded emotions, non-linguistic vocalizations) and emotional relationship (congruent, incongruent). Participants judged whether face targets were true exemplars of emotion (facial affect decision). Prototypicality decisions were more accurate and faster for congruent vs. incongruent faces and for targets that displayed happiness. Principal component analysis identified vocal context effects on faces in three distinct temporal factors: a posterior P200 (150-250 ms), associated with evaluating face typicality; a slow frontal negativity (200-750 ms) evoked by angry faces, reflecting enhanced attention to threatening targets; and the Late Positive Potential (LPP, 450-1000 ms), reflecting sustained contextual evaluation of intrinsic face meaning (with independent LPP responses in posterior and prefrontal cortex). Incongruent faces and faces primed by speech (compared to vocalizations) tended to increase demands on face perception at stages of structure-building (P200) and meaning integration (posterior LPP). The frontal LPP spatially overlapped with the earlier frontal negativity response; these components were functionally linked to expectancy-based processes directed towards the incoming face, governed by the form of a preceding vocal expression (especially for anger). Our results showcase differences in how vocalizations and speech-embedded emotion expressions modulate cortical operations for predicting (prefrontal) versus integrating (posterior) face meaning in light of contextual details.


Subjects
Facial Expression , Facial Recognition , Electroencephalography/methods , Emotions/physiology , Evoked Potentials/physiology , Facial Recognition/physiology , Humans
3.
Biol Psychol ; 163: 108135, 2021 07.
Article in English | MEDLINE | ID: mdl-34126165

ABSTRACT

Timing abilities help organize the temporal structure of events but are known to change systematically with age. Yet, how the neuronal signature of temporal predictability changes across the age span remains unclear. Younger (n = 21; 23.1 years) and older adults (n = 21; 68.5 years) performed an auditory oddball task, consisting of isochronous and random sound sequences. Results confirm an altered P50 response in the older compared to younger participants. P50 amplitudes differed between the isochronous and random temporal structures in the younger group, as did P200 amplitudes in the older group. These results suggest less efficient sensory gating in older adults in both isochronous and random auditory sequences. N100 amplitudes were more negative for deviant tones. P300 amplitudes were parietally enhanced in younger, but not in older adults. In younger participants, the P50 results confirm that this component marks temporal predictability, indicating sensitive gating of temporally regular sound sequences.


Assuntos
Eletroencefalografia , Potenciais Evocados Auditivos , Estimulação Acústica , Idoso , Envelhecimento , Percepção Auditiva , Humanos , Tempo de Reação , Filtro Sensorial
4.
J Cross Cult Psychol ; 52(3): 275-294, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33958813

ABSTRACT

Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual's cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced participants' gazes at emotional faces in a modified visual search task. We also conducted a cross-cultural comparison between data of this study and that of our previous work in English-speaking Canadians using analogous methodology. In both studies, eye movements were recorded as participants scanned an array of four faces portraying fearful, angry, happy, and neutral expressions, while passively listening to a pseudo-utterance expressing one of the four emotions (Mandarin utterance in this study; English utterance in our previous study). The frequency and duration of fixations to each face were analyzed during 5 seconds after the onset of faces, both during the presence of the speech (early time window) and after the utterance ended (late time window). During the late window, Chinese participants looked more frequently and longer at faces conveying emotions congruent with the speech, consistent with findings from English-speaking Canadians. Cross-cultural comparison further showed that Chinese, but not Canadians, looked more frequently and longer at angry faces, which may signal potential conflicts and social threats. We hypothesize that the socio-cultural norms related to harmony maintenance in the Eastern culture promoted Chinese participants' heightened sensitivity to, and deeper processing of, angry cues, highlighting culture-specific patterns in how individuals scan their social environment during emotion processing.

5.
Biol Psychol ; 154: 107909, 2020 07.
Article in English | MEDLINE | ID: mdl-32454081

ABSTRACT

Speakers modulate their voice (prosody) to communicate non-literal meanings, such as sexual innuendo (She inspected his package this morning, where "package" could refer to a man's penis). Here, we analyzed event-related potentials to illuminate how listeners use prosody to interpret sexual innuendo and what neurocognitive processes are involved. Participants listened to third-party statements with literal or 'sexual' interpretations, uttered in an unmarked or sexually evocative tone. Analyses revealed: (1) rapid neural differentiation of neutral vs. sexual prosody from utterance onset; (2) an N400-like response differentiating contextually constrained vs. unconstrained utterances following the critical word (reflecting integration of prosody and word meaning); and (3) a selective increase in negativity to sexual innuendo around 600 ms after the critical word. Findings show that the brain quickly integrates prosodic and lexical-semantic information to form an impression of what the speaker is communicating, triggering a unique response to sexual innuendos, consistent with their high social relevance.


Assuntos
Comportamento Sexual , Percepção da Fala/fisiologia , Fala , Encéfalo/fisiologia , Eletroencefalografia , Potenciais Evocados , Feminino , Humanos , Masculino , Voz/fisiologia
6.
J Neurodev Disord ; 10(1): 4, 2018 01 29.
Article in English | MEDLINE | ID: mdl-29378522

ABSTRACT

BACKGROUND: Fragile X syndrome (FXS) is a neurodevelopmental genetic disorder causing cognitive and behavioural deficits. Repetition suppression (RS), a learning phenomenon in which stimulus repetitions result in diminished brain activity, has been found to be impaired in FXS. Alterations in RS have been associated with behavioural problems in FXS; however, relations between RS and intellectual functioning have not yet been elucidated. METHODS: EEG was recorded in 14 FXS participants and 25 neurotypical controls during an auditory habituation paradigm using repeatedly presented pseudowords. Non-phased locked signal energy was compared across presentations and between groups using linear mixed models (LMMs) in order to investigate RS effects across repetitions and brain areas and a possible relation to non-verbal IQ (NVIQ) in FXS. In addition, we explored group differences according to NVIQ and we probed the feasibility of training a support vector machine to predict cognitive functioning levels across FXS participants based on single-trial RS features. RESULTS: LMM analyses showed that repetition effects differ between groups (FXS vs. controls) as well as with respect to NVIQ in FXS. When exploring group differences in RS patterns, we found that neurotypical controls revealed the expected pattern of RS between the first and second presentations of a pseudoword. More importantly, while FXS participants in the ≤ 42 NVIQ group showed no RS, the > 42 NVIQ group showed a delayed RS response after several presentations. Concordantly, single-trial estimates of repetition effects over the first four repetitions provided the highest decoding accuracies in the classification between the FXS participant groups. CONCLUSION: Electrophysiological measures of repetition effects provide a non-invasive and unbiased measure of brain responses sensitive to cognitive functioning levels, which may be useful for clinical trials in FXS.
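The classification step above (predicting NVIQ group in FXS from single-trial repetition-suppression features) used a support vector machine. To keep the sketch below dependency-free, it substitutes a nearest-centroid classifier under leave-one-out cross-validation; all feature values are toy numbers, not study data:

```python
def loo_accuracy(features, labels):
    """Leave-one-out cross-validation with a nearest-centroid classifier
    (a dependency-free stand-in for the SVM used in the study)."""
    correct = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        # class centroids computed on the training fold only
        cents = {}
        for c in set(train_y):
            rows = [x for x, y in zip(train_x, train_y) if y == c]
            cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
        # classify the held-out trial by nearest centroid (squared Euclidean)
        pred = min(cents, key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(features[i], cents[c])))
        correct += pred == labels[i]
    return correct / len(features)

# toy signal-energy features at presentations 1-4 of a pseudoword
low_nviq = [[1.00, 1.00, 0.98, 0.99], [1.02, 1.00, 1.01, 0.97]]   # no suppression
high_nviq = [[1.00, 0.95, 0.80, 0.70], [0.98, 0.90, 0.78, 0.72]]  # delayed suppression
acc = loo_accuracy(low_nviq + high_nviq, ["low"] * 2 + ["high"] * 2)
```

Fitting only on the training fold at every iteration is the essential part; computing centroids (or an SVM boundary) on all trials before splitting would leak the held-out trial into the model and inflate decoding accuracy.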


Assuntos
Adaptação Fisiológica , Percepção Auditiva/fisiologia , Encéfalo/fisiopatologia , Cognição , Síndrome do Cromossomo X Frágil/fisiopatologia , Síndrome do Cromossomo X Frágil/psicologia , Estimulação Acústica , Adolescente , Adulto , Criança , Eletroencefalografia , Potenciais Evocados Auditivos , Feminino , Humanos , Inteligência , Testes de Inteligência , Aprendizado de Máquina , Masculino , Adulto Jovem
7.
Neuropsychologia ; 103: 96-105, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28720526

ABSTRACT

Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sounds processing.


Assuntos
Percepção Auditiva/fisiologia , Ondas Encefálicas/fisiologia , Encéfalo/fisiologia , Emoções/fisiologia , Música , Fala , Estimulação Acústica , Adulto , Análise de Variância , Feminino , Humanos , Masculino , Música/psicologia , Testes Neuropsicológicos , Prática Psicológica , Competência Profissional , Processamento de Sinais Assistido por Computador , Adulto Jovem
8.
Int J Dev Neurosci ; 59: 52-59, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28330777

ABSTRACT

Fragile X Syndrome (FXS) is a neurodevelopmental genetic disorder associated with cognitive and behavioural deficits. In particular, neuronal habituation processes have been shown to be altered in FXS patients. Yet, while such deficits have been primarily explored using auditory stimuli, less is known in the visual modality. Here, we investigated the putative alteration of repetition suppression using faces in FXS patients compared to age-matched controls. Electroencephalographic (EEG) signals were acquired while participants were presented with 18 different faces, each repeated ten times successively. The repetition suppression effect was probed by comparing the brain responses to the first and second presentation, based on task-evoked event-related potentials (ERPs) as well as on task-induced oscillatory activity. We found different patterns of habituation for controls and patients both in ERPs and oscillatory power. While the N170 was not affected by face repetition in controls, it was altered in FXS patients. Conversely, while a repetition suppression effect was observed in the theta band (4-8 Hz) over frontal and parieto-occipital areas in controls, it was not seen in FXS patients. These results provide the first evidence for diminished ERP and oscillatory habituation effects in response to face repetitions in FXS. These findings extend previous observations of impairments in learning mechanisms and may be linked to deficits in the maturation processes of synapses caused by the mutation. The present study contributes to bridging the gap between animal models of synaptic plasticity dysfunctions and human research in FXS.


Assuntos
Córtex Cerebral/fisiopatologia , Potenciais Evocados Visuais/fisiologia , Síndrome do Cromossomo X Frágil/complicações , Transtornos da Percepção/etiologia , Percepção Visual/fisiologia , Adolescente , Adulto , Análise de Variância , Criança , Eletroencefalografia , Feminino , Análise de Fourier , Habituação Psicofisiológica/fisiologia , Humanos , Masculino , Estimulação Luminosa , Tempo de Reação/fisiologia , Adulto Jovem
9.
Soc Neurosci ; 12(6): 685-700, 2017 12.
Article in English | MEDLINE | ID: mdl-27588442

ABSTRACT

To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' length of residence in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation followed by alterations in brain activities, providing new evidence on humans' neurocognitive plasticity in communication.


Assuntos
Encéfalo/fisiologia , Cultura , Emigrantes e Imigrantes/psicologia , Emoções/fisiologia , Reconhecimento Facial/fisiologia , Percepção da Fala/fisiologia , Adulto , Canadá , China/etnologia , Potenciais Evocados , Feminino , Humanos , Masculino , Tempo de Reação , Teste de Stroop , Adulto Jovem
10.
Eur J Neurosci ; 44(10): 2786-2794, 2016 11.
Article in English | MEDLINE | ID: mdl-27600697

ABSTRACT

There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.


Assuntos
Adaptação Fisiológica , Potenciais Evocados , Música , Percepção da Fala , Adulto , Encéfalo/fisiologia , Feminino , Humanos , Masculino
11.
Front Hum Neurosci ; 9: 311, 2015.
Article in English | MEDLINE | ID: mdl-26074808

ABSTRACT

Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an Oddball task. Participants viewed in-group emotional faces, with or without simultaneous vocal expressions, while performing a face-irrelevant visual task as the EEG was recorded. A significantly larger visual Mismatch Negativity (vMMN) was observed for Chinese vs. English participants when faces were accompanied by voices, suggesting that Chinese were influenced to a larger extent by task-irrelevant vocal cues. These data highlight further differences in how adults from East Asian vs. Western cultures process socio-emotional cues, arguing that distinct cultural practices in communication (e.g., display rules) shape neurocognitive activity associated with the early perception and integration of multi-sensory emotional cues.

12.
Neuropsychologia ; 67: 1-13, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25477081

ABSTRACT

To understand how culture modulates on-line neural responses to social information, this study compared how individuals from two distinct cultural groups, English-speaking North Americans and Chinese, process emotional meanings of multi-sensory stimuli as indexed by both behaviour (accuracy) and event-related potential (N400) measures. In an emotional Stroop-like task, participants were presented with face-voice pairs expressing congruent or incongruent emotions in conditions where they judged the emotion of one modality while ignoring the other (face or voice focus task). Results indicated that while both groups were sensitive to emotional differences between channels (with lower accuracy and higher N400 amplitudes for incongruent face-voice pairs), there were marked group differences in how intruding facial or vocal cues affected accuracy and N400 amplitudes, with English participants showing greater interference from irrelevant faces than Chinese. Our data illuminate distinct biases in how adults from East Asian versus Western cultures process socio-emotional cues, supplying new evidence that cultural learning modulates not only behaviour, but the neurocognitive response to different features of multi-channel emotion expressions.


Assuntos
Encéfalo/fisiologia , Emoções/fisiologia , Percepção Social , Adulto , China , Comparação Transcultural , Potenciais Evocados , Expressão Facial , Feminino , Humanos , Masculino , Percepção da Fala , Estados Unidos , Adulto Jovem
13.
Brain Res ; 1565: 48-62, 2014 May 27.
Article in English | MEDLINE | ID: mdl-24751571

ABSTRACT

During social interactions, listeners weigh the importance of linguistic and extra-linguistic speech cues (prosody) to infer the true intentions of the speaker in reference to what is actually said. In this study, we investigated what brain processes allow listeners to detect when a spoken compliment is meant to be sincere (true compliment) or not ("white lie"). Electroencephalograms of 29 participants were recorded while they listened to Question-Response pairs, where the response was expressed in either a sincere or insincere tone (e.g., "So, what did you think of my presentation?"/"I found it really interesting."). Participants judged whether the response was sincere or not. Behavioral results showed that prosody could be effectively used to discern the intended sincerity of compliments. Analysis of temporal and spatial characteristics of event-related potentials (P200, N400, P600) uncovered significant effects of prosody on P600 amplitudes, which were greater in response to sincere versus insincere compliments. Using low resolution brain electromagnetic tomography (LORETA), we determined that the anatomical sources of this activity were likely located in the (left) insula, consistent with previous reports of insular activity in the perception of lies and concealments. These data extend knowledge of the neurocognitive mechanisms that permit context-appropriate inferences about speaker feelings and intentions during interpersonal communication.


Assuntos
Encéfalo/fisiologia , Emoções/fisiologia , Potenciais Evocados Auditivos , Julgamento/fisiologia , Adolescente , Adulto , Eletroencefalografia , Feminino , Humanos , Masculino , Adulto Jovem
14.
Front Psychol ; 4: 367, 2013.
Article in English | MEDLINE | ID: mdl-23805115

ABSTRACT

Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400-1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.
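The mean identification point analyzed above can be operationalized in several ways in gating studies. The sketch below assumes an "identified and remains correct at every longer gate" criterion, which may differ from the exact criterion of Pell and Kotz (2011); the gate durations and responses are invented:

```python
def identification_point(gate_durations_ms, responses, target):
    """Duration (ms) of the first gate from which the listener's response
    is the target emotion and stays correct at all longer gates.
    Returns None if the emotion is never stably identified."""
    for i, _ in enumerate(responses):
        if all(r == target for r in responses[i:]):
            return gate_durations_ms[i]
    return None

# hypothetical gate durations and one listener's responses for a fearful utterance
gates = [200, 400, 600, 800, 1000, 1200]
resp = ["neutral", "fear", "fear", "fear", "fear", "fear"]
ip = identification_point(gates, resp, "fear")  # 400
```

Averaging such per-trial points over listeners and items gives the per-emotion mean identification point that the abstract compares across gating directions.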

15.
Neuropsychologia ; 50(12): 2887-2896, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22944003

ABSTRACT

Emotional facial expressions (EFE) are efficiently processed when both attention and gaze are focused on them. However, what kind of processing persists when EFE are neither the target of attention nor of gaze remains largely unknown. Consequently, in this experiment we investigated whether the implicit processing of faces displayed in far periphery could still be modulated by their emotional expression. Happy, fearful and neutral faces appeared randomly for 300 ms at four peripheral locations of a panoramic screen (15 and 30° in the right and left visual fields). Reaction times and electrophysiological responses were recorded from 32 participants who had to categorize these faces according to their gender. A decrease of behavioral performance was specifically found for happy and fearful faces, probably because emotional content was automatically processed and interfered with information necessary to the task. A spatio-temporal principal component analysis of electrophysiological data confirmed an enhancement of early activity in occipito-temporal areas for emotional faces in comparison with neutral ones. Overall, these data show an implicit processing of EFE despite the strong decrease of visual performance with eccentricity. Therefore, the present research suggests that EFE could be automatically detected in peripheral vision, confirming the abilities of humans to process emotional saliency in very impoverished conditions of vision.
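The spatio-temporal principal component analysis used above extracts dominant patterns from multi-channel electrophysiological data. A minimal, dependency-free sketch of the core step — the first principal component of a trials × channels matrix via power iteration — with toy two-channel data (not study data):

```python
def first_component(data, n_iter=200):
    """First principal component (unit vector over channels) of a
    trials x channels matrix, via power iteration on the covariance
    matrix; a bare-bones stand-in for the PCA step in ERP analyses."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    centered = [[row[j] - means[j] for j in range(p)] for row in data]
    # sample covariance between channel pairs
    cov = [[sum(centered[t][i] * centered[t][j] for t in range(n)) / (n - 1)
            for j in range(p)] for i in range(p)]
    v = [1.0] * p                       # initial guess
    for _ in range(n_iter):             # power iteration converges to the
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]       # ... dominant eigenvector of cov
    return v

# toy trials x channels data where the two channels covary strongly
trials = [[1.0, 1.1], [-1.0, -0.9], [2.0, 2.1], [-2.0, -1.9]]
pc1 = first_component(trials)  # close to [0.707, 0.707]
```

In practice ERP studies use a full eigendecomposition over channels and time points (and rotate the components), but the dominant-direction idea is the same.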


Assuntos
Córtex Cerebral/fisiologia , Emoções , Potenciais Evocados/fisiologia , Expressão Facial , Reconhecimento Visual de Modelos/fisiologia , Campos Visuais/fisiologia , Adolescente , Atenção/fisiologia , Mapeamento Encefálico , Medo , Feminino , Fixação Ocular , Felicidade , Humanos , Lobo Occipital/fisiologia , Tempo de Reação , Lobo Temporal/fisiologia , Adulto Jovem
16.
Brain Res ; 1460: 50-62, 2012 Jun 15.
Article in English | MEDLINE | ID: mdl-22592075

ABSTRACT

Behavioural studies have used spatial cueing designs extensively to investigate emotional biases in individuals exhibiting clinical and sub-clinical anxiety. However, the neural processes underlying the generation of these biases remain largely unknown. In this study, people who scored unusually high or low on scales of social anxiety performed a spatial cueing task. They were asked to discriminate the orientation of arrows appearing at the location previously occupied by a lateralised cue (consisting of a face displaying an emotional or a neutral expression) or at the empty location. The results showed that the perceptual encoding of faces, indexed by P1, and mobilisation of attentional resources, reflected in P2 on occipital locations, were modulated by social anxiety. These modulations were directly linked to the social anxiety level but not to trait anxiety. By contrast, later cognitive stages and behavioural performances were not modulated by social anxiety, supporting the theory of dissociation between efficiency and effectiveness in anxiety.


Assuntos
Ansiedade/fisiopatologia , Atenção/fisiologia , Emoções/fisiologia , Expressão Facial , Comportamento Social , Adolescente , Feminino , Humanos , Masculino , Adulto Jovem
17.
PLoS One ; 7(1): e30740, 2012.
Article in English | MEDLINE | ID: mdl-22303454

ABSTRACT

Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
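The windowed gaze measures described above (frequency and duration of looks per time window) reduce to clipping each fixation to a window's boundaries. A minimal sketch with invented fixation timestamps:

```python
def fixations_in_window(fixations, start_ms, end_ms):
    """Count fixations on a face within [start_ms, end_ms) and sum their
    durations, clipping fixations that straddle the window edges.
    `fixations` is a list of (onset_ms, offset_ms) pairs."""
    count, total = 0, 0
    for onset, offset in fixations:
        lo, hi = max(onset, start_ms), min(offset, end_ms)
        if lo < hi:          # fixation overlaps the window
            count += 1
            total += hi - lo
    return count, total

# hypothetical fixations on one face (onset, offset in ms from array onset)
fix = [(100, 600), (1200, 1400), (3000, 3600)]
early = fixations_in_window(fix, 0, 1250)     # during the utterance
late = fixations_in_window(fix, 1250, 5000)   # after utterance offset
```

Clipping means a fixation spanning the 1250 ms boundary contributes (and is counted) in both windows, one reasonable convention; another is to assign each fixation wholly to the window containing its onset.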


Assuntos
Atenção/fisiologia , Orelha/fisiologia , Emoções/fisiologia , Percepção Visual/fisiologia , Comportamento/fisiologia , Face , Feminino , Fixação Ocular/fisiologia , Humanos , Masculino , Rememoração Mental/fisiologia , Estimulação Física , Movimentos Sacádicos/fisiologia , Fatores de Tempo , Adulto Jovem
18.
Neuropsychologia ; 49(7): 2013-21, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21453712

ABSTRACT

Many studies have provided evidence that the emotional content of visual stimuli modulates behavioral performance and neuronal activity. Surprisingly, these studies were carried out using stimuli presented in the center of the visual field, while the majority of visual events first appear in the peripheral visual field. In this study, we assessed the impact of the emotional facial expression of fear when projected in near and far periphery. Sixteen participants were asked to categorize fearful and neutral faces projected at four peripheral visual locations (15° and 30° of eccentricity in the right and left sides of the visual field) while reaction times and event-related potentials (ERPs) were recorded. ERPs were analyzed by means of spatio-temporal principal component and baseline-to-peak methods. Behavioral data confirmed the decrease of performance with eccentricity and showed that fearful faces induced shorter reaction times than neutral ones. Electrophysiological data revealed that the spatial position and the emotional content of faces modulated ERP components. In particular, the amplitude of N170 was enhanced by fearful facial expression. These findings shed light on how visual eccentricity modulates the processing of emotional faces and suggest that, despite impoverished visual conditions, the preferential neural coding of fearful expression of faces still persists in far peripheral vision. The emotional content of faces could therefore contribute to their foveal or attentional capture, as in social interactions.


Subjects
Face, Facial Expression, Fear/psychology, Visual Perception/physiology, Adolescent, Analysis of Variance, Color, Data Interpretation, Statistical, Electroencephalography, Emotions/physiology, Evoked Potentials/physiology, Female, Humans, Photic Stimulation, Principal Component Analysis, Reaction Time/physiology, Visual Fields/physiology, Young Adult
19.
Front Hum Neurosci ; 4: 33, 2010.
Article in English | MEDLINE | ID: mdl-20428514

ABSTRACT

Current research in affective neuroscience suggests that the emotional content of visual stimuli activates brain-body responses that could be critical to general health and physical disease. The aim of this study was to develop an integrated neurophysiological approach linking central and peripheral markers of nervous activity during the presentation of natural scenes, in order to determine the temporal stages of brain processing related to the bodily impact of emotions. More specifically, whole-head magnetoencephalogram (MEG) data and skin conductance response (SCR), a reliable autonomic marker of central activation, were recorded in healthy volunteers during the presentation of emotional (unpleasant and pleasant) and neutral pictures selected from the International Affective Picture System (IAPS). Analyses of event-related magnetic fields (ERFs) revealed greater activity at 180 ms in an occipitotemporal component for emotional pictures than for their neutral counterparts. More importantly, these early effects of emotional arousal on cerebral activity were significantly correlated with later increases in SCR magnitude. For the first time, a neuromagnetic cortical component linked to a well-documented marker of the bodily expression of emotional arousal, namely the SCR, was identified and located. This finding sheds light on the time course of the brain-body interaction during emotional arousal and provides new insights into the neural bases of complex and reciprocal mind-body links.
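The central-peripheral coupling reported here (early ERF amplitude predicting later SCR magnitude) amounts to a per-participant correlation, which can be sketched as below. The data are simulated toy values; variable names, sample size, and effect size are assumptions, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-participant values; real data would come from MEG and SCR recordings.
n = 20
erf_180ms = rng.normal(size=n)  # occipitotemporal ERF amplitude at ~180 ms
scr = 0.8 * erf_180ms + rng.normal(scale=0.5, size=n)  # later SCR magnitude

# Pearson correlation between the early cortical response and the SCR
r = np.corrcoef(erf_180ms, scr)[0, 1]
```

A significant positive `r` across participants would correspond to the brain-body coupling the abstract describes; a real analysis would add a significance test rather than inspect `r` alone.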

20.
Brain Topogr ; 20(4): 216-23, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18335307

ABSTRACT

Recent findings from event-related potential (ERP) studies have provided strong evidence that centrally presented emotional pictures can be used to assess affective processing. Moreover, several studies have shown that emotionally charged stimuli may automatically attract attention even when they are not consciously identified. Such perceptual conditions are comparable to those of peripheral vision, which is known for its low spatial resolution. The aim of the present study was to characterize, at the behavioral and neural levels, the impact of emotional visual scenes presented in peripheral vision. Eighteen participants were asked to categorize neutral and unpleasant pictures presented at central (0°) and peripheral eccentricities (−30° and +30°) while ERPs were recorded from 63 electrodes. ERPs were analyzed by means of spatio-temporal principal component analyses (PCA) in order to evaluate influences of the emotional content on ERP components for each spatial position (central vs. peripheral). The main results show that affective modulation of early ERP components exists for both centrally and peripherally presented pictures. These findings suggest that, in far peripheral vision as in central vision, the brain engages specific resources to process emotional information.
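The core decomposition behind a spatio-temporal PCA of ERP data can be sketched via SVD, as below. The random data, matrix sizes (63 electrodes, as in the study, by an assumed 300 time points), and the choice of time points as variables are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy averaged ERP data: electrodes x time points (63 electrodes as reported;
# 300 time points is an assumed value for illustration).
n_electrodes, n_times = 63, 300
data = rng.normal(size=(n_electrodes, n_times))

# Temporal PCA step: time points are variables, electrodes are observations.
# Center each time point, then decompose with SVD.
data_centered = data - data.mean(axis=0, keepdims=True)
u, s, vt = np.linalg.svd(data_centered, full_matrices=False)

explained = (s ** 2) / (s ** 2).sum()  # variance explained per component
scores = data_centered @ vt.T          # component scores per electrode
```

In a full spatio-temporal PCA, a second (spatial) PCA is typically run on the scores of the retained temporal components; condition effects are then tested on those component scores rather than on raw amplitudes.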


Subjects
Attention/physiology, Emotions/physiology, Evoked Potentials, Visual/physiology, Space Perception/physiology, Visual Fields/physiology, Adolescent, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Female, Humans, Photic Stimulation/methods, Psychophysics/methods, Reaction Time/physiology, Time Factors