ABSTRACT
OBJECTIVE: Binge drinking is a major health concern, but its cerebral correlates are still largely unexplored. We aimed to explore (1) the cognitive step at which the deficits associated with binge drinking appear and (2) the respective influence of global alcohol intake and of the specific binge-drinking consumption pattern on these deficits. METHODS: On the basis of a screening phase (593 students), 80 participants were selected and distributed into four groups (control non-drinkers, daily drinkers, low and high binge drinkers). Event-related potentials (ERPs) were recorded while participants performed a simple visual oddball task. RESULTS: Binge drinking was associated with massive ERP impairments, starting at the perceptual level (P100/N100 and N170/P2) and spreading through the attentional (N2b/P3a) and decisional (P3b) stages. Moreover, these deficits were linked with global alcohol intake as well as with the specific binge-drinking consumption pattern. CONCLUSIONS: Binge drinkers presented early and global ERP deficits, affecting both basic and high-level cognitive stages. Moreover, we showed that binge drinking is deleterious for the brain because of alcohol consumption per se, and also because of its specific consumption pattern. SIGNIFICANCE: The present results show that binge-drinking habits lead to striking brain consequences, particularly because of the repeated alternation between intense intoxication and withdrawal episodes.
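As an illustrative aside, the sketch below shows how mean amplitudes for the ERP components named above (P100, N170, N2b, P3b) can be extracted from averaged waveforms and compared between groups. It runs on synthetic data; the sampling rate, latency windows and single-electrode setup are assumptions chosen for clarity, not the parameters used in the study.

# Illustrative sketch (synthetic data, not the study's analysis code): extract mean
# amplitudes of the ERP components named in the abstract and compare two groups.
# Sampling rate, epoch limits and latency windows are assumptions chosen for clarity.
import numpy as np

SFREQ = 500                                    # Hz (assumed)
TIMES = np.arange(-0.1, 0.8, 1 / SFREQ)        # epoch from -100 to 800 ms

rng = np.random.default_rng(0)
erp_controls = rng.normal(0, 0.5, TIMES.size)  # synthetic grand average, one electrode (µV)
erp_binge = rng.normal(0, 0.5, TIMES.size)

WINDOWS = {"P100": (0.08, 0.13), "N170": (0.14, 0.20),
           "N2b": (0.20, 0.30), "P3b": (0.30, 0.50)}   # seconds, illustrative only

def mean_amplitude(erp, window):
    """Average voltage inside a latency window."""
    mask = (TIMES >= window[0]) & (TIMES <= window[1])
    return erp[mask].mean()

for name, win in WINDOWS.items():
    print(f"{name}: controls {mean_amplitude(erp_controls, win):+.2f} µV, "
          f"binge {mean_amplitude(erp_binge, win):+.2f} µV")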
Subjects
Alcohol Drinking/pathology, Brain Waves/drug effects, Central Nervous System Depressants/pharmacology, Cerebral Cortex/drug effects, Ethanol/pharmacology, Adult, Alcohol Drinking/physiopathology, Electroencephalography, Evoked Potentials/drug effects, Female, Humans, Male, Psychiatric Status Rating Scales, Reaction Time/drug effects, Young Adult
ABSTRACT
AIMS: Chronic alcoholism is classically associated with major deficits in the visual and auditory processing of emotions. However, the crossmodal (auditory-visual) processing of emotional stimuli, which occurs most frequently in everyday life, has not yet been explored. The aim of this study was to explore crossmodal processing in alcoholism, and specifically the auditory-visual facilitation effect. METHODS: Twenty patients suffering from alcoholism and 20 matched healthy controls had to detect the emotion (anger or happiness) displayed by auditory, visual or auditory-visual stimuli. The stimuli were designed to elicit a facilitation effect, namely faster reaction times (RTs) in the crossmodal condition than in the unimodal ones. RTs and performance were recorded. RESULTS: Whereas control subjects showed a significant facilitation effect, alcoholic individuals did not, as their RTs did not differ significantly across modalities. This lack of a facilitation effect is a marker of impaired auditory-visual processing. CONCLUSIONS: Crossmodal processing of complex social stimuli (such as faces and voices) is crucial for interpersonal relations. This first evidence of a crossmodal deficit in alcoholism contributes to explaining the contrast between experimental results that have, up to now, described only mild impairments in emotional facial expression (EFE) recognition in alcoholic subjects (e.g. Oscar-Berman et al., 1990) and the many clinical observations suggesting massive problems.
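For illustration only, the following sketch quantifies a facilitation effect as faster crossmodal than unimodal reaction times, using invented RT values; the distributions and the independent-samples t-test are assumptions for the sketch, not the study's design or statistics.

# Illustration only: facilitation effect = the crossmodal condition is faster than
# the faster unimodal condition. All RT values are invented; the t-test is just one
# way to check the difference, not the statistic reported in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_auditory = rng.normal(720, 80, 60)        # ms, unimodal auditory trials
rt_visual = rng.normal(650, 80, 60)          # ms, unimodal visual trials
rt_audiovisual = rng.normal(600, 80, 60)     # ms, crossmodal trials

best_unimodal = min(rt_auditory.mean(), rt_visual.mean())
facilitation = best_unimodal - rt_audiovisual.mean()
print(f"Facilitation effect: {facilitation:.1f} ms")

t, p = stats.ttest_ind(rt_visual, rt_audiovisual)   # faster unimodal vs crossmodal
print(f"t = {t:.2f}, p = {p:.4f}")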
Subjects
Alcoholism/physiopathology, Alcoholism/psychology, Anger/physiology, Happiness, Social Facilitation, Acoustic Stimulation/methods, Adult, Facial Expression, Female, Humans, Male, Middle Aged, Photic Stimulation/methods, Pilot Projects, Reaction Time/physiology
ABSTRACT
Ten healthy volunteers took part in this event-related potential (ERP) study aimed at examining the electrophysiological correlates of cross-modal audio-visual interactions in an identification task. Participants were presented either with previously learned faces and voices simultaneously (audio-visual condition, AV), or with faces (visual, V) or voices (auditory, A) separately. As expected, an interference effect of audition on vision was observed at the behavioral level, as the bimodal condition was performed more slowly than the visual condition. At the electrophysiological level, the subtraction (AV - (A + V)) revealed three distinct cerebral activities: (1) a central positive/posterior negative wave around 110 ms, (2) a central negative/posterior positive wave around 170 ms, and (3) a central positive wave around 270 ms. These data suggest that cross-modal cerebral interactions may be independent of behavioral facilitation or interference effects. Moreover, the involvement of unimodal and multisensory convergence regions in these results, as suggested by a source localization analysis, is discussed.
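A minimal sketch of the additive-model subtraction AV - (A + V) mentioned above is given below, computed on synthetic grand-average waveforms; the array shapes, sampling rate and the search window around 110 ms are illustrative assumptions, not the study's montage or parameters.

# Minimal sketch of the additive-model subtraction AV - (A + V) on synthetic
# grand-average waveforms (channels x time, µV). Shapes, sampling rate and the
# search window around 110 ms are assumptions for illustration.
import numpy as np

SFREQ = 512
TIMES = np.arange(0, 0.5, 1 / SFREQ)          # 0-500 ms post-stimulus
N_CHANNELS = 32

rng = np.random.default_rng(2)
erp_av = rng.normal(0, 1, (N_CHANNELS, TIMES.size))   # audio-visual condition
erp_a = rng.normal(0, 1, (N_CHANNELS, TIMES.size))    # auditory alone
erp_v = rng.normal(0, 1, (N_CHANNELS, TIMES.size))    # visual alone

# Crossmodal interaction: activity not explained by the sum of the unimodal responses.
interaction = erp_av - (erp_a + erp_v)

# Per channel, latency of the largest absolute deviation in a window around 110 ms.
window = (TIMES >= 0.09) & (TIMES <= 0.13)
peak_latencies = TIMES[window][np.abs(interaction[:, window]).argmax(axis=1)]
print("Peak latencies near 110 ms (s):", np.round(peak_latencies, 3))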
Subjects
Auditory Perception/physiology, Evoked Potentials/physiology, Face, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Female, Functional Laterality, Humans, Male, Photic Stimulation/methods, Reaction Time/physiology, Time Factors
ABSTRACT
Pictures from the Ekman and Friesen series were used in an event-related potentials study to define the timing of gender differences in the processing of positive (happy) and negative (fearful) facial expressions. Ten male and 10 female volunteers were presented with a visual oddball design in which they had to detect, as quickly as possible, deviant happy or fearful faces among a train of standard stimuli (neutral faces). Behavioral results suggested that both men and women detected fearful faces more quickly than happy ones. The main result is that the N2b component, functionally considered an attentional orienting mechanism, was delayed in men for happy stimuli as compared with fearful ones. Gender differences in the processing of emotional stimuli may therefore originate at the attentional level of the information processing system.
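As a hedged illustration of the latency measure behind this finding, the snippet below estimates N2b peak latency as the most negative deflection in an assumed 200-350 ms window and compares two synthetic groups; the window, sampling rate and data are placeholders, not those of the study.

# Hedged illustration on synthetic data: N2b peak latency taken as the most negative
# sample in an assumed 200-350 ms window, compared between two groups of 10.
import numpy as np

SFREQ = 250
TIMES = np.arange(-0.1, 0.7, 1 / SFREQ)
WINDOW = (TIMES >= 0.20) & (TIMES <= 0.35)    # assumed N2b window

def n2b_latency(erp):
    """Latency (s) of the most negative sample inside the N2b window."""
    return TIMES[WINDOW][erp[WINDOW].argmin()]

rng = np.random.default_rng(3)
men_happy = rng.normal(0, 1, (10, TIMES.size))     # synthetic single-channel ERPs
women_happy = rng.normal(0, 1, (10, TIMES.size))   # to happy deviant faces

lat_men = np.array([n2b_latency(e) for e in men_happy])
lat_women = np.array([n2b_latency(e) for e in women_happy])
print(f"Mean N2b latency to happy deviants: men {lat_men.mean() * 1000:.0f} ms, "
      f"women {lat_women.mean() * 1000:.0f} ms")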
Subjects
Emotions/physiology, Evoked Potentials, Visual/physiology, Facial Expression, Sex Characteristics, Adult, Analysis of Variance, Brain Mapping, Case-Control Studies, Electroencephalography/methods, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology
ABSTRACT
An ERP study of 9 healthy participants was carried out to temporally constrain the neural network proposed by Campanella et al. (2001) in a PET study investigating the cerebral areas involved in the retrieval of face-name associations. Three learning sessions served to familiarize the participants with 24 face-name associations grouped into 12 male/female couples. During EEG recording, participants were presented with four experimental conditions requiring the retrieval of previously learned couples on the basis of name-name (NN), face-face (FF), name-face (NF), or face-name (FN) pairs of stimuli. The main analysis consisted of subtracting the nonmixed conditions (NN and FF) from the mixed conditions (NF and FN). It revealed two main ERP components: a negative wave peaking at left parieto-occipital sites around 285 ms and its positive counterpart recorded at left centro-frontal electrodes around 300 ms. Moreover, a dipole model using three dipoles whose locations corresponded to the three cerebral areas observed in the PET study (left inferior frontal gyrus, left medial frontal gyrus, left inferior parietal lobe) explained more than 90% of the variance in the results. The complementarity between anatomical and neurophysiological techniques allowed us to discuss the temporal course of these cerebral activities and to propose an interactive and original anatomo-temporal model of the retrieval of face-name associations.
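The sketch below illustrates, on synthetic data, the mixed-minus-nonmixed ERP contrast described above together with the kind of variance-explained figure used to judge a dipole model; the "predicted" data here are a noisy stand-in, since a real analysis would reconstruct the prediction from a forward model of the fitted dipoles.

# Synthetic-data sketch (not the authors' pipeline): the mixed-minus-nonmixed ERP
# contrast, plus a variance-explained figure of the kind used to judge a dipole
# model. A noisy copy of the data stands in for the model's prediction here.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 64, 300

# Grand-average ERPs per condition (channels x time, µV), synthetic placeholders.
erp = {c: rng.normal(0, 1, (n_channels, n_times)) for c in ("NN", "FF", "NF", "FN")}

# Contrast: mean of mixed (NF, FN) minus mean of nonmixed (NN, FF) conditions.
mixed_minus_nonmixed = (erp["NF"] + erp["FN"]) / 2 - (erp["NN"] + erp["FF"]) / 2

predicted = mixed_minus_nonmixed + rng.normal(0, 0.3, mixed_minus_nonmixed.shape)
residual = mixed_minus_nonmixed - predicted
explained = 1 - residual.var() / mixed_minus_nonmixed.var()
print(f"Variance explained by the model: {explained * 100:.1f}%")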
Subjects
Electroencephalography, Memory/physiology, Adult, Evoked Potentials, Face, Female, Humans, Male, Photic Stimulation, Social Perception
ABSTRACT
A PET study of seven normal individuals was carried out to investigate the neural populations involved in the retrieval of the visual representation of a face when presented with an associated name, and vice versa. Face-name associations were studied by means of four experimental matching conditions, including the retrieval of previously learned (1) name-name (NN), (2) face-face (FF), (3) name-face (NF), and (4) face-name (FN) associations, as well as a resting scan with eyes closed. Before PET image acquisition, subjects were presented with 24 unknown face-name associations to encode, grouped into 12 male/female couples. During PET scanning, their task was to decide whether the presented pair was a previously learned association. The right fusiform gyrus was strongly activated in the FF condition as compared with the NN and rest conditions. However, no specific activations were found for the NN condition relative to the FF condition. A network of three areas distributed in the left hemisphere, active in both the (NF-FF) and (FN-NN) comparisons, was interpreted as the locus of the integration of visual face and name representations. These three regions were localized in the inferior frontal gyrus (BA 45), the medial frontal gyrus (BA 6) and the supramarginal gyrus of the inferior parietal lobe (BA 40). An interactive model accounting for these results, with BA 40 seen as an amodal binding region, is proposed.
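Purely as an illustration, the following snippet marks voxels jointly above threshold in synthetic t-maps for the (NF-FF) and (FN-NN) contrasts, a simple conjunction in the spirit of the comparison summarized above; the grid size and threshold are arbitrary assumptions, and the data are not SPM output.

# Illustration with synthetic voxel data (not SPM output): voxels jointly above
# threshold in the (NF-FF) and (FN-NN) contrasts, i.e. a simple conjunction.
# Grid size and threshold are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(5)
shape = (40, 48, 40)                          # assumed voxel grid

t_nf_minus_ff = rng.normal(0, 1, shape)       # synthetic t-map, NF - FF
t_fn_minus_nn = rng.normal(0, 1, shape)       # synthetic t-map, FN - NN

THRESHOLD = 3.1                               # illustrative t threshold
conjunction = (t_nf_minus_ff > THRESHOLD) & (t_fn_minus_nn > THRESHOLD)
print(f"Voxels active in both contrasts: {int(conjunction.sum())}")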