Results 1 - 20 of 23
1.
Brain Topogr ; 28(3): 423-36, 2015 May.
Article in English | MEDLINE | ID: mdl-24531985

ABSTRACT

Attention improves the processing of specific information while other stimuli are disregarded. A good balance between bottom-up (attentional capture by unexpected salient stimuli) and top-down (selection of relevant information) mechanisms is crucial to be both task-efficient and aware of our environment. Only a few studies have explored how an isolated, unexpected, task-irrelevant stimulus outside the attention focus can disturb the top-down attention mechanisms necessary for good performance of the ongoing task, and how these top-down mechanisms can modulate the bottom-up mechanisms of attentional capture triggered by an unexpected event. We recorded scalp electroencephalography in 18 young adults performing a new paradigm that measures distractibility and assesses bottom-up and top-down attention mechanisms at the same time. Increasing the top-down attentional load of the task was found to reduce early processing of the distracting sound, but neither the bottom-up attentional capture mechanisms nor the behavioral distraction cost in reaction time. Moreover, the impact of bottom-up attentional capture by distracting sounds on target processing was revealed as a delayed latency of the N100 sensory response to target sounds, mirroring increased reaction times. These results provide crucial insights into how bottom-up and top-down mechanisms dynamically interact and compete in the human brain, i.e., into the precarious balance between voluntary attention and distraction.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Neuropsychological Tests , Reaction Time , Young Adult
2.
Neuroimage ; 94: 172-184, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-24636881

ABSTRACT

Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early-deaf individuals was paired with a reduced response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing upcoming visual events online, indicating that cross-modally recruited auditory cortices can reach this level of computation.


Subject(s)
Auditory Cortex/physiopathology , Deafness/physiopathology , Form Perception , Motion Perception , Nerve Net/physiopathology , Neuronal Plasticity , Recruitment, Neurophysiological , Adult , Disease Progression , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Reaction Time , Young Adult
3.
Brain Topogr ; 27(4): 428-37, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24166202

ABSTRACT

MMN oddball paradigms are frequently used to assess auditory (dys)functions in clinical populations, or the influence of various factors (such as drugs and alcohol) on auditory processing. A widely used procedure is to compare the MMN responses between two groups of subjects (e.g., patients vs. controls), or between experimental conditions in the same group. To correctly interpret these comparisons, it is important to take into account the multiple brain generators that produce the MMN response. To disentangle the different components of the MMN, we describe the advantages of scalp current density (SCD), or surface Laplacian, computation for ERP analysis. We provide a short conceptual and mathematical description of SCDs, describe their properties, and illustrate with examples from published studies how they can benefit MMN analysis. We conclude with practical tips on how to correctly use and interpret SCDs in this context.


Subject(s)
Brain Mapping/methods , Brain/physiology , Evoked Potentials, Auditory , Scalp/innervation , Humans
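The MMN comparisons discussed in this entry rest on a deviant-minus-standard difference wave. A minimal NumPy sketch of that computation follows; the single-channel epoch arrays, 1 kHz sampling rate, and 100-250 ms search window are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def mmn_difference_wave(standard_epochs, deviant_epochs, sfreq, window=(0.100, 0.250)):
    """Average ERPs per condition, subtract, and locate the MMN peak.

    standard_epochs, deviant_epochs: arrays of shape (n_trials, n_samples),
    single channel for simplicity. sfreq: sampling rate in Hz.
    window: latency range (s) in which to search for the negative peak.
    """
    erp_std = standard_epochs.mean(axis=0)   # average across trials
    erp_dev = deviant_epochs.mean(axis=0)
    diff = erp_dev - erp_std                 # deviant minus standard
    lo, hi = (int(t * sfreq) for t in window)
    peak_idx = lo + np.argmin(diff[lo:hi])   # the MMN is a negativity
    return diff, peak_idx / sfreq            # difference wave, peak latency (s)

# Synthetic demo: deviants carry an extra negativity around 150 ms
sfreq, n_samples = 1000, 400                 # 400 ms epochs at 1 kHz
t = np.arange(n_samples) / sfreq
rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * 5 * t)             # slow ERP shape shared by both conditions
mmn = -2.0 * np.exp(-((t - 0.150) ** 2) / (2 * 0.02 ** 2))
standards = base + 0.1 * rng.standard_normal((200, n_samples))
deviants = base + mmn + 0.1 * rng.standard_normal((50, n_samples))
diff, latency = mmn_difference_wave(standards, deviants, sfreq)
print(f"MMN peak latency: {latency * 1000:.0f} ms")
```

Because the shared ERP shape cancels in the subtraction, only the deviance-related negativity (plus residual noise) survives in the difference wave, which is precisely why group or condition comparisons are then run on this trace.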
4.
J Cogn Neurosci ; 25(3): 365-73, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23190327

ABSTRACT

Neural representation of auditory regularities can be probed using the MMN, a component of ERPs generated in the auditory cortex by any violation of that regularity. Although several studies have shown that visual information can influence or even trigger an MMN by altering an acoustic regularity, it is not known whether audiovisual regularities are encoded in the auditory representation supporting MMN generation. We compared the MMNs elicited by the auditory violation of (a) an auditory regularity (a succession of identical standard sounds), (b) an audiovisual regularity (a succession of identical audiovisual stimuli), and (c) an auditory regularity accompanied by variable visual stimuli. In all three conditions, the physical difference between the standard and the deviant sound was identical. We found that the MMN triggered by the same auditory deviance was larger for audiovisual regularities than for auditory-only regularities or for auditory regularities paired with variable visual stimuli, suggesting that the visual regularity influenced the representation of the auditory regularity. This result provides evidence for the encoding of audiovisual regularities in the human brain.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Electroencephalography/methods , Evoked Potentials/physiology , Reaction Time/physiology , Visual Perception/physiology , Adult , Electroencephalography/instrumentation , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Male , Young Adult
5.
Neuropsychologia ; 50(5): 979-87, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22349441

ABSTRACT

Automatic stimulus-change detection is usually investigated in the auditory modality by studying the Mismatch Negativity (MMN). Although the change-detection process occurs in all sensory modalities, little is known about visual deviance detection, particularly regarding the development of this brain function throughout childhood. The aim of the present study was to examine the maturation of the electrophysiological response to unattended deviant visual stimuli in 11-year-old children. Twelve children and 12 adults were presented with a passive visual oddball paradigm using dynamic stimuli involving changes in form and motion. Visual mismatch responses were identified over occipito-parietal sites in both groups, but they displayed several differences. In adults, the response clearly culminated at around 210 ms, whereas in children three successive negative deflections were observed between 150 and 330 ms. Moreover, the main mismatch response in children was characterized by a positive component peaking over occipito-parieto-temporal regions around 450 ms after deviant stimulus onset. The findings show that the organization of the vMMN response is not yet mature in 11-year-old children and that more time is still needed to process simple visual deviance at this late stage of child development.


Subject(s)
Brain Mapping , Child Development/physiology , Evoked Potentials, Visual/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adolescent , Adult , Age Factors , Analysis of Variance , Child , Contingent Negative Variation/physiology , Electroencephalography , Female , Humans , Male , Photic Stimulation/methods , Reaction Time/physiology , Time Factors , Young Adult
6.
PLoS One ; 6(9): e25607, 2011.
Article in English | MEDLINE | ID: mdl-21980501

ABSTRACT

Individuals with profound deafness rely critically on vision to interact with their environment. Improvement of visual performance as a consequence of auditory deprivation is assumed to result from cross-modal changes occurring in late stages of visual processing. Here we measured reaction times and event-related potentials (ERPs) in profoundly deaf adults and hearing controls during a speeded visual detection task, to assess to what extent the enhanced reactivity of deaf individuals could reflect plastic changes in the early cortical processing of the stimulus. We found that deaf subjects were faster than hearing controls at detecting the visual targets, regardless of their location in the visual field (peripheral or peri-foveal). This behavioural facilitation was associated with ERP changes starting from the first detectable response in the striate cortex (C1 component) at about 80 ms after stimulus onset, and in the P1 complex (100-150 ms). In addition, we found that P1 peak amplitudes predicted the response times in deaf subjects, whereas in hearing individuals visual reactivity and ERP amplitudes correlated only at later stages of processing. These findings show that long-term auditory deprivation can profoundly alter visual processing from the earliest cortical stages. Furthermore, our results provide the first evidence of a co-variation between modified brain activity (cortical plasticity) and behavioural enhancement in this sensory-deprived population.


Subject(s)
Deafness/physiopathology , Visual Cortex/physiology , Adolescent , Adult , Behavior/physiology , Evoked Potentials, Visual , Female , Hearing/physiology , Humans , Male , Middle Aged , Photic Stimulation , Time Factors , Young Adult
7.
Brain Res ; 1396: 35-44, 2011 Jun 17.
Article in English | MEDLINE | ID: mdl-21558041

ABSTRACT

Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection thresholds in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped by flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient at improving perceptual performance when an isolated modality is deficient.


Subject(s)
Auditory Perception/physiology , Sensory Thresholds/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Cues , Female , Humans , Male , Photic Stimulation/methods , Young Adult
8.
Hear Res ; 258(1-2): 143-51, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19573583

ABSTRACT

In this review, we examine the contribution of human electrophysiological studies (EEG, sEEG and MEG) to the study of visual influence on processing in the auditory cortex. Focusing mainly on studies performed by our group, we critically review the evidence showing (1) that visual information can both activate and modulate the activity of the auditory cortex at relatively early stages (mainly at the processing stage of the auditory N1 wave) in response to both speech and non-speech sounds and (2) that visual information can be included in the representation of both speech and non-speech sounds in auditory sensory memory. We describe an important conceptual tool in the study of audiovisual interaction (the additive model) and show the importance of considering the spatial distribution of electrophysiological data when interpreting EEG results. Review of these studies points to the probable role of sensory, attentional and task-related factors in modulating audiovisual interactions in the auditory cortex.


Subject(s)
Auditory Cortex/anatomy & histology , Electroencephalography/methods , Magnetoencephalography/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Electrophysiology/methods , Hearing , Humans , Models, Neurological , Perception , Sound , Speech , Time Factors , Vision, Ocular , Visual Perception/physiology
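The additive model described in this review tests for audiovisual interaction by comparing the bimodal response with the sum of the unimodal responses, AV − (A + V): under pure linear summation the difference is flat, and deviations from zero index crossmodal interactions. A minimal NumPy sketch under synthetic data (the array shapes, latencies, and 30% suppression factor are illustrative assumptions, not values from the reviewed studies):

```python
import numpy as np

def audiovisual_interaction(erp_av, erp_a, erp_v):
    """Additive-model interaction term: AV - (A + V).

    Each input is an ERP array of shape (n_channels, n_samples).
    A nonzero result indicates that the bimodal response is not the
    linear sum of the unimodal responses.
    """
    return erp_av - (erp_a + erp_v)

# Synthetic demo: the AV response is a suppressed version of A + V
sfreq, n_samples = 1000, 300
t = np.arange(n_samples) / sfreq
erp_a = -3.0 * np.exp(-((t - 0.120) ** 2) / (2 * 0.015 ** 2))[None, :]  # auditory N1-like wave
erp_v = -1.0 * np.exp(-((t - 0.180) ** 2) / (2 * 0.020 ** 2))[None, :]  # visual response
erp_av = 0.7 * (erp_a + erp_v)          # assumed 30% suppression in the bimodal condition
interaction = audiovisual_interaction(erp_av, erp_a, erp_v)
# Suppressive interaction: a positive deflection where the summed N1 is negative
print(f"peak interaction at {t[np.argmax(np.abs(interaction[0]))] * 1000:.0f} ms")
```

As the review notes, the spatial distribution of this difference matters: a cross-modal term at a given latency must be checked against the scalp topography of the unimodal responses before it is attributed to the auditory cortex.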
9.
J Neurosci ; 28(52): 14301-10, 2008 Dec 24.
Article in English | MEDLINE | ID: mdl-19109511

ABSTRACT

Hemodynamic studies have shown that the auditory cortex can be activated by visual lip movements and is a site of interactions between auditory and visual speech processing. However, they provide no information about the chronology and mechanisms of these cross-modal processes. We recorded intracranial event-related potentials to auditory, visual, and bimodal speech syllables from depth electrodes implanted in the temporal lobe of 10 epileptic patients (altogether 932 contacts). We found that lip movements activate secondary auditory areas very shortly (≈10 ms) after the activation of the visual motion area MT/V5. After this putatively feedforward visual activation of the auditory cortex, audiovisual interactions took place in the secondary auditory cortex, from 30 ms after sound onset and before any activity in the polymodal areas. Audiovisual interactions in the auditory cortex, as estimated in a linear model, consisted of both a total suppression of the visual response to lipreading and a decrease of the auditory responses to the speech sound in the bimodal condition compared with the unimodal conditions. These findings demonstrate that audiovisual speech integration does not respect the classical hierarchy from sensory-specific to associative cortical areas, but rather engages multiple cross-modal mechanisms at the first stages of nonprimary auditory cortex activation.


Subject(s)
Auditory Cortex/physiopathology , Brain Mapping , Epilepsies, Partial/pathology , Epilepsies, Partial/physiopathology , Evoked Potentials, Auditory/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Reaction Time/physiology , Time Factors , Young Adult
10.
J Cogn Neurosci ; 20(1): 49-64, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17919079

ABSTRACT

Timbre characterizes the identity of a sound source. On psychoacoustic grounds, it has been described as a multidimensional perceptual attribute of complex sounds. Using Garner's interference paradigm, we found in a previous behavioral study that three timbral dimensions exhibited interactive processing. These timbral dimensions acoustically corresponded to attack time, spectral centroid, and spectrum fine structure. Here, using event-related potentials (ERPs), we sought neurophysiological correlates of the interactive processing of these dimensions of timbre. ERPs allowed us to dissociate several levels of interaction, at both early perceptual and late stimulus identification stages of processing. The cost of filtering out an irrelevant timbral dimension was accompanied by a late negative-going activity, whereas congruency effects between timbre dimensions were associated with interactions in both early sensory and late processing stages. ERPs also helped to determine the similarities and differences in the interactions displayed by the different pairs of timbre dimensions, revealing in particular variations in the latencies at which temporal and spectral timbre dimensions can interfere with the processing of another spectral timbre dimension.


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Evoked Potentials, Auditory/physiology , Mental Processes/physiology , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Auditory Pathways/physiology , Contingent Negative Variation , Female , Humans , Male , Models, Neurological , Reaction Time/physiology , Reference Values , Sound Spectrography , Statistics, Nonparametric
11.
J Neurosci ; 27(35): 9252-61, 2007 Aug 29.
Article in English | MEDLINE | ID: mdl-17728439

ABSTRACT

In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims to decipher the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude-modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.


Subject(s)
Attention/physiology , Auditory Cortex/physiopathology , Brain Mapping , Evoked Potentials, Auditory/physiology , Sound , Acoustic Stimulation/methods , Adult , Dose-Response Relationship, Radiation , Electroencephalography/methods , Epilepsy/pathology , Female , Functional Laterality , Humans , Male , Middle Aged , Statistics, Nonparametric , Time Factors
12.
J Neurosci ; 27(29): 7838-46, 2007 Jul 18.
Article in English | MEDLINE | ID: mdl-17634377

ABSTRACT

Deprivation from normal sensory input has been shown to alter tonotopic organization of the human auditory cortex. In this context, cochlear implant subjects provide an interesting model in that profound deafness is made partially reversible by the cochlear implant. In restoring afferent activity, cochlear implantation may also reverse some of the central changes related to deafness. The purpose of the present study was to address whether the auditory cortex of cochlear implant subjects is tonotopically organized. The subjects were thirteen adults with at least 3 months of cochlear implant experience. Auditory event-related potentials were recorded in response to electrical stimulation delivered at different intracochlear electrodes. Topographic analysis of the auditory N1 component (approximately 85 ms latency) showed that the locations on the scalp and the relative amplitudes of the positive/negative extrema differ according to the stimulated electrode, suggesting that distinct sets of neural sources are activated. Dipole modeling confirmed electrode-dependent orientations of these sources in temporal areas, which can be explained by nearby, but distinct sites of activation in the auditory cortex. Although the cortical organization in cochlear implant users is similar to the tonotopy found in normal-hearing subjects, some differences exist. Nevertheless, a correlation was found between the N1 peak amplitude indexing cortical tonotopy and the values given by the subjects for a pitch scaling task. Hence, the pattern of N1 variation likely reflects how frequencies are coded in the brain.


Subject(s)
Auditory Cortex/physiopathology , Auditory Perception/physiology , Brain Mapping , Cochlear Implantation , Deafness/physiopathology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Aged , Analysis of Variance , Auditory Cortex/radiation effects , Deafness/pathology , Deafness/surgery , Electric Stimulation/methods , Electrodes , Electroencephalography/methods , Evoked Potentials, Auditory/radiation effects , Female , Functional Laterality , Humans , Male , Middle Aged , Reaction Time/physiology
13.
Brain Res ; 1138: 159-70, 2007 Mar 23.
Article in English | MEDLINE | ID: mdl-17261274

ABSTRACT

Timbre characterizes the identity of a sound source. Psychoacoustic studies have revealed that timbre is a multidimensional perceptual attribute with multiple underlying acoustic dimensions of both temporal and spectral types. Here we investigated the relations among the processing of three major timbre dimensions characterized acoustically by attack time, spectral centroid, and spectrum fine structure. All three pairs of these dimensions exhibited Garner interference: speeded categorization along one timbre dimension was affected by task-irrelevant variations along another timbre dimension. We also observed congruency effects: certain pairings of values along two different dimensions were categorized more rapidly than others. The exact profile of interactions varied across the three pairs of dimensions tested. The results are interpreted within the frame of a model postulating separate channels of processing for auditory attributes (pitch, loudness, timbre dimensions, etc.) with crosstalk between channels.


Subject(s)
Auditory Perception , Mental Processes , Models, Psychological , Psychoacoustics , Acoustic Stimulation/methods , Humans , Reaction Time
14.
J Cogn Neurosci ; 18(12): 1959-72, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17129184

ABSTRACT

Timbre is a multidimensional perceptual attribute of complex tones that characterizes the identity of a sound source. Our study explores the representation in auditory sensory memory of three timbre dimensions (acoustically related to attack time, spectral centroid, and spectrum fine structure), using the mismatch negativity (MMN) component of the auditory event-related potential. MMN is elicited by a discriminable change in a sound sequence and reflects the detection of the discrepancy between the current stimulus and traces in auditory sensory memory. The stimuli used in the present study were carefully controlled synthetic tones. MMNs were recorded after changes along each of the three timbre dimensions and their combinations. Additivity of unidimensional MMNs and dipole modeling results suggest partially separate MMN generators for different timbre dimensions, reflecting their mainly separate processing in auditory sensory memory. The results expand to timbre dimensions a property of separation of the representation in sensory memory that has already been reported between basic perceptual attributes (pitch, loudness, duration, and location) of sound sources.


Subject(s)
Auditory Perception/physiology , Loudness Perception/physiology , Memory/physiology , Pitch Discrimination/physiology , Adult , Data Interpretation, Statistical , Electroencephalography , Female , Humans , Male , Middle Aged , Models, Neurological
15.
Brain Res ; 1082(1): 142-52, 2006 Apr 12.
Article in English | MEDLINE | ID: mdl-16703673

ABSTRACT

Hearing one's own first name automatically elicits a robust electrophysiological response, even in conditions of reduced consciousness like sleep. In a search for objective clues to superior cognitive functions in comatose patients, we looked for an optimal auditory stimulation paradigm mobilizing a large population of neurons. Our hypothesis was that larger ERPs would be obtained in response to the subject's own name (SON) when a familiar person uttered it. In 15 healthy awake volunteers, we tested a passive oddball paradigm with three different novel stimuli presented with the same probability (P = 0.02): SON uttered by a familiar voice (FV) or by an unknown voice (NFV), and a non-vocal stimulus (NV) which preserved most of the physical characteristics of SON FV. ERPs (32 electrodes) and scalp current density (SCD) maps were analyzed. SON appeared to generate more robust responses related to involuntary attention switching (MMN/N2b, novelty P3) than NV. When uttered by a familiar person, the SON elicited larger response amplitudes in the late phase of the novelty P3 (after 300 ms). The most important differences were found in the late slow waves, where two components could be temporally and spatially dissociated. A larger parietal component for FV than for NFV suggested deeper high-level processing, even though the subjects were not required to explicitly differentiate or recognize the voices. This passive protocol could therefore provide a valuable tool for clinicians to test residual superior cognitive functions in uncooperative patients.


Subject(s)
Brain/physiology , Evoked Potentials, Auditory/physiology , Names , Recognition, Psychology/physiology , Voice , Acoustic Stimulation/methods , Adult , Analysis of Variance , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Reaction Time/physiology , Time Factors
16.
Exp Brain Res ; 166(3-4): 337-44, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16041497

ABSTRACT

The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To address this issue, we compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interact before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.


Subject(s)
Auditory Perception/physiology , Memory/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Attention/physiology , Electroencephalography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Male , Photic Stimulation
17.
Neurosci Lett ; 379(2): 144-8, 2005 May 06.
Article in English | MEDLINE | ID: mdl-15823432

ABSTRACT

Event related potentials (ERPs) were recorded from subjects who had to perform either an identification task or a simple detection task on moving visual stimuli. Results showed that the amplitude of the so-called visual "N1" component was larger for identification than for mere detection, replicating previous data obtained with static stimuli. However, we also found that: (i) the onset, peak and offset latencies of the visual N1 to dynamic stimuli were significantly earlier in the detection task than in the identification task, and (ii) in both conditions, the coordinates of the equivalent current dipoles best explaining the visual N1 component were consistent with those of the human motion visual area MT+/V5 in the extrastriate cortex. Altogether, these results indicate that dynamic stimuli may activate (at least partly) different pathways and processes in extrastriate cortex according to the nature of the task required on these stimuli.


Subject(s)
Discrimination, Psychological/physiology , Evoked Potentials, Visual/physiology , Motion Perception/physiology , Reaction Time/physiology , Visual Cortex/physiology , Adult , Analysis of Variance , Brain Mapping , Electroencephalography/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Male , Photic Stimulation/methods , Visual Cortex/anatomy & histology , Visual Pathways/physiology
18.
Eur J Neurosci ; 20(8): 2225-34, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15450102

ABSTRACT

While everyone has experienced that seeing lip movements may improve speech perception, little is known about the neural mechanisms by which audiovisual speech information is combined. Event-related potentials (ERPs) were recorded while subjects performed an auditory recognition task among four different natural syllables randomly presented in the auditory (A), visual (V) or congruent bimodal (AV) condition. We found that: (i) bimodal syllables were identified more rapidly than auditory alone stimuli; (ii) this behavioural facilitation was associated with cross-modal [AV-(A+V)] ERP effects around 120-190 ms latency, expressed mainly as a decrease of unimodal N1 generator activities in the auditory cortex. This finding provides evidence for suppressive, speech-specific audiovisual integration mechanisms, which are likely to be related to the dominance of the auditory modality for speech perception. Furthermore, the latency of the effect indicates that integration operates at pre-representational stages of stimulus analysis, probably via feedback projections from visual and/or polymodal areas.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Photic Stimulation/methods , Speech/physiology , Adult , Evoked Potentials/physiology , Female , Humans , Least-Squares Analysis , Male , Reaction Time/physiology
19.
Exp Brain Res ; 152(1): 79-86, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12879183

ABSTRACT

Higher cognitive processes include the ability to reliably transform sensory or mnemonic information. These processes either occur automatically or are consciously controlled. To compare these two types of information processing, we developed a reaction time task that requires either a rule operation or a direct sensory association. We were interested in evaluating the brain's electrical activity corresponding to both tasks, using event-related potentials (ERPs). In order to gain complete insight into the electrical activity of a stimulus-response segment, we analyzed the ERPs corresponding to the processing of the stimulus and the ERPs corresponding to the preparation of the response. To complete the analysis, we also evaluated the lateralized readiness potential (LRP) matched to the stimulus and to the response onset. Compared with the sensory association task, the rule operation generated a higher negative potential field at frontocentral scalp areas in a latency range of 312-512 ms after the stimulus. In contrast, the LRP showed a negative component in the sensory association task that was absent during the rule operation; the latency of the difference was in the range 374-532 ms after the stimulus. The ERP component obtained by the response-onset analysis was more negative in the rule condition up to 214 ms before movement onset; the effect was localized at frontal and central scalp regions. We failed to find any significant difference in the LRP matched to the response onset. These results suggest that the brain computation of the rule operation takes place approximately in the middle of the stimulus-response time interval and that it is an additive process to the sensory association response.


Subject(s)
Evoked Potentials/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Reaction Time/physiology , Adult , Electroencephalography/methods , Humans
20.
Brain Res Cogn Brain Res ; 16(3): 383-90, 2003 May.
Article in English | MEDLINE | ID: mdl-12706218

ABSTRACT

The spatiotemporal dynamics of the cerebral network involved in novelty processing was studied by means of scalp current density (SCD) analysis of the novelty P3 (nP3) event-related brain potential (ERP). ERPs were recorded from 30 scalp electrodes at the occurrence of novel unpredictable environmental sounds during the performance of a visual discrimination task. Increased SCD was observed at left frontotemporal (FT3), bilateral temporoparietal (TP3 and TP4) and prefrontal locations (F8-F4 and F7-F3), suggesting novelty-P3 generators located in the left auditory cortex, and bilaterally in temporoparietal and prefrontal association regions. Additional increased SCD was found at a central location (Cz) and at superior parietal locations (P3-Pz-P4). The SCD of the nP3 was therefore generated at three successive, partially overlapping, stages of neuroelectric activation. At the central location, SCD started to be significant before the onset of the nP3 waveform, contributing solely to its early phase. At temporoparietal and left frontotemporal locations, nP3 electrophysiological activity was characterized by sustained current density, starting at about 210 ms and continuing during the full latency range of the response, including its early and late phases. At its late phase, the nP3 was characterized by sharp phasic current density at prefrontal and superior parietal locations, starting at about 290 ms and vanishing at around 385 ms. Taken together, these results provide the first evidence of the cerebral spatio-temporal dynamics underlying novelty processing.


Subject(s)
Electroencephalography , Evoked Potentials, Auditory/physiology , Nerve Net/physiology , Acoustic Stimulation , Adult , Auditory Cortex/physiology , Discrimination, Psychological/physiology , Electrophysiology , Evoked Potentials/physiology , Female , Frontal Lobe/physiology , Humans , Male , Visual Perception/physiology