Results 1 - 20 of 49
1.
Neuroimage ; 244: 118556, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34492292

ABSTRACT

Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both likely influencing attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via the distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent) and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset.
Our results reveal that both stimulus meaning and predictability modulate attentional selection, and they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals, stimuli's perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
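As a purely illustrative sketch (the reaction-time values below are hypothetical, not data from the study), the behavioural spatial-cueing measure of capture reduces to a difference of condition means:

```python
# Hypothetical illustration of a reaction-time spatial cueing ("capture") index:
# capture by a distractor shows up as faster responses when the target later
# appears at the distractor's location than at the opposite location.
import numpy as np

rt_cued = np.array([412.0, 398.0, 430.0, 405.0])    # ms, target at distractor location
rt_uncued = np.array([455.0, 441.0, 470.0, 452.0])  # ms, target at opposite location

capture_ms = rt_uncued.mean() - rt_cued.mean()  # positive value indicates capture
print(capture_ms)  # prints 43.25
```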


Subject(s)
Attention/physiology, Visual Perception/physiology, Adult, Cues, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Motivation, Reaction Time, Time Perception, Young Adult
2.
J Neurosci ; 40(29): 5604-5615, 2020 07 15.
Article in English | MEDLINE | ID: mdl-32499378

ABSTRACT

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate versus those that are inanimate. In addition, many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects-which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. 
Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
SIGNIFICANCE STATEMENT: Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
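The multivariate logic behind such EEG decoding can be sketched with synthetic data and a deliberately simple nearest-class-mean classifier (an assumption made here for illustration; the study itself used representational similarity analysis on real EEG):

```python
# Minimal sketch of pairwise decoding of two object exemplars from multichannel
# EEG patterns. The data are synthetic and the nearest-mean classifier stands in
# for the more sophisticated analyses used in the actual study.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 40, 64
exemplar_a = rng.normal(size=(n_trials, n_channels)) + 0.5  # e.g. audiovisual trials
exemplar_b = rng.normal(size=(n_trials, n_channels))

# Split-half cross-validation: class means from the first half, test on the second
mu_a, mu_b = exemplar_a[:20].mean(axis=0), exemplar_b[:20].mean(axis=0)
test_x = np.vstack([exemplar_a[20:], exemplar_b[20:]])
truth = np.repeat([0, 1], 20)

# Assign each test trial to the nearer class mean (Euclidean distance)
d_a = np.linalg.norm(test_x - mu_a, axis=1)
d_b = np.linalg.norm(test_x - mu_b, axis=1)
preds = (d_b < d_a).astype(int)

accuracy = (preds == truth).mean()  # well above the 0.5 chance level here
```

Higher cross-validated accuracy for audiovisual than for unisensory patterns would correspond, in this toy setting, to the enhanced encoding the study reports.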


Subject(s)
Auditory Perception/physiology, Brain/physiology, Recognition, Psychology/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Photic Stimulation, Young Adult
3.
Neuropsychologia ; 144: 107498, 2020 07.
Article in English | MEDLINE | ID: mdl-32442445

ABSTRACT

Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Sound, Visual Cortex/physiology, Acoustic Stimulation, Acoustics, Adult, Attentional Bias, Electroencephalography, Female, Humans, Male, Middle Aged, Young Adult
4.
J Cogn Neurosci ; 31(3): 360-376, 2019 03.
Article in English | MEDLINE | ID: mdl-29488852

ABSTRACT

Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role in both the behavioral/perceptual realms and in the neural correlates of these processes, but substantially less is known when compared with group-averaged data. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital to frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Discrimination, Psychological/physiology, Evoked Potentials/physiology, Motion Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Electroencephalography, Female, Humans, Individuality, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
5.
Cereb Cortex ; 29(2): 475-484, 2019 02 01.
Article in English | MEDLINE | ID: mdl-29365070

ABSTRACT

The perception of an acoustic rhythm is invariant to the absolute temporal intervals constituting a sound sequence. It is unknown where in the brain temporal Gestalt, the percept emerging from the relative temporal proximity between acoustic events, is encoded. Two different relative temporal patterns, each induced by three experimental conditions with different absolute temporal patterns as sensory basis, were presented to participants. A linear support vector machine classifier was trained to differentiate activation patterns in functional magnetic resonance imaging data to the two different percepts. Across the sensory constituents the classifier decoded which percept was perceived. A searchlight analysis localized activation patterns specific to the temporal Gestalt bilaterally to the temporoparietal junction, including the planum temporale and supramarginal gyrus, and unilaterally to the right inferior frontal gyrus (pars opercularis). We show that auditory areas not only process absolute temporal intervals, but also integrate them into percepts of Gestalt and that encoding of these percepts persists in high-level associative areas. The findings complement existing knowledge regarding the processing of absolute temporal patterns to the processing of relative temporal patterns relevant to the sequential binding of perceptual elements into Gestalt.
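The classification step can be sketched, under the assumption of synthetic "voxel" patterns, with a cross-validated linear SVM (scikit-learn shown for illustration; the authors' actual fMRI searchlight pipeline is not reproduced here):

```python
# Hedged sketch of linear-SVM decoding of two rhythm percepts from voxel
# patterns. The data are synthetic; only the decoding logic mirrors the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per_class, n_voxels = 30, 50
percept_1 = rng.normal(size=(n_per_class, n_voxels)) + 0.6  # temporal Gestalt 1
percept_2 = rng.normal(size=(n_per_class, n_voxels))        # temporal Gestalt 2

X = np.vstack([percept_1, percept_2])
y = np.repeat([0, 1], n_per_class)

# 5-fold cross-validated decoding accuracy; above-chance accuracy indicates that
# the activation patterns carry information about the perceived temporal Gestalt
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
```

In a searchlight analysis, the same decoder is run repeatedly on small spherical neighbourhoods of voxels, mapping where in the brain the percept is decodable.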


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Time Perception/physiology, Adult, Female, Humans, Male, Photic Stimulation/methods, Random Allocation, Young Adult
6.
J Cogn Neurosci ; 31(3): 412-430, 2019 03.
Article in English | MEDLINE | ID: mdl-30513045

ABSTRACT

In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top-down audiovisual object representations ("attentional templates") while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via "the N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150-300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain (and thus cognitive) mechanisms underlying top-down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pcs. In the N2pc time window (170-270 msec), color cues elicited brain responses differing in strength and topography. This latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top-down object representations, and so not only by separate sensory-specific top-down feature templates (as suggested by traditional N2pc analyses).
We discuss how the electrical neuroimaging approach can aid research on top-down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.


Subject(s)
Attention/physiology, Brain/physiology, Cognition/physiology, Visual Perception/physiology, Adult, Cues, Electroencephalography, Female, Humans, Male, Neuroimaging, Photic Stimulation, Reaction Time/physiology, Young Adult
7.
Neuroimage ; 179: 480-488, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29959049

ABSTRACT

Everyday vision includes the detection of stimuli, figure-ground segregation, as well as object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset and originating within lateral-occipital cortices (the "IC effect"). Presently, visual completion processes supporting IC perception are considered exclusively visual; any influence from other sensory modalities is currently unknown. It is now well-established that multisensory processes can influence both low-level vision (e.g. detection) as well as higher-level object recognition. By contrast, it is unknown if mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and resulted in the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), inferior parietal lobe (IPL), as well as primary visual cortex (V1) was enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement.
These results have important implications for the design of treatments and/or visual aids for low-vision patients.


Subject(s)
Brain/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Photic Stimulation, Sound, Young Adult
8.
Sci Rep ; 8(1): 8901, 2018 06 11.
Article in English | MEDLINE | ID: mdl-29891964

ABSTRACT

Multisensory information typically confers neural and behavioural advantages over unisensory information. We used a simple audio-visual detection task to compare healthy young (HY), healthy older (HO) and mild-cognitive impairment (MCI) individuals. Neuropsychological tests assessed individuals' learning and memory impairments. First, we provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals. The pattern of sensory dominance shifted with healthy and abnormal aging to favour a propensity of auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger only in healthy older than younger individuals who were also visually-dominant. Second, we demonstrate that the multisensory detection task offers benefits as a time- and resource-economic MCI screening tool. Receiver operating characteristic (ROC) analysis demonstrated that MCI diagnosis could be reliably achieved based on the combination of indices of multisensory integration together with indices of sensory dominance. Our findings showcase the importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, our findings open an exciting possibility for multisensory detection tasks to be used as a cost-effective screening tool. These findings clarify relationships between multisensory and memory functions in aging, while offering new avenues for improved dementia diagnostics.
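The ROC logic can be illustrated with synthetic index values (the group distributions below are assumptions, not the study's data): the area under the ROC curve equals the probability that a randomly chosen patient's combined index exceeds a randomly chosen control's.

```python
# Illustrative ROC/AUC sketch for a screening index separating MCI patients
# from healthy older controls. The group distributions are invented; only the
# AUC computation (Mann-Whitney formulation) is the point of the example.
import numpy as np

rng = np.random.default_rng(2)
controls = rng.normal(0.0, 1.0, size=50)  # combined multisensory index, controls
patients = rng.normal(1.2, 1.0, size=40)  # combined multisensory index, MCI

# AUC = P(patient score > control score), computed over all patient-control pairs
auc = (patients[:, None] > controls[None, :]).mean()
```

An AUC near 0.5 would mean the index carries no diagnostic information; values approaching 1.0 indicate increasingly reliable separation of the two groups.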


Subject(s)
Aging/pathology, Aging/physiology, Cognitive Dysfunction/diagnosis, Mass Screening/methods, Neuropsychological Tests, Acoustic Stimulation, Adult, Aged, Aged, 80 and over, Auditory Perception, Female, Humans, Male, Photic Stimulation, ROC Curve, Visual Perception, Young Adult
9.
Neuroimage ; 176: 29-40, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29678759

ABSTRACT

Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant; the only manipulation, varied across blocks, was the sound dimension that participants had to attend to. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter of which necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials, Auditory, Female, Humans, Male, Middle Aged, Sound Localization/physiology, Sound Spectrography, Young Adult
10.
Schizophr Res ; 191: 80-86, 2018 01.
Article in English | MEDLINE | ID: mdl-28711476

ABSTRACT

Sensory impairments constitute core dysfunctions in schizophrenia. In the auditory modality, impaired mismatch negativity (MMN) has been observed in chronic schizophrenia and may reflect N-methyl-d-aspartate (NMDA) hypo-function, consistent with models of schizophrenia based on oxidative stress. Moreover, a recent study demonstrated deficits in the N100 component of the auditory evoked potential (AEP) in early psychosis patients. Previous work has shown that add-on administration of the glutathione precursor N-acetyl-cysteine (NAC) improves the MMN and clinical symptoms in chronic schizophrenia. To date, it remains unknown whether NAC also improves general low-level auditory processing and if its efficacy would extend to early-phase psychosis. We addressed these issues with a randomized, double-blind study of a small sample (N=15) of early psychosis (EP) patients and 18 healthy controls from whom AEPs were recorded during an active, auditory oddball task. Patients were recorded twice: once prior to NAC/placebo administration and once after six months of treatment. The N100 component was significantly smaller in patients before NAC administration versus controls. Critically, NAC administration improved this AEP deficit. Source estimations revealed increased activity in the left temporo-parietal lobe in patients after NAC administration. Overall, the data from this pilot study, which call for replication in a larger sample, indicate that NAC improves low-level auditory processing in early psychosis.


Subject(s)
Acetylcysteine/therapeutic use, Antipsychotic Agents/therapeutic use, Contingent Negative Variation/drug effects, Evoked Potentials, Auditory/drug effects, Psychotic Disorders/drug therapy, Acetylcysteine/pharmacology, Acoustic Stimulation, Adult, Antipsychotic Agents/pharmacology, Double-Blind Method, Electroencephalography, Female, Follow-Up Studies, Humans, Male, Pilot Projects, Young Adult
11.
Psychophysiology ; 54(11): 1663-1675, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28752567

ABSTRACT

Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Evoked Potentials, Auditory/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials, Visual/physiology, Female, Humans, Male, Photic Stimulation, Young Adult
12.
Arch Phys Med Rehabil ; 98(8): 1628-1635.e2, 2017 08.
Article in English | MEDLINE | ID: mdl-28499657

ABSTRACT

OBJECTIVE: To evaluate the effects of electrically assisted movement therapy (EAMT) in which patients use functional electrical stimulation, modulated by a custom device controlled through the patient's unaffected hand, to produce or assist task-specific upper limb movements, which enables them to engage in intensive goal-oriented training. DESIGN: Randomized, crossover, assessor-blinded, 5-week trial with follow-up at 18 weeks. SETTING: Rehabilitation university hospital. PARTICIPANTS: Patients with chronic, severe stroke (N=11; mean age, 47.9y) more than 6 months poststroke (mean time since event, 46.3mo). INTERVENTIONS: Both EAMT and the control intervention (dose-matched, goal-oriented standard care) consisted of 10 sessions of 90 minutes per day, 5 sessions per week, for 2 weeks. After the first 10 sessions, group allocation was crossed over, and patients received a 1-week therapy break before receiving the new treatment. MAIN OUTCOME MEASURES: Fugl-Meyer Motor Assessment for the Upper Extremity, Wolf Motor Function Test, spasticity, and 28-item Motor Activity Log. RESULTS: Forty-four individuals were recruited, of whom 11 were eligible and participated. Five patients received the experimental treatment before standard care, and 6 received standard care before the experimental treatment. EAMT produced higher improvements in the Fugl-Meyer scale than standard care (P<.05). Median improvements were 6.5 Fugl-Meyer points and 1 Fugl-Meyer point after the experimental treatment and standard care, respectively. The improvement was also significant in subjective reports of quality of movement and amount of use of the affected limb during activities of daily living (P<.05). CONCLUSIONS: EAMT produces a clinically important impairment reduction in stroke patients with chronic, severe upper limb paresis.


Subject(s)
Electric Stimulation Therapy/methods, Neural Prostheses, Paresis/rehabilitation, Stroke Rehabilitation/methods, Upper Extremity, Activities of Daily Living, Adolescent, Adult, Aged, Chronic Disease, Cross-Over Studies, Electric Stimulation Therapy/instrumentation, Female, Humans, Male, Middle Aged, Pilot Projects, Recovery of Function, Severity of Illness Index, Single-Blind Method, Stroke Rehabilitation/instrumentation, Young Adult
13.
Neuroimage ; 125: 996-1004, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26564531

ABSTRACT

Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally-presented sounds. This has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~250ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was nevertheless relevant in virtually all prior studies where the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of cross-modal activations of visual cortices. 128-channel ERP data from healthy participants were analysed within an electrical neuroimaging framework. The timing, topography, and localisation resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds were critically dependent on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in cognitive sciences.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Evoked Potentials/physiology, Visual Cortex/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Young Adult
14.
Hum Brain Mapp ; 37(1): 273-88, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26466522

ABSTRACT

This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Evoked Potentials/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Female, Humans, Male, Nonlinear Dynamics, Photic Stimulation, Reaction Time/physiology, Young Adult
15.
Neuroimage ; 118: 163-73, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26070264

ABSTRACT

Recognition of environmental sounds is believed to proceed through discrimination steps from broad to more narrow categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories, or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; their performance improved significantly for trained (T) but not for control (C) species, which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre- vs. post-training changes in AEPs differed significantly between T and C species i) at 206-232 ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human vs. animal vocalizations. Moreover, the training-induced plasticity is reflected in the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs, however, also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from lower-level feature analysis, such as apperception. We therefore suggest that access to objects within an auditory semantic category differs with, and depends on, the subject's level of expertise.
More specifically, correct intra-categorical auditory discrimination for untrained items follows the temporal hierarchy and occurs at a late stage of semantic processing. In contrast, correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with human vs. animal vocalization discrimination, and involves a parallel semantic pathway requiring expertise.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Discrimination, Psychological/physiology , Learning/physiology , Semantics , Acoustic Stimulation , Adult , Animals , Electroencephalography , Evoked Potentials, Auditory , Female , Frontal Lobe/physiology , Gyrus Cinguli/physiology , Humans , Male , Pattern Recognition, Physiological/physiology , Temporal Lobe/physiology , Vocalization, Animal , Young Adult
16.
Neuroimage ; 113: 133-42, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25812716

ABSTRACT

Although neuroimaging research has evidenced specific responses to visual food stimuli based on their nutritional quality (e.g., energy density, fat content), brain processes underlying portion size selection remain largely unexplored. We identified spatio-temporal brain dynamics in response to meal images varying in portion size during a task of ideal portion selection for prospective lunch intake and expected satiety. Brain responses to meal portions judged by the participants as 'too small', 'ideal' and 'too big' were measured by means of electroencephalographic (EEG) recordings in 21 normal-weight women. During an early stage of meal viewing (105-145 ms), the head-surface global electric field strength (quantified via global field power; GFP) increased incrementally as portion judgments ranged from 'too small' to 'too big'. Estimations of neural source activity revealed that brain regions underlying this effect were located in the insula, middle frontal gyrus and middle temporal gyrus, and are similar to those reported in previous studies investigating responses to changes in food nutritional content. In contrast, during a later stage (230-270 ms), GFP was maximal for the 'ideal' relative to the 'non-ideal' portion sizes. Greater neural source activity to 'ideal' vs. 'non-ideal' portion sizes was observed in the inferior parietal lobule, superior temporal gyrus and mid-posterior cingulate gyrus. Collectively, our results provide evidence that several brain regions involved in attention and adaptive behavior track 'ideal' meal portion sizes as early as 230 ms during visual encounter. That is, responses do not show an increase paralleling the amount of food viewed (and, by extension, the amount of reward), but are shaped by regulatory mechanisms.
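Global field power (GFP), the measure used above to quantify global electric field strength, is the spatial standard deviation of the average-referenced scalp potential across electrodes at each time point. A minimal sketch of the computation (the toy data are purely illustrative):

```python
import numpy as np

def global_field_power(erp):
    """Global Field Power (Lehmann & Skrandies): the spatial standard
    deviation of the scalp potential across electrodes at each time point.

    erp: array of shape (n_channels, n_timepoints).
    Returns an array of shape (n_timepoints,).
    """
    # Re-reference to the average across electrodes, then take the
    # root-mean-square over channels at each time point
    avg_ref = erp - erp.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

# Toy example: 4 electrodes, 3 time points
erp = np.array([[ 1.0,  2.0, 0.0],
                [-1.0, -2.0, 0.0],
                [ 1.0,  2.0, 0.0],
                [-1.0, -2.0, 0.0]])
print(global_field_power(erp))  # [1. 2. 0.]
```

Larger GFP at a given latency indicates a stronger momentary electric field over the scalp, independent of where on the head that field is expressed.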


Subject(s)
Brain/physiology , Eating/physiology , Eating/psychology , Meals/psychology , Adult , Attitude , Body Weight , Cerebral Cortex/physiology , Electroencephalography , Female , Frontal Lobe/physiology , Humans , Judgment , Nutritive Value , Parietal Lobe/physiology , Satiety Response/physiology , Temporal Lobe/physiology
17.
J Clin Neurophysiol ; 31(4): 356-61, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25083848

ABSTRACT

PURPOSE: EEG and somatosensory evoked potentials (SEPs) are highly predictive of poor outcome after cardiac arrest; however, their accuracy for predicting good recovery is low. We evaluated whether adding an automated mismatch negativity-based auditory discrimination paradigm (ADP) to EEG and SEPs improves prediction of awakening. METHODS: EEG and ADP were prospectively recorded in 30 adults during therapeutic hypothermia and in normothermia. We studied the progression of auditory discrimination in single-trial multivariate analyses from therapeutic hypothermia to normothermia, and its correlation with outcome at 3 months, assessed with cerebral performance categories (CPC). RESULTS: At 3 months, 18 of 30 patients (60%) survived; 5 had severe neurologic impairment (CPC = 3) and 13 had good recovery (CPC = 1-2). All 10 subjects showing improvements of auditory discrimination from therapeutic hypothermia to normothermia regained consciousness: ADP was 100% predictive of awakening. The addition of ADP significantly improved mortality prediction (area under the curve, 0.77 for the standard model including clinical examination, EEG, and SEPs, versus 0.86 after adding ADP, P = 0.02). CONCLUSIONS: This automated ADP significantly improves early coma prognostic accuracy after cardiac arrest and therapeutic hypothermia. The progression of auditory discrimination is strongly predictive of favorable recovery and appears complementary to existing prognosticators of poor outcome. Before routine implementation, validation on larger cohorts is warranted.
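The area-under-the-curve (AUC) values reported above summarize how well each prognostic model's risk scores separate outcomes. AUC can be computed via the Mann-Whitney rank identity; the sketch below uses invented risk scores for illustration only (the patient data, scores, and variable names are assumptions, and the published 0.77 vs. 0.86 comparison would additionally require a paired significance test):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U identity:
    AUC = P(score of a positive case > score of a negative case),
    counting ties as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count score pairs where the positive case outranks the negative
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical risk scores for 6 patients (label 1 = favorable outcome)
labels = [1, 1, 1, 0, 0, 0]
base_model = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # standard predictors only
full_model = [0.9, 0.8, 0.6, 0.5, 0.3, 0.2]   # + auditory discrimination
print(round(float(auc(base_model, labels)), 3), float(auc(full_model, labels)))
# 0.889 1.0
```

A higher AUC for the augmented model, as in the hypothetical output above, mirrors the direction of the improvement reported in the abstract.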


Subject(s)
Coma/diagnosis , Coma/etiology , Contingent Negative Variation/physiology , Heart Arrest/complications , Heart Arrest/therapy , Hypothermia, Induced/methods , Acoustic Stimulation , Aged , Algorithms , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Prognosis , Retrospective Studies
18.
J Cogn Neurosci ; 25(7): 1122-35, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23384192

ABSTRACT

Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception post-sound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Bias , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Female , Functional Laterality , Humans , Male , Phosphenes/physiology , Photic Stimulation , Signal Detection, Psychological , Transcranial Magnetic Stimulation , Young Adult
19.
Neuroimage ; 73: 40-9, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23357069

ABSTRACT

For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked.


Subject(s)
Auditory Perception/physiology , Recognition, Psychology/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Data Interpretation, Statistical , Electroencephalography , Evoked Potentials, Auditory/physiology , Humans , Magnetic Resonance Imaging , Male , Psychomotor Performance/physiology , Reaction Time/physiology , Software , Young Adult
20.
Neuroimage ; 62(3): 1478-88, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22609795

ABSTRACT

Multisensory experiences influence subsequent memory performance and brain responses. Studies have thus far concentrated on semantically congruent pairings, leaving unresolved the influence of stimulus pairing and memory sub-types. Here, we paired images with unique, meaningless sounds during a continuous recognition task to determine if purely episodic, single-trial multisensory experiences can incidentally impact subsequent visual object discrimination. Psychophysics and electrical neuroimaging analyses of visual evoked potentials (VEPs) compared responses to repeated images either paired or not with a meaningless sound during initial encounters. Recognition accuracy was significantly impaired for images initially presented as multisensory pairs and could not be explained in terms of differential attention or transfer of effects from encoding to retrieval. VEP modulations occurred at 100-130 ms and 270-310 ms and stemmed from topographic differences indicative of network configuration changes within the brain. Distributed source estimations localized the earlier effect to regions of the right posterior superior temporal gyrus (STG) and the later effect to regions of the middle temporal gyrus (MTG). Responses in these regions were stronger for images previously encountered as multisensory pairs. Only the later effect correlated with performance, such that greater MTG activity in response to repeated visual stimuli was linked with greater performance decrements. The present findings suggest that brain networks involved in this discrimination may critically depend on whether multisensory events facilitate or impair later visual memory performance. More generally, the data support models whereby effects of multisensory interactions persist to incidentally affect subsequent behavior as well as visual processing during its initial stages.
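The "topographic differences indicative of network configuration changes" referenced above are typically quantified in electrical neuroimaging with global map dissimilarity (DISS): the root-mean-square difference between two GFP-normalized scalp maps, which is insensitive to overall response strength. A minimal sketch, with illustrative toy maps (the exact analysis pipeline in the study may differ):

```python
import numpy as np

def gfp(v):
    """Global field power of a single average-referenced scalp map."""
    v = v - v.mean()
    return np.sqrt((v ** 2).mean())

def dissimilarity(u, v):
    """Global map dissimilarity (DISS) between two scalp maps:
    the GFP of the difference of the GFP-normalized maps.
    Ranges from 0 (identical topographies) to 2 (polarity-inverted)."""
    u = (u - u.mean()) / gfp(u)   # normalize away response strength
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(((u - v) ** 2).mean())

# Toy maps over 4 electrodes
m = np.array([1.0, -1.0, 2.0, -2.0])
print(dissimilarity(m, 3 * m))  # ~0: same topography, different strength
print(dissimilarity(m, -m))     # ~2: maximal, polarity-inverted topography
```

Because strength is normalized out, a non-zero DISS between conditions indicates a change in the spatial configuration of the underlying generators rather than a mere gain difference.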


Subject(s)
Brain Mapping/methods , Brain/physiology , Evoked Potentials, Visual/physiology , Memory/physiology , Recognition, Psychology/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Electroencephalography , Female , Humans , Image Interpretation, Computer-Assisted , Male , Photic Stimulation , Visual Perception/physiology , Young Adult