Results 1 - 12 of 12
1.
Neuroimage; 244: 118556, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34492292

ABSTRACT

Research on attentional control has largely focused on single senses and the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both of which likely influence attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via the distractor's colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent), and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and using a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven brain response modulations occurred long before the N2pc time-window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and that they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals, stimuli's perceptual salience, meaning and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
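To make the two key measures above concrete, here is a minimal, illustrative sketch of how a reaction-time spatial cueing effect and a canonical N2pc (contralateral minus ipsilateral posterior ERP) are typically quantified. It is not the authors' analysis pipeline; the arrays, channel pair and time window are assumptions, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Behavioural spatial cueing effect: RTs for cued vs. uncued target locations ---
rt_cued = rng.normal(480, 40, size=200)    # ms, target appears at the distractor's location
rt_uncued = rng.normal(510, 40, size=200)  # ms, target appears elsewhere
cueing_effect = rt_uncued.mean() - rt_cued.mean()  # positive values index attentional capture
print(f"Spatial cueing effect: {cueing_effect:.1f} ms")

# --- Canonical N2pc: contralateral minus ipsilateral posterior ERP ---
# erp: (n_trials, n_channels, n_times); two channels stand in for a lateral
# posterior pair (e.g. PO7/PO8), index 0 = left hemisphere, 1 = right hemisphere.
n_trials = 200
times = np.arange(-0.1, 0.5, 0.002)                         # s, relative to distractor onset
erp = rng.normal(0.0, 1.0, size=(n_trials, 2, times.size))  # microvolts (synthetic)
distractor_side = rng.integers(0, 2, size=n_trials)         # 0 = left visual field, 1 = right

contra = 1 - distractor_side   # hemisphere opposite the distractor
ipsi = distractor_side         # hemisphere on the same side as the distractor
trials = np.arange(n_trials)
n2pc_wave = erp[trials, contra].mean(axis=0) - erp[trials, ipsi].mean(axis=0)

window = (times >= 0.18) & (times <= 0.30)   # a typical N2pc time window
print(f"N2pc mean amplitude: {n2pc_wave[window].mean():.2f} uV")
```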


Subjects
Attention/physiology , Visual Perception/physiology , Adult , Cues (Psychology) , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Motivation , Reaction Time , Time Perception , Young Adult
2.
J Cogn Neurosci; 31(3): 412-430, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30513045

ABSTRACT

In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top-down audiovisual object representations ("attentional templates") while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via "the N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150-300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top-down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pcs. In the N2pc time window (170-270 msec), color cues elicited brain responses differing in both strength and topography. This latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top-down object representations, and not solely by separate sensory-specific top-down feature templates (as suggested by traditional N2pc analyses). We discuss how the electrical neuroimaging approach can aid research on top-down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
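The "global features of the electrical field" referred to above are commonly operationalised as Global Field Power (response strength) and global map dissimilarity (topography, i.e. the configuration of active sources). The sketch below shows these two standard measures on synthetic, average-referenced ERPs; it illustrates the general electrical neuroimaging quantities rather than the paper's actual analysis.

```python
import numpy as np

def gfp(erp):
    """Global Field Power: spatial standard deviation across electrodes at each time point."""
    return erp.std(axis=0)

def dissimilarity(erp_a, erp_b):
    """Global map dissimilarity between two conditions (0 = identical maps, 2 = inverted maps)."""
    norm_a = erp_a / gfp(erp_a)   # scale each time point's map to unit strength
    norm_b = erp_b / gfp(erp_b)
    return np.sqrt(((norm_a - norm_b) ** 2).mean(axis=0))

rng = np.random.default_rng(1)
erp_visual = rng.normal(0, 1, size=(129, 300))        # e.g. 129 channels x 300 samples (synthetic)
erp_audiovisual = rng.normal(0, 1, size=(129, 300))
erp_visual -= erp_visual.mean(axis=0)                 # re-reference to the average of all channels
erp_audiovisual -= erp_audiovisual.mean(axis=0)

print("Mean GFP difference (strength):", (gfp(erp_audiovisual) - gfp(erp_visual)).mean())
print("Mean topographic dissimilarity:", dissimilarity(erp_visual, erp_audiovisual).mean())
```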


Subjects
Attention/physiology , Brain/physiology , Cognition/physiology , Visual Perception/physiology , Adult , Cues (Psychology) , Electroencephalography , Female , Humans , Male , Neuroimaging , Photic Stimulation , Reaction Time/physiology , Young Adult
3.
Neuroimage; 181: 182-189, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30008430

ABSTRACT

Illusory contours (ICs) are perceptions of visual borders despite absent contrast gradients. The psychophysical and neurobiological mechanisms of IC processes have been studied across species and with diverse brain imaging/mapping techniques. Nonetheless, debate continues regarding whether IC sensitivity results from a (presumably) feedforward process within low-level visual cortices (V1/V2) or whether ICs are instead processed first within higher-order brain regions, such as lateral occipital cortices (LOC). Studies in animal models, which generally favour a feedforward mechanism within V1/V2, have typically involved stimuli inducing IC lines. By contrast, studies in humans generally favour a mechanism where IC sensitivity is mediated by LOC and have typically involved stimuli inducing IC forms or shapes. Thus, the particular stimulus features used may strongly contribute to the model of IC sensitivity supported. To address this, we recorded visual evoked potentials (VEPs) while presenting human observers with an array of 10 inducers within the central 5°, two of which could be oriented to induce an IC line on a given trial. VEPs were analysed using an electrical neuroimaging framework. Sensitivity to the presence vs. absence of centrally-presented IC lines was first apparent at ∼200 ms post-stimulus onset and was evident as topographic differences across conditions. We also localized these differences to the LOC. The timing and localization of these effects are consistent with a model of IC sensitivity commencing within higher-level visual cortices. We propose that prior observations of effects within lower-tier cortices (V1/V2) are the result of feedback from IC sensitivity that originates instead within higher-tier cortices (LOC).


Subjects
Contrast Sensitivity/physiology , Electroencephalography/methods , Evoked Potentials, Visual/physiology , Functional Neuroimaging/methods , Illusions/physiology , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Adult , Female , Humans , Male , Occipital Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging , Young Adult
4.
Neuroimage; 179: 480-488, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-29959049

ABSTRACT

Everyday vision includes the detection of stimuli, figure-ground segregation, and object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset originating within lateral-occipital cortices (the IC effect). Visual completion processes supporting IC perception are currently considered exclusively visual; any influence from other sensory modalities is unknown. It is now well-established that multisensory processes can influence both low-level vision (e.g. detection) and higher-level object recognition. By contrast, it is unknown whether mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and resulted in the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), the inferior parietal lobe (IPL), and primary visual cortex (V1) was enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when they were absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement. These results have important implications for the design of treatments and/or visual aids for low-vision patients.
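As a concrete illustration of the correlation finding above, the following sketch correlates trialwise response amplitudes between two regions separately for sound-present and sound-absent conditions. The region names, trial counts and values are hypothetical, and the study's actual source estimation is not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_trials = 100

# Synthetic trialwise amplitudes: a shared component makes the two regions
# covary in the sound-present condition, but not in the sound-absent one.
shared = rng.normal(0, 1, n_trials)
loc_sound = shared + rng.normal(0, 0.5, n_trials)    # e.g. LOC, sounds present
v1_sound = shared + rng.normal(0, 0.5, n_trials)     # e.g. V1, sounds present
loc_nosound = rng.normal(0, 1, n_trials)             # independent when sounds absent
v1_nosound = rng.normal(0, 1, n_trials)

r_sound, p_sound = pearsonr(loc_sound, v1_sound)
r_nosound, p_nosound = pearsonr(loc_nosound, v1_nosound)
print(f"Sound present: r = {r_sound:.2f} (p = {p_sound:.3f})")
print(f"Sound absent:  r = {r_nosound:.2f} (p = {p_nosound:.3f})")
```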


Subjects
Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Photic Stimulation , Sound , Young Adult
5.
NPJ Sci Learn; 8(1): 61, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38102127

ABSTRACT

Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means of learning and translating 2D images into, on the one hand, 3D reconstructions of layouts and, on the other hand, navigated actions within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, and can likely be further applied in the rehabilitation of spatial functions and the mitigation of visual impairments.

6.
Sci Rep; 12(1): 9728, 2022 Jun 16.
Article in English | MEDLINE | ID: mdl-35710569

ABSTRACT

Dashboard-mounted touchscreen tablets are now common in vehicles. Screen/phone use in cars likely shifts drivers' attention away from the road and contributes to the risk of accidents. However, vision is subject to multisensory influences from the other senses. Haptics may help maintain or even increase visual attention to the road, while still allowing for reliable dashboard control. Here, we provide a proof-of-concept for the effectiveness of digital haptic technologies (hereafter digital haptics), which use ultrasonic vibrations on a tablet screen to render haptic perceptions. Healthy human participants (N = 25) completed a divided-attention paradigm. The primary task was a centrally-presented visual conjunction search task, and the secondary task entailed control of laterally-presented sliders on the tablet. Sliders were presented visually, haptically, or visuo-haptically and were vertical, horizontal or circular. We reasoned that the primary task would be performed best when the secondary task was haptic-only. Indeed, reaction times (RTs) on the visual search task were fastest when the tablet task was haptic-only. This was not due to a speed-accuracy trade-off: there was no evidence that visual search accuracy was modulated by the modality of the tablet task. These results provide the first quantitative support for introducing digital haptics into vehicles and similar contexts.
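A minimal sketch of the dual-task comparison described above: mean visual-search RT and accuracy per tablet-task modality, which is how a speed-accuracy trade-off would be ruled out. The data frame and column names are assumptions for illustration, not the study's actual data.

```python
import pandas as pd

# Hypothetical trial-level data: tablet-task modality, visual-search RT, correctness.
data = pd.DataFrame({
    "tablet_modality": ["haptic", "visual", "visuo-haptic"] * 4,
    "search_rt_ms":    [720, 780, 765, 705, 790, 770, 715, 775, 760, 710, 785, 772],
    "search_correct":  [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1],
})

summary = (data.groupby("tablet_modality")
               .agg(mean_rt_ms=("search_rt_ms", "mean"),
                    accuracy=("search_correct", "mean")))
# Fastest RTs with comparable (or better) accuracy => no speed-accuracy trade-off.
print(summary)
```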


Subjects
Haptic Technology , Visual Perception , Humans , Vision, Ocular
7.
Front Hum Neurosci; 15: 702520, 2021.
Article in English | MEDLINE | ID: mdl-34489663

ABSTRACT

The human brain has the astonishing capacity to integrate streams of sensory information from the environment and to form predictions about future events automatically. Although predictive coding was initially developed for visual processing, the bulk of subsequent research has focused on auditory processing, with the well-known mismatch negativity as possibly the most studied signature of a surprise or prediction error (PE) signal. Auditory PEs are present during various states of consciousness. Intriguingly, their presence and characteristics have been linked with residual levels of consciousness and the return of awareness. In this review, we first give an overview of the neural substrates of predictive processes in the auditory modality and their relation to consciousness. Then, we focus on different states of consciousness - wakefulness, sleep, anesthesia, coma, meditation, and hypnosis - and on what predictive processing has been able to disclose about brain functioning in such states. We review studies investigating how the neural signatures of auditory predictions are modulated by states of reduced or absent consciousness. As a future outlook, we propose combining electrophysiological and computational techniques to investigate which facets of sensory predictive processes are maintained when consciousness fades.
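For readers unfamiliar with the mismatch negativity mentioned above, the sketch below shows the basic logic: the prediction-error signature is the deviant-minus-standard ERP difference wave, typically measured at fronto-central sites around 100-250 ms after sound onset. The data are synthetic and the channel and window choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.arange(-0.1, 0.4, 0.002)                      # s, relative to sound onset
erp_standard = rng.normal(0, 1, size=times.size)         # e.g. Fz, frequent (standard) tone
erp_deviant = rng.normal(0, 1, size=times.size)          # e.g. Fz, rare (deviant) tone

mmn = erp_deviant - erp_standard                          # the mismatch (difference) wave
window = (times >= 0.10) & (times <= 0.25)                # a typical MMN window
print(f"MMN mean amplitude: {mmn[window].mean():.2f} uV") # more negative = larger prediction error
```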

8.
Dev Cogn Neurosci; 48: 100930, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33561691

ABSTRACT

Outside the laboratory, people need to pay attention to relevant objects that are typically multisensory, but it remains poorly understood how the underlying neurocognitive mechanisms develop. We investigated when adult-like mechanisms controlling the attentional selection of visual and multisensory objects emerge across childhood. Five-, 7-, and 9-year-olds were compared with adults in their performance on a computer game-like multisensory spatial cueing task, while 129-channel EEG was simultaneously recorded. Markers of attentional control were behavioural spatial cueing effects and the N2pc ERP component (analysed traditionally and using a multivariate electrical neuroimaging framework). In behaviour, adult-like visual attentional control was present from age 7 onwards, whereas multisensory control was absent in all child groups. In EEG, multivariate analyses of the activity over the N2pc time-window revealed stable brain activity patterns in children. Adult-like visual-attentional control EEG patterns were present from age 7 onwards, while multisensory control activity patterns were found only in 9-year-olds (although behavioural measures showed no effects). By combining rigorous yet naturalistic paradigms with multivariate signal analyses, we demonstrated that visual attentional control seems to reach an adult-like state at ∼7 years, before adult-like multisensory control, which emerges at ∼9 years. These results enrich our understanding of how attention in naturalistic settings develops.


Subjects
Neuroimaging , Adult , Auditory Perception , Child , Child, Preschool , Cognition , Cues (Psychology) , Electroencephalography , Female , Humans , Male , Photic Stimulation , Visual Perception , Young Adult
9.
Multisens Res; 34(1): 1-15, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-33706283

ABSTRACT

Illusory contours (ICs) are borders that are perceived in the absence of contrast gradients. Until recently, IC processes were considered exclusively visual in nature and presumed to be unaffected by information from other senses. Electrophysiological data in humans indicate that sounds can enhance IC processes. Despite this cross-modal enhancement being observed at the neurophysiological level, to date there has been no evidence that sounds directly improve behavioural performance in IC processing. We addressed this knowledge gap. Healthy adults (n = 15) discriminated instances when inducers were arranged to form an IC from instances when no IC was formed (NC). Inducers were low-contrast and masked, and there was continuous background acoustic noise throughout a block of trials. On half of the trials, i.e., independently of IC vs. NC, a 1000-Hz tone was presented synchronously with the inducer stimuli. Sound presence improved the accuracy of indicating when an IC was presented, but had no impact on performance with NC stimuli (a significant IC presence/absence × Sound presence/absence interaction). There was no evidence that this was due to general alerting or to a speed-accuracy trade-off (no main effect of sound presence on accuracy rates and no comparable significant interaction on reaction times). Moreover, sound presence increased sensitivity and reduced bias on the IC vs. NC discrimination task. These results demonstrate that multisensory processes augment mid-level visual functions, exemplified by IC processes. Aside from their impact on neurobiological and computational models of vision, our findings may prove clinically beneficial for low-vision or sight-restored patients.
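The "sensitivity and bias" measures reported above come from signal detection theory. Below is a minimal sketch computing d' (sensitivity) and criterion c (bias) from hit and false-alarm rates on an IC-present vs. IC-absent discrimination; the trial counts are invented for illustration.

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Small correction so proportions of exactly 0 or 1 do not yield infinite z-scores.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # response bias (0 = unbiased)
    return d_prime, criterion

# Hypothetical counts, e.g. IC trials with vs. without the tone.
print(dprime_criterion(hits=42, misses=18, false_alarms=12, correct_rejections=48))
print(dprime_criterion(hits=35, misses=25, false_alarms=13, correct_rejections=47))
```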


Subjects
Auditory Perception/physiology , Form Perception/physiology , Illusions/physiology , Adult , Electroencephalography , Evoked Potentials, Visual , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology , Young Adult
10.
Front Neurosci; 14: 197, 2020.
Article in English | MEDLINE | ID: mdl-32265628

ABSTRACT

In the event of visual impairment or blindness, information from other, intact senses can be used as a substitute to retrain (and, in extreme cases, replace) visual functions. Abilities including reading, mental representation of objects and spatial navigation can be performed using tactile information. Current technologies can convey only a restricted library of stimuli, either because they depend on real objects or because they rely on renderings with low-resolution layouts. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants could create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and could mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, they haptically explored on each trial one of the four letters, presented at 0°, 90°, 180°, or 270° rotation from upright, and indicated whether the letter was in normal or mirror-reversed form. Rotation angle impacted performance; greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli, an effect that was not modulated by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology for blind and visually-impaired users. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology. This technology may thus offer an innovative solution for mitigating impairments in the visually-impaired and for training skills that depend on mental representations and their spatial manipulation.

11.
Article in English | MEDLINE | ID: mdl-30930756

ABSTRACT

Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
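To illustrate the rotation analysis described above, the sketch below collapses presented angles onto angular deviation from upright (90° and 270° both deviate by 90°) and summarises RT and accuracy per deviation and letter form. The data frame and column names are hypothetical, not the study's data.

```python
import pandas as pd

trials = pd.DataFrame({
    "angle_deg": [0, 90, 180, 270, 0, 90, 180, 270],
    "mirrored":  [False, False, False, False, True, True, True, True],
    "rt_ms":     [900, 1150, 1400, 1120, 1050, 1300, 1380, 1290],
    "correct":   [1, 1, 0, 1, 1, 0, 0, 1],
})

# Angular deviation from upright: 0 -> 0, 90 -> 90, 180 -> 180, 270 -> 90.
trials["deviation_deg"] = trials["angle_deg"].apply(lambda a: min(a, 360 - a))

summary = (trials.groupby(["mirrored", "deviation_deg"])
                 .agg(mean_rt_ms=("rt_ms", "mean"), accuracy=("correct", "mean")))
# Mental rotation predicts slower and less accurate responses at larger deviations.
print(summary)
```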

12.
Neurosci Conscious; 2017(1): nix005, 2017.
Article in English | MEDLINE | ID: mdl-30042839

ABSTRACT

Hypnotic amnesia is a functional dissociation from awareness during which information from specific neural processes is unavailable to consciousness. We test the proposal that changes in topographic patterns of cortical oscillations in the upper-alpha (10-12 Hz) band selectively inhibit the recall of memories during hypnotic amnesia by blocking the availability of locally processed information at specific points in retrieval. Participants were prescreened for high or low hypnotic susceptibility. Following hypnotic induction, participants were presented with a series of 60 face stimuli and were required to identify affective expressions. Participants then received a suggestion for amnesia for these faces. They were next presented with a set of 30 old and 30 new faces and identified each as old or new. The amnesia suggestion was then lifted and recall was tested using the remaining 30 old faces and another 30 new faces. Exact Low Resolution Brain Electromagnetic Tomography source analyses are reported for 64-channel event-related electroencephalography recorded from high-susceptible participants showing reversible amnesia for old faces. For high-susceptible participants, the amnesia suggestion significantly increased the number of old faces wrongly identified, while for low-susceptible participants the suggestion increased the number of new faces wrongly identified. There were no differences between high- and low-susceptible participants following reversal of the suggestion. For previously seen faces that were wrongly identified, compared to new faces correctly identified, (late) evoked upper-alpha was significantly higher in right BA7, a region implicated in top-down executive control assisting the recall of visual information. Lagged nonlinear connectivity between cortical sources in upper-alpha in the same condition was significantly increased between right BA34 (parahippocampal gyrus) and right BAs 7, 20 and 22. Integration between these regions is essential for the recall of recently seen faces. During amnesia, spatial and temporal coordination of upper-alpha appears to suppress the integrated functioning of these regions (and hence recall). These patterns were absent after reversal of the amnesia suggestion.
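As a simple, sensor-level illustration of the frequency band at issue, the sketch below extracts upper-alpha (10-12 Hz) power from EEG epochs via a Welch spectrum. It does not reproduce the study's eLORETA source or lagged-connectivity analyses; the sampling rate, array shapes and data are assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 256                                               # Hz, assumed sampling rate
epochs = rng.normal(0, 1, size=(60, 64, 2 * fs))       # trials x channels x samples (synthetic)

freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)  # PSD per trial and channel
band = (freqs >= 10) & (freqs <= 12)                    # upper-alpha band
upper_alpha_power = psd[..., band].mean(axis=-1)        # trials x channels

print("Mean upper-alpha power, first 5 channels:", upper_alpha_power.mean(axis=0)[:5])
```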
