Results 1 - 20 of 36
1.
PLoS Comput Biol ; 20(8): e1012288, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39093852

ABSTRACT

Sounds are temporal stimuli decomposed into numerous elementary components by the auditory nervous system. For instance, a temporal to spectro-temporal transformation modelling the frequency decomposition performed by the cochlea is a widely adopted first processing step in today's computational models of auditory neural responses. Similarly, increments and decrements in sound intensity (i.e., of the raw waveform itself or of its spectral bands) constitute critical features of the neural code, with high behavioural significance. However, despite the growing attention that the scientific community has devoted to auditory OFF responses, their relationship with transient ON responses, sustained responses and adaptation remains unclear. In this context, we propose a new general model, named AdapTrans and based on a pair of linear filters, that captures sustained and transient ON and OFF responses within a unifying and easily extendable framework. We demonstrate that filtering audio cochleagrams with AdapTrans accurately renders known properties of neural responses measured in different mammalian species, such as the dependence of OFF responses on the stimulus fall time and on the duration of the preceding sound. Furthermore, by integrating our framework into gold-standard and state-of-the-art machine learning models that predict neural responses from audio stimuli, trained in a supervised manner on a large compilation of electrophysiology datasets (ready-to-deploy PyTorch models and pre-processed datasets are shared publicly), we show that AdapTrans systematically improves the prediction accuracy of estimated responses within different cortical areas of the rat and ferret auditory brain. Together, these results motivate the use of our framework by computational and systems neuroscientists seeking to increase the plausibility and performance of their models of audition.
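As a concrete illustration of the general structure described above (not the actual AdapTrans filters, which are specified in the paper), here is a minimal Python sketch in which a sustained (low-pass) and a transient (derivative-like) temporal filter are applied along the time axis of a toy cochleagram, and the transient output is split into ON and OFF channels by half-wave rectification; all kernel shapes and time constants are illustrative assumptions.

```python
# Minimal sketch: pair a sustained (low-pass) and a transient (first-difference)
# temporal filter on a cochleagram, then split the transient output into ON/OFF
# channels by half-wave rectification. The exact AdapTrans filters differ; this
# only illustrates the general structure (kernel shapes are assumptions).
import numpy as np

def toy_cochleagram(n_freq=32, n_time=500, seed=0):
    """Random toy cochleagram (n_freq x n_time) of non-negative envelopes."""
    rng = np.random.default_rng(seed)
    env = rng.random((n_freq, n_time))
    kernel = np.ones(10) / 10.0  # smooth along time so increments/decrements exist
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), 1, env)

def on_off_channels(cochleagram, tau_sustained=0.05, fs=100.0):
    """Return (sustained, on, off) channels, each n_freq x n_time."""
    n_taps = max(int(tau_sustained * fs), 1)
    lowpass = np.ones(n_taps) / n_taps            # sustained: moving average
    diff = np.array([1.0, -1.0])                  # transient: first difference

    sustained = np.apply_along_axis(lambda x: np.convolve(x, lowpass, mode="same"), 1, cochleagram)
    transient = np.apply_along_axis(lambda x: np.convolve(x, diff, mode="same"), 1, cochleagram)

    on = np.maximum(transient, 0.0)    # responses to intensity increments
    off = np.maximum(-transient, 0.0)  # responses to intensity decrements
    return sustained, on, off

if __name__ == "__main__":
    cgram = toy_cochleagram()
    sus, on, off = on_off_channels(cgram)
    print(sus.shape, on.shape, off.shape)  # (32, 500) each
```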


Subject(s)
Acoustic Stimulation , Computational Biology , Models, Neurological , Animals , Rats , Cochlea/physiology , Auditory Perception/physiology , Ferrets , Evoked Potentials, Auditory/physiology , Adaptation, Physiological/physiology , Humans , Machine Learning
2.
Front Aging Neurosci ; 16: 1326435, 2024.
Article in English | MEDLINE | ID: mdl-38450381

ABSTRACT

Perceptual learning (PL) has shown promise in enhancing residual visual functions in patients with age-related macular degeneration (MD); however, it requires prolonged training, and evidence of generalization to untrained visual functions is limited. Recent studies suggest that combining transcranial random noise stimulation (tRNS) with perceptual learning produces faster and larger visual improvements in participants with normal vision. Thus, this approach might hold the key to improving PL effects in MD. To test this, we trained two groups of MD participants on a contrast detection task with (n = 5) or without (n = 7) concomitant occipital tRNS. The training consisted of a lateral masking paradigm in which the participant had to detect a central low-contrast Gabor target. Transfer tasks, including contrast sensitivity, near and far visual acuity, and visual crowding, were measured at pre-, mid- and post-test. Combining tRNS and perceptual learning led to greater improvements in the trained task, evidenced by a larger increase in contrast sensitivity and reduced inhibition at the shortest target-to-flanker distance. The overall amount of transfer was similar between the two groups. These results suggest that coupling tRNS and perceptual learning holds promise as a clinical rehabilitation strategy to improve vision in MD patients.

3.
J Vis ; 24(1): 3, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38190145

ABSTRACT

Visual scene perception is based on reciprocal interactions between central and peripheral information. Such interactions are commonly investigated through the semantic congruence effect, which usually reveals a congruence effect of central vision on peripheral vision as strong as the reverse. The aim of the present study was to further investigate the mechanisms underlying central-peripheral visual interactions using a central-peripheral congruence paradigm across three behavioral experiments. We simultaneously presented a central and a peripheral stimulus that could be either semantically congruent or incongruent. To assess the congruence effect of central vision on peripheral vision, participants had to categorize the peripheral target stimulus while ignoring the central distractor stimulus. To assess the congruence effect of peripheral vision on central vision, they had to categorize the central target stimulus while ignoring the peripheral distractor stimulus. Experiment 1 revealed that the physical distance between central and peripheral stimuli influences central-peripheral visual interactions: the congruence effect of central vision is stronger at the shortest target-distractor distance. Experiments 2 and 3 revealed that the spatial frequency content of the distractors also influences central-peripheral interactions: the congruence effect of central vision is observed only when the distractor contains high spatial frequencies, whereas the congruence effect of peripheral vision is observed only when the distractor contains low spatial frequencies. These results raise the question of how these influences are exerted (bottom-up vs. top-down) and are discussed in light of the retinocortical properties of the visual system and the predictive brain hypothesis.


Subject(s)
Brain , Visual Perception , Humans , Semantics
4.
Sci Rep ; 13(1): 15312, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37714896

ABSTRACT

Aging impacts human observers' performance in a wide range of visual tasks, notably motion discrimination. Despite numerous studies, how optic flow processing is affected in healthy older adults remains poorly understood. Here, we estimated motion coherence thresholds in two groups of younger (age: 18-30, n = 42) and older (age: 70-90, n = 42) adult participants for the three components of optic flow (translational, radial and rotational patterns). Stimuli were dynamic random-dot kinematograms (RDKs) projected on a large screen. Participants had to report their perceived direction of motion (leftward versus rightward for translational, inward versus outward for radial, and clockwise versus anti-clockwise for rotational patterns). Stimuli had an average speed of 7°/s (additional recordings were performed at 14°/s) and were presented either full-field or in peripheral vision. Statistical analyses showed that, in older adults, thresholds for translational patterns were similar to those measured in younger participants, thresholds for radial patterns were significantly increased in our slowest condition, and thresholds for rotational patterns were significantly decreased. Altogether, these findings support the idea that aging does not lead to a general decline in visual perception but rather has specific effects on the processing of each optic flow component.
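For readers unfamiliar with the stimulus, the sketch below illustrates the core update rule of a translational random-dot kinematogram at a given coherence level: a proportion of dots steps in the signal direction while the remainder steps in random directions. Dot count, speed and aperture handling are illustrative assumptions rather than the parameters used in the study.

```python
# Minimal sketch of a translational random-dot kinematogram (RDK) update rule:
# a proportion `coherence` of dots steps in the signal direction, the rest step
# in random directions. Dot count, speed and field size are illustrative only.
import numpy as np

def rdk_step(xy, coherence, direction_deg, speed=0.02, field=1.0, rng=None):
    """Advance dot positions (n x 2, in [0, field)) by one frame."""
    rng = rng or np.random.default_rng()
    n = xy.shape[0]
    signal = rng.random(n) < coherence
    angles = np.where(signal,
                      np.deg2rad(direction_deg),
                      rng.uniform(0.0, 2.0 * np.pi, n))
    xy = xy + speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return np.mod(xy, field)  # wrap dots around the aperture

rng = np.random.default_rng(1)
dots = rng.random((200, 2))
for _ in range(60):                      # one second of stimulus at 60 Hz
    dots = rdk_step(dots, coherence=0.3, direction_deg=0.0, rng=rng)
```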


Subject(s)
Optic Flow , Humans , Aged , Adolescent , Young Adult , Adult , Visual Perception , Aging , Health Status , Motion
5.
Front Neurosci ; 17: 1160034, 2023.
Article in English | MEDLINE | ID: mdl-37250425

ABSTRACT

Event-based cameras are attracting growing interest within the computer vision community. These sensors operate with asynchronous pixels that emit events, or "spikes", when the luminance change at a given pixel since the last event surpasses a certain threshold. Thanks to their inherent qualities, such as low power consumption, low latency, and high dynamic range, they seem particularly tailored to applications with challenging temporal constraints and safety requirements. Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since coupling an asynchronous sensor with neuromorphic hardware can yield real-time systems with minimal power requirements. In this work, we seek to develop one such system, using event sensor data from the DSEC dataset together with spiking neural networks to estimate optical flow in driving scenarios. We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations. To do so, we encourage both a minimal norm for the error vector and a minimal angle between the ground-truth and predicted flow, training our model with back-propagation using a surrogate gradient. In addition, the use of 3D convolutions allows us to capture the dynamic nature of the data by increasing the temporal receptive fields. Upsampling after each decoding stage ensures that each decoder's output contributes to the final estimation. Thanks to separable convolutions, we were able to develop a lightweight model (compared to competitors) that nonetheless yields reasonably accurate optical flow estimates.
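The training objective described above (a norm term on the flow error vector plus an angular term between predicted and ground-truth flow) can be sketched as follows; the relative weighting and the exact formulation used in the paper are assumptions.

```python
# Sketch of the kind of objective described above: an end-point term (norm of the
# per-pixel error vector) plus an angular term between predicted and ground-truth
# flow. The weighting and exact formulation of the paper are assumptions.
import torch

def flow_loss(pred, gt, lambda_angle=1.0, eps=1e-8):
    """pred, gt: (B, 2, H, W) optical flow fields."""
    # End-point error: L2 norm of the per-pixel error vector
    epe = torch.norm(pred - gt, dim=1).mean()

    # Angular error between the two flow vectors (cosine clamped for stability)
    dot = (pred * gt).sum(dim=1)
    denom = torch.norm(pred, dim=1) * torch.norm(gt, dim=1) + eps
    angle = torch.acos(torch.clamp(dot / denom, -1.0 + 1e-7, 1.0 - 1e-7)).mean()

    return epe + lambda_angle * angle

pred = torch.randn(2, 2, 64, 64, requires_grad=True)
gt = torch.randn(2, 2, 64, 64)
loss = flow_loss(pred, gt)
loss.backward()  # differentiable, so usable with surrogate-gradient SNN training
```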

6.
Biol Cybern ; 117(1-2): 95-111, 2023 04.
Article in English | MEDLINE | ID: mdl-37004546

ABSTRACT

Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require large amounts of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and the biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before being fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
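A minimal sketch of the two ingredients named above, spike-latency coding (stronger inputs fire earlier) and winner-take-all inhibition (only the earliest spikes survive), is given below; the parameters are illustrative and do not reproduce the network of the paper.

```python
# Minimal sketch of spike-latency coding plus winner-take-all inhibition (WTA-I):
# stronger inputs fire earlier, and once the first k neurons have fired the rest
# are suppressed. Sizes and parameters are illustrative, not those of the paper.
import numpy as np

def latency_encode(values, t_max=1.0):
    """Map intensities in [0, 1] to spike latencies: larger value -> earlier spike."""
    values = np.clip(values, 0.0, 1.0)
    return t_max * (1.0 - values)

def wta_inhibition(latencies, k=15):
    """Keep only the k earliest spikes; inhibited neurons get latency = inf."""
    out = np.full_like(latencies, np.inf)
    winners = np.argsort(latencies)[:k]
    out[winners] = latencies[winners]
    return out

rng = np.random.default_rng(0)
activations = rng.random(200)          # e.g. 200 neurons tuned to 3 spatial frequencies
spike_times = latency_encode(activations)
surviving = wta_inhibition(spike_times, k=15)
print(np.isfinite(surviving).sum())    # 15 spikes survive the inhibition
```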


Subject(s)
Models, Neurological , Neuronal Plasticity , Humans , Neuronal Plasticity/physiology , Neural Networks, Computer , Learning/physiology , Visual Perception/physiology
7.
Neuroimage ; 270: 119959, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36822249

ABSTRACT

Non-human primate (NHP) neuroimaging can provide essential insights into the neural basis of human cognitive functions. While functional magnetic resonance imaging (fMRI) localizers can play an essential role in reaching this objective (Russ et al., 2021), they often differ substantially across species in terms of paradigms, measured signals, and data analysis, biasing the comparisons. Here we introduce a functional frequency-tagging face localizer for NHP imaging, successfully developed in humans and outperforming standard face localizers (Gao et al., 2018). fMRI recordings were performed in two awake macaques. Within a rapid 6 Hz stream of natural non-face object images, human or monkey face stimuli were presented in bursts every 9 s. We also included control conditions with phase-scrambled versions of all images. As in humans, face-selective activity was objectively identified and quantified at the peak of the face-stimulation frequency (0.111 Hz) and its second harmonic (0.222 Hz) in the Fourier domain. Focal activations with a high signal-to-noise ratio were observed in regions previously described as face-selective, mainly in the STS (clusters PL, ML, MF, and also AL, AF), for both human and monkey faces. Robust face-selective activations were also found in the prefrontal cortex of one monkey (PVL and PO clusters). Face-selective neural activity was highly reliable and excluded all contributions from low-level visual cues contained in the amplitude spectrum of the stimuli. These observations indicate that fMRI frequency-tagging provides a highly valuable approach to objectively compare human and monkey visual recognition systems within the same framework.
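The frequency-domain quantification described above can be sketched for a single voxel as follows: the amplitude spectrum of the time series is computed and the amplitude at the tagging frequency is compared with that of neighbouring bins. The TR, run length and neighbourhood size are assumptions for illustration.

```python
# Sketch of a frequency-tagging analysis on one voxel time series: amplitude at the
# face-stimulation frequency (here 1/9 Hz) relative to neighbouring frequency bins.
# TR, run length, and the neighbourhood size are assumptions for illustration.
import numpy as np

def tagging_snr(ts, tr, f_target, n_neighbors=10, n_skip=1):
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts))
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    i = np.argmin(np.abs(freqs - f_target))          # bin closest to the target frequency
    lo = np.arange(i - n_skip - n_neighbors, i - n_skip)
    hi = np.arange(i + n_skip + 1, i + n_skip + 1 + n_neighbors)
    noise = amp[np.concatenate([lo, hi])].mean()      # surrounding bins as noise estimate
    return amp[i] / noise, freqs[i]

tr = 1.5                                              # s, assumed repetition time
t = np.arange(400) * tr                               # 10-minute synthetic run
ts = np.sin(2 * np.pi * (1 / 9.0) * t) + np.random.default_rng(0).normal(0, 1, t.size)
snr, f = tagging_snr(ts, tr, f_target=1 / 9.0)
print(f"SNR {snr:.1f} at {f:.3f} Hz")
```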


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Animals , Humans , Magnetic Resonance Imaging/methods , Neuroimaging , Recognition, Psychology , Macaca , Pattern Recognition, Visual/physiology , Photic Stimulation/methods
8.
Invest Ophthalmol Vis Sci ; 63(12): 21, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36378131

ABSTRACT

Purpose: Optic flow processing was characterized in patients with macular degeneration (MD). Methods: Twelve patients with dense bilateral scotomas and 12 age- and gender-matched control participants performed psychophysical experiments. Stimuli were dynamic random-dot kinematograms projected on a large screen. For each component of optic flow (translational, radial, and rotational), we estimated motion coherence discrimination thresholds in our participants using an adaptive Bayesian procedure. Results: Thresholds for translational, rotational, and radial patterns were comparable between patients and their matched control participants. A negative correlation was observed in patients between the time since MD diagnosis and coherence thresholds for translational patterns. Conclusions: Our results suggest that in patients with MD, selectivity to optic flow patterns is preserved.
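The sketch below illustrates the general logic of an adaptive Bayesian (QUEST-like) threshold procedure: a posterior over candidate coherence thresholds is updated after every trial and the next test level is placed at the posterior mean. The psychometric function, grid and simulated observer are assumptions, not the exact procedure used with the patients.

```python
# Minimal sketch of an adaptive Bayesian (QUEST-like) procedure for estimating a
# motion-coherence threshold: maintain a posterior over thresholds on a grid, test
# at the current posterior mean, and update after each trial. The psychometric
# function, grid, and simulated observer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
thresholds = np.linspace(0.01, 1.0, 200)        # candidate coherence thresholds
posterior = np.ones_like(thresholds) / len(thresholds)

def p_correct(coherence, threshold, slope=3.0, guess=0.5, lapse=0.02):
    """Weibull psychometric function for a 2AFC direction task."""
    p = 1.0 - np.exp(-(coherence / threshold) ** slope)
    return guess + (1.0 - guess - lapse) * p

true_threshold = 0.25                            # hidden parameter of the simulated observer
for trial in range(60):
    test = float(np.sum(thresholds * posterior))            # test at the posterior mean
    correct = rng.random() < p_correct(test, true_threshold)
    likelihood = p_correct(test, thresholds) if correct else 1.0 - p_correct(test, thresholds)
    posterior *= likelihood
    posterior /= posterior.sum()

print(f"estimated threshold: {np.sum(thresholds * posterior):.3f}")
```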


Subject(s)
Macular Degeneration , Motion Perception , Optic Flow , Humans , Bayes Theorem , Macular Degeneration/diagnosis , Scotoma/diagnosis , Scotoma/etiology , Photic Stimulation/methods
9.
Cereb Cortex ; 32(10): 2277-2290, 2022 05 14.
Article in English | MEDLINE | ID: mdl-34617100

ABSTRACT

Symmetry is a highly salient feature of the natural world that is perceived by many species. In humans, the cerebral areas processing symmetry are now well identified from neuroimaging measurements. The macaque could constitute a good animal model to explore the underlying neural mechanisms, but a previous comparative study concluded that functional magnetic resonance imaging responses to mirror symmetry in this species were weaker than those observed in humans. Here, we re-examined symmetry processing in macaques from a broader perspective, using both rotation and reflection symmetry embedded in regular textures. Highly consistent responses to symmetry were found in a large network of areas (notably in areas V3 and V4), in line with what has been reported in humans under identical experimental conditions. Our results suggest that the cortical networks that process symmetry in humans and macaques are potentially more similar than previously reported and point toward the macaque as a relevant model for understanding symmetry processing.


Subject(s)
Macaca , Visual Cortex , Animals , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Rotation , Visual Cortex/diagnostic imaging , Visual Cortex/physiology
10.
Front Neurosci ; 15: 727448, 2021.
Article in English | MEDLINE | ID: mdl-34602970

ABSTRACT

The early visual cortex is the site of crucial pre-processing for more complex, biologically relevant computations that drive perception and, ultimately, behaviour. This pre-processing is often studied under the assumption that neural populations are optimised for the most efficient (in terms of energy, information, spikes, etc.) representation of natural statistics. Normative models such as Independent Component Analysis (ICA) and Sparse Coding (SC) treat the phenomenon as a generative, minimisation problem which, they assume, the early cortical populations have evolved to solve. However, measurements in monkeys and cats suggest that receptive fields (RFs) in the primary visual cortex are often noisy, blobby, and symmetrical, making them sub-optimal for operations such as edge detection. We propose that this suboptimality arises because the RFs do not emerge through a global minimisation of generative error, but through locally operating biological mechanisms such as spike-timing-dependent plasticity (STDP). Using a network endowed with an abstract, rank-based STDP rule, we show that the shape and orientation tuning of the converged units are remarkably close to single-cell measurements in the macaque primary visual cortex. We quantify this similarity using physiological parameters (frequency-normalised spread vectors), information-theoretic measures [Kullback-Leibler (KL) divergence and Gini index], as well as simulations of a typical electrophysiology experiment designed to estimate orientation tuning curves. Taken together, our results suggest that, compared to purely generative schemes, process-based biophysical models may offer a better description of the suboptimality observed in the early visual cortex.
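The following sketch shows one plausible form of a rank-based STDP update for a single post-synaptic neuron, in which the earliest afferents are potentiated most and later ones are depressed; the constants and the precise rule used in the paper are assumptions.

```python
# Sketch of a rank-based STDP update for one post-synaptic neuron: afferents that
# spiked earliest (lowest rank) are potentiated most, afferents that spiked after
# the post-synaptic spike are depressed, and weights stay in [0, 1]. Constants and
# the exact rule are illustrative assumptions, not the rule used in the paper.
import numpy as np

def rank_stdp_update(w, pre_times, post_time, a_plus=0.05, a_minus=0.03, tau_rank=20.0):
    w = w.copy()
    before = pre_times <= post_time
    # Rank causal afferents by spike time: earliest spike -> rank 0 -> largest potentiation
    order = np.argsort(pre_times[before])
    ranks = np.empty(order.size)
    ranks[order] = np.arange(order.size)
    w[before] += a_plus * np.exp(-ranks / tau_rank)
    w[~before] -= a_minus                       # depress afferents firing after the post spike
    return np.clip(w, 0.0, 1.0)

rng = np.random.default_rng(0)
weights = rng.uniform(0.3, 0.7, 256)            # e.g. one unit with a 16x16 afferent patch
pre = rng.uniform(0.0, 50.0, 256)               # pre-synaptic spike times (ms)
weights = rank_stdp_update(weights, pre, post_time=25.0)
```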

11.
Front Comput Neurosci ; 15: 658764, 2021.
Article in English | MEDLINE | ID: mdl-34108870

ABSTRACT

In recent years, event-based sensors have been combined with spiking neural networks (SNNs) to create a new generation of bio-inspired artificial vision systems. These systems can process spatio-temporal data in real time and are highly energy efficient. In this study, we used a new hybrid event-based camera in conjunction with a multi-layer spiking neural network trained with a spike-timing-dependent plasticity learning rule. We showed that neurons learn from repeated and correlated spatio-temporal patterns in an unsupervised way and become selective to motion features such as direction and speed. This motion selectivity can then be used to predict ball trajectories by adding a simple read-out layer, composed of polynomial regressions and trained in a supervised manner. Hence, we show that an SNN receiving inputs from an event-based sensor can extract relevant spatio-temporal patterns to process and predict ball trajectories.
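The supervised read-out stage described above can be sketched as a polynomial regression from motion-selective spike counts to ball position, fitted by least squares; feature construction and dimensions are assumptions for illustration.

```python
# Sketch of the kind of supervised read-out described above: a polynomial regression
# from motion-selective spike counts to ball position, fitted by least squares.
# The feature construction and dimensions are assumptions for illustration.
import numpy as np

def poly_features(x):
    """Degree-2 polynomial features: bias, linear terms, and pairwise products."""
    n, d = x.shape
    quad = np.stack([x[:, i] * x[:, j] for i in range(d) for j in range(i, d)], axis=1)
    return np.hstack([np.ones((n, 1)), x, quad])

rng = np.random.default_rng(0)
spike_counts = rng.poisson(3.0, size=(500, 16)).astype(float)   # time bins x motion-selective units
ball_xy = rng.normal(size=(500, 2))                              # target trajectory (toy data)

X = poly_features(spike_counts)
W, *_ = np.linalg.lstsq(X, ball_xy, rcond=None)                  # supervised read-out weights
pred_xy = X @ W
print(pred_xy.shape)                                             # (500, 2)
```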

12.
Brain Struct Funct ; 226(9): 2897-2909, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34120262

ABSTRACT

As we plan to reach for or manipulate objects, we generally orient our body so as to face them. Other objects occupying the same portion of space will likely represent potential obstacles for the intended action. Thus, either as targets or as obstacles, the objects located straight in front of us are often endowed with a special behavioral status. Here, we review a set of recent electrophysiological, imaging and behavioral studies bringing converging evidence that objects lying straight ahead are subject to privileged visual processing. More precisely, these works collectively demonstrate that when gaze steers central vision away from the straight-ahead direction, the latter is still prioritized in peripheral vision. Straight-ahead objects evoke (1) stronger neuronal responses in macaque peripheral V1 neurons, (2) stronger EEG and fMRI activations across the human visual cortex and (3) faster reactive hand and eye movements. Here, we discuss the functional implications of this phenomenon and the mechanisms underlying it. Notably, we propose that it can be considered a new type of visuospatial attentional mechanism, distinct from the previously documented classes of endogenous and exogenous attention.


Subject(s)
Vision, Ocular , Visual Cortex , Animals , Macaca , Photic Stimulation , Visual Cortex/diagnostic imaging , Visual Perception
13.
Brain Struct Funct ; 225(8): 2447-2461, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32875354

ABSTRACT

We investigated the visuotopic organization of macaque posterior parietal cortex (PPC) by combining functional imaging (fMRI) and wide-field retinotopic mapping in two macaque monkeys. Whole-brain blood-oxygen-level-dependent (BOLD) signal was recorded while the monkeys maintained central fixation during the presentation of large rotating wedges and expanding/contracting annuli of a "shaking" fruit basket, designed to maximize the recruitment of PPC neurons. Results of the surface-based population receptive field (pRF) analysis reveal a new cluster of four visuotopic areas at the confluence of the parieto-occipital and intra-parietal sulci, in a location previously defined histologically and anatomically as the posterior intra-parietal (PIP) region. This PIP cluster groups together two recently described areas (CIP1/2) laterally and two newly identified ones (PIP1/2) medially, whose foveal representations merge in the fundus of the intra-parietal sulcus. The cluster shares borders with other visuotopic areas: V3d posteriorly, V3A/DP laterally, V6/V6A medially and LIP anteriorly. Together, these results show that monkey PPC is endowed with a dense set of visuotopic areas, like its human counterpart. The fact that fMRI and wide-field stimulation allow a functional parsing of monkey PPC offers a new framework for studying functional homologies with human PPC.
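A heavily simplified sketch of a pRF fit for one voxel is shown below: a 2D Gaussian receptive field is overlapped with the stimulus aperture at each time point to build a predicted time course, and a coarse grid search keeps the best-correlated model. The toy aperture, the grid, and the omission of HRF convolution and of the surface-based machinery are simplifying assumptions.

```python
# Minimal sketch of a population receptive field (pRF) fit for a single voxel.
# The toy aperture, the coarse grid, and the omission of HRF convolution are
# simplifying assumptions for illustration.
import numpy as np

def prf_prediction(aperture, xx, yy, x0, y0, sigma):
    """aperture: (T, H, W) binary masks; returns a predicted time course of length T."""
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    return aperture.reshape(aperture.shape[0], -1) @ g.ravel()

def fit_prf(bold, aperture, xx, yy, extent=10.0, n_grid=15):
    best_params, best_r = None, -np.inf
    for x0 in np.linspace(-extent, extent, n_grid):
        for y0 in np.linspace(-extent, extent, n_grid):
            for sigma in (0.5, 1.0, 2.0, 4.0):
                pred = prf_prediction(aperture, xx, yy, x0, y0, sigma)
                r = np.corrcoef(pred, bold)[0, 1]
                if r > best_r:
                    best_params, best_r = (x0, y0, sigma), r
    return best_params, best_r

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(-10, 10, 32), np.linspace(-10, 10, 32))
aperture = (rng.random((200, 32, 32)) > 0.7).astype(float)       # toy wedge/ring frames
bold = prf_prediction(aperture, xx, yy, 2.0, -3.0, 1.0)           # voxel with pRF at (2, -3)
bold += rng.normal(0.0, 0.5 * bold.std(), bold.size)              # add measurement noise
print(fit_prf(bold, aperture, xx, yy))                            # recovers a pRF near (2, -3)
```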


Subject(s)
Fixation, Ocular/physiology , Parietal Lobe/diagnostic imaging , Visual Pathways/diagnostic imaging , Animals , Brain Mapping/methods , Female , Image Processing, Computer-Assisted , Macaca mulatta , Magnetic Resonance Imaging , Neurons/physiology , Parietal Lobe/physiology , Photic Stimulation , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Visual Pathways/physiology
14.
Front Integr Neurosci ; 14: 43, 2020.
Article in English | MEDLINE | ID: mdl-32848650

ABSTRACT

Visuo-vestibular integration is crucial for locomotion, yet the cortical mechanisms involved remain poorly understood. We combined binaural monopolar galvanic vestibular stimulation (GVS) and functional magnetic resonance imaging (fMRI) to characterize the cortical networks activated during antero-posterior and lateral stimulations in humans. We focused on functional areas that selectively respond to egomotion-consistent optic flow patterns: the human middle temporal complex (hMT+), V6, the ventral intraparietal (VIP) area, the cingulate sulcus visual (CSv) area and the posterior insular cortex (PIC). Areas hMT+, CSv, and PIC were equivalently responsive during lateral and antero-posterior GVS, whereas areas VIP and V6 were highly activated during antero-posterior GVS but remained silent during lateral GVS. Using psychophysiological interaction (PPI) analyses, we confirmed that a cortical network including areas V6 and VIP is engaged during antero-posterior GVS. Our results suggest that V6 and VIP play a specific role in processing multisensory signals specific to locomotion during navigation.
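The logic of a PPI analysis can be sketched as follows: an interaction regressor, the product of the mean-centred seed time course and the task regressor, is entered in a GLM alongside both main effects. Deconvolution to the neural level, HRF convolution and nuisance regressors are omitted for brevity, so this is only a schematic of the approach.

```python
# Schematic of a psychophysiological interaction (PPI) design matrix: the interaction
# regressor is the product of the (mean-centred) seed time course and the task
# regressor, entered in a GLM alongside both main effects. Deconvolution, HRF
# convolution, and nuisance regressors are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)
n_vol = 300
task = np.tile(np.repeat([0.0, 1.0], 15), 10)[:n_vol]   # boxcar: lateral vs antero-posterior GVS
seed = rng.normal(size=n_vol)                            # e.g. V6 seed time course (toy data)
ppi = (seed - seed.mean()) * (task - task.mean())        # interaction term

X = np.column_stack([np.ones(n_vol), task, seed, ppi])   # GLM design matrix
target = rng.normal(size=n_vol)                          # e.g. VIP voxel time course (toy data)
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI coupling estimate: {beta[3]:.3f}")
```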

15.
Vision Res ; 176: 27-39, 2020 11.
Article in English | MEDLINE | ID: mdl-32771554

ABSTRACT

The statistics of our environment impact not only our behavior but also the selectivity and connectivity of the early sensory cortices. Over the last fifty years, powerful theories such as efficient coding, sparse coding, and the infomax principle have been proposed to explain the nature of this influence. Numerous computational and theoretical studies have since provided solid, testable evidence in support of these theories, especially in the visual domain. However, most such work has concentrated on monocular, luminance-field descriptions of natural scenes, and studies that systematically focus on binocular processing of realistic visual input have only been conducted over the past two decades. In this review, we discuss the most recent of these binocular computational studies, with particular emphasis on disparity selectivity. We begin with a survey of the relevant literature demonstrating concrete evidence for the relationship between natural disparity statistics, neural selectivity, and behavior. This is followed by a discussion of supervised and unsupervised computational studies. For each study, we include a description of the input data, the theoretical principles employed in the models, and the contribution of the results to explaining biological data (neural and behavioral). In the discussion, we compare these models to the binocular energy model and examine their application to the modelling of normal and abnormal visual development. We conclude with a short description of what we believe are the most important limitations of the current state of the art, and directions for future work that could address these shortcomings and enrich current and future models.
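Since the binocular energy model serves as the reference point of the discussion, here is a minimal sketch of it: left- and right-eye Gabor subunits in quadrature are summed and squared to give a disparity-tuned complex-cell response. The position-shift encoding of preferred disparity and the 1D white-noise stimuli are simplifying assumptions.

```python
# Sketch of the binocular energy model: left- and right-eye Gabor subunits in
# quadrature are summed and squared, giving a disparity-tuned complex-cell response.
# Position-shift disparity encoding and 1D noise stimuli are simplifying assumptions.
import numpy as np

def gabor(x, sigma=0.5, freq=2.0, phase=0.0):
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * x + phase)

def energy_response(left_img, right_img, x, preferred_disparity):
    shift = preferred_disparity / 2.0
    resp = 0.0
    for phase in (0.0, np.pi / 2):                       # quadrature pair of subunits
        left = gabor(x + shift, phase=phase) @ left_img
        right = gabor(x - shift, phase=phase) @ right_img
        resp += (left + right) ** 2                      # binocular simple cell, squared
    return resp

x = np.linspace(-3, 3, 601)
disparities = np.linspace(-1, 1, 21)
stimulus_disparity = 0.4
rng = np.random.default_rng(0)
tuning = np.zeros_like(disparities)
for _ in range(100):                                     # average over random 1D patterns
    pattern = rng.normal(size=x.size)
    left_img = np.interp(x, x - stimulus_disparity / 2, pattern)
    right_img = np.interp(x, x + stimulus_disparity / 2, pattern)
    tuning += [energy_response(left_img, right_img, x, d) for d in disparities]
print(disparities[int(np.argmax(tuning))])               # peaks near the stimulus disparity
```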


Subject(s)
Vision Disparity , Vision, Binocular , Environment , Humans
16.
Cereb Cortex ; 30(8): 4528-4543, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32227117

ABSTRACT

The cortical areas that process disparity-defined motion-in-depth (i.e., cyclopean stereomotion [CSM]) were characterized with functional magnetic resonance imaging (fMRI) in two awake, behaving macaques. The experimental protocol was similar to that of previous human neuroimaging studies. We contrasted the responses to dynamic random-dot patterns that continuously changed their binocular disparity over time with those to a control condition that shared the same properties, except that the temporal frames were shuffled. A whole-brain voxel-wise analysis revealed that in all four cortical hemispheres, three areas showed consistent sensitivity to CSM. Two of them were localized respectively in the lower bank of the superior temporal sulcus (CSMSTS) and on the neighboring infero-temporal gyrus (CSMITG). The third area was situated in the posterior parietal cortex (CSMPPC). Additional region-of-interest analyses within retinotopic areas defined in both animals indicated weaker but significant responses to CSM within the MT cluster (most notably in areas MSTv and FST). Altogether, our results are in agreement with previous findings in both humans and macaques and suggest that the cortical areas that process CSM are relatively well preserved between the two primate species.
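The stimulus contrast at the heart of the protocol can be sketched as follows: a smoothly changing disparity sequence versus a control built from the same frames presented in shuffled temporal order, which preserves the disparity distribution while destroying coherent motion-in-depth. Frame rate and modulation frequency are illustrative assumptions.

```python
# Sketch of the stimulus contrast described above: a smoothly changing disparity
# sequence versus a control made of the same frames in shuffled temporal order.
# Frame rate, duration and modulation frequency are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_frames, f_hz, frame_rate = 240, 0.5, 60.0
t = np.arange(n_frames) / frame_rate
csm_disparity = 0.2 * np.sin(2 * np.pi * f_hz * t)      # smooth change in disparity (deg)
control_disparity = rng.permutation(csm_disparity)       # same frames, shuffled order

# Identical disparity histograms, very different temporal structure:
print(np.allclose(np.sort(csm_disparity), np.sort(control_disparity)))   # True
print(np.corrcoef(csm_disparity[:-1], csm_disparity[1:])[0, 1],          # ~1 (smooth)
      np.corrcoef(control_disparity[:-1], control_disparity[1:])[0, 1])  # ~0 (shuffled)
```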


Subject(s)
Cerebral Cortex/physiology , Motion Perception/physiology , Visual Pathways/physiology , Animals , Brain Mapping , Female , Macaca mulatta , Magnetic Resonance Imaging
17.
Brain Struct Funct ; 225(1): 173-186, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31792695

ABSTRACT

The objects located straight ahead of the body are preferentially processed by the visual system. They are more rapidly detected and evoke stronger BOLD responses in early visual areas than elements that are retinotopically identical but located at eccentric spatial positions. To characterize the dynamics of the underlying neural mechanisms, we recorded, in 29 subjects, the EEG responses to peripheral targets that differed solely in their location with respect to the body. Straight-ahead stimuli led to stronger responses than eccentric stimuli for several components whose latencies ranged between 70 and 350 ms after stimulus onset. The earliest effects were found at 70 ms for a component that originates from occipital areas, the contralateral P1. To determine whether the straight-ahead direction affects primary visual cortex responses, we performed an additional experiment (n = 29) specifically designed to generate two robust components, the C1 and C2, whose cortical origins are constrained within areas V1, V2 and V3. Our analyses confirmed all the results of the first experiment and also revealed that the C2 amplitude between 130 and 160 ms after stimulus onset was significantly stronger for straight-ahead stimuli. A frequency analysis of the pre-stimulus baseline revealed that gaze-driven alterations in the visual hemi-field containing the straight-ahead direction were associated with a decrease in alpha power in the contralateral hemisphere, suggesting the involvement of specific neural modulations before stimulus onset. Altogether, our EEG data demonstrate that preferential responses to the straight-ahead direction can be detected in the visual cortex as early as about 70 ms after stimulus onset.
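The pre-stimulus baseline analysis mentioned above can be sketched for a single channel with Welch's method, integrating the power spectral density over the alpha band; the sampling rate, window length and synthetic signal are illustrative assumptions.

```python
# Sketch of a pre-stimulus baseline frequency analysis: alpha-band (8-12 Hz) power of
# one EEG channel estimated with Welch's method over the pre-stimulus window. The
# sampling rate, window length, and synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 500.0                                              # Hz, assumed sampling rate
rng = np.random.default_rng(0)
t = np.arange(int(1.0 * fs)) / fs                       # 1-s pre-stimulus baseline
eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0.0, 1.0, t.size)  # alpha + noise

freqs, psd = welch(eeg, fs=fs, nperseg=256)
alpha = (freqs >= 8.0) & (freqs <= 12.0)
alpha_power = np.sum(psd[alpha]) * (freqs[1] - freqs[0])  # integrate PSD over the band
print(f"alpha-band power: {alpha_power:.2f}")
```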


Subject(s)
Fixation, Ocular , Visual Cortex/physiology , Visual Fields/physiology , Visual Perception/physiology , Alpha Rhythm , Female , Humans , Male , Photic Stimulation , Visual Pathways/physiology
18.
Sci Rep ; 9(1): 9308, 2019 06 26.
Article in English | MEDLINE | ID: mdl-31243297

ABSTRACT

The borders between objects and their backgrounds create discontinuities in image feature maps that can be used to recover object shape. Here we used functional magnetic resonance imaging to identify cortical areas that encode two of the most important image segmentation cues: relative motion and relative disparity. Relative motion and disparity cues were isolated by defining a central 2-degree disk using random-dot kinematograms and stereograms, respectively. For motion, the disk elicited retinotopically organized activations starting in V1 and extending through V2 and V3. In the surrounding region, we observed phase-inverted activations indicative of suppression, extending out to at least 6 degrees of retinal eccentricity. For disparity, disk activations were only found in V3, while suppression was observed in all early visual areas. Outside of early visual cortex, several areas were sensitive to both types of cues, most notably LO1, LO2 and V3B, making them additional candidate areas for motion- and disparity-cue combination. Adding an orthogonal task at fixation did not diminish these effects, and in fact led to small but measurable disk activations in V1 and V2 for disparity. The overall pattern of extra-striate activations is consistent with recent three-stream models of cortical organization.


Subject(s)
Image Processing, Computer-Assisted/methods , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Adolescent , Adult , Biomechanical Phenomena , Brain/physiology , Brain Mapping/methods , Female , Healthy Volunteers , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Motion Perception , Photic Stimulation , Retina/physiology , Visual Pathways , Young Adult
19.
J Vis ; 19(4): 22, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30998832

ABSTRACT

Art experts have argued that the mirror reversal of pictorial artworks alters their spatial content. However, this putative asymmetry of pictorial space remains to be empirically demonstrated and causally explained. Here, we address these issues with the "corridor illusion," a size illusion triggered by the pictorial space of a receding corridor. We show that mirror-reversed corridors, receding leftward and rightward respectively, induce markedly different illusion strengths and thus convey distinct pictorial spaces. Remarkably, the illusion is stronger with the rightward corridor among native left-to-right readers (French participants, n = 40 males) but conversely stronger with the leftward corridor among native right-to-left readers (Syrian participants, n = 40 males). Together, these results demonstrate an asymmetry of pictorial space and point to our reading/writing habits as a major cause of this phenomenon.


Subject(s)
Functional Laterality/physiology , Illusions , Language , Reading , Adolescent , Adult , Humans , Male , Psychophysiology , Writing , Young Adult
20.
Neuropsychologia ; 125: 129-136, 2019 03 04.
Article in English | MEDLINE | ID: mdl-30721741

ABSTRACT

Visual crowding, the difficulty of recognizing an element when it is surrounded by similar items, is a widely studied perceptual phenomenon and a trademark characteristic of peripheral vision. Perceptual Learning (PL) has been shown to reduce crowding, although a large number of sessions is required to observe significant improvements. Recently, transcranial random noise stimulation (tRNS) has been successfully used to boost PL in low-level foveal tasks (e.g., contrast detection, orientation) in both healthy and clinical populations. However, no study so far has combined tRNS with PL in peripheral vision during higher-level tasks. We therefore investigated the effect of tRNS on PL and transfer in peripheral high-level visual tasks. We trained two groups (tRNS and sham) of normally sighted participants on a peripheral (8° eccentricity) crowding task over a small number of sessions (four). We tested both learning and transfer to untrained spatial locations, orientations, and tasks (visual acuity). After training, the tRNS group showed a greater learning rate than the sham group. For both groups, learning generalized to the same extent to the untrained retinal location and task. Overall, this paradigm has potential applications for patients suffering from central vision loss, but further research is needed to elucidate its effects (i.e., on increasing transfer and learning retention).


Subject(s)
Learning/physiology , Occipital Lobe/physiology , Transcranial Direct Current Stimulation , Visual Perception/physiology , Adult , Female , Humans , Male , Sensory Thresholds , Visual Acuity , Visual Fields , Young Adult