Results 1 - 9 of 9
1.
Front Psychol ; 14: 1180561, 2023.
Article in English | MEDLINE | ID: mdl-37663341

ABSTRACT

Our brain employs mechanisms to adapt to changing visual conditions. Beyond natural changes in our physiology and in the environment, the brain can also adapt to "unnatural" changes, such as the inverted visual input generated by inverting prisms. In this study, we examined the brain's ability to adapt to hyperspaces. We generated four-spatial-dimensional stimuli in virtual reality and tested observers' ability to distinguish rigid from non-rigid motion. We found that observers can differentiate rigid and non-rigid motion of hypercubes (4D) with performance comparable to that obtained with cubes (3D). Moreover, observers' performance improved when they were given a more immersive 3D experience but remained robust against increasing shape variation. At this juncture, we characterize our findings as "3 1/2 D perception": while we show that observers can extract and use 4D information, we do not yet have evidence of a complete phenomenal 4D experience.

2.
J Vis ; 15(15): 13, 2015.
Article in English | MEDLINE | ID: mdl-26605842

ABSTRACT

Simultaneously presented visual events can lead to temporally asynchronous percepts. This has led some researchers to conclude that the asynchronous experience reflects differences in neural processing time for different visual attributes; others have suggested that it is due to differences in the temporal markers for changes of different visual attributes. Here, two sets of bars were presented, one to each eye. Either the bars were moving or their luminance was gradually changing. Bars moved horizontally in counterphase at low frequencies along short trajectories and were presented either stereoscopically, such that the horizontal movements were perceived as back-and-forth motion in a sagittal plane, or monocularly to the dominant eye, preserving the perception of horizontal movement in a frontal plane. In a control condition, bars were stationary and their luminance was modulated. Changes in stimulus speed or luminance followed a sinusoidal profile. When asked to adjust the phase of one stimulus relative to the other to achieve synchronous perception, participants showed a constant phase offset at the lowest frequencies used. Given the absence of abrupt transitions and the presence of similar gradual turning points in our stimuli to control for attentional effects, we conclude that asynchronous percepts of different visual attributes may be, at least in part, a manifestation of differences in neural processing time rather than solely of differences in temporal markers (transitions versus turning points).


Subject(s)
Depth Perception/physiology , Light , Visual Perception/physiology , Adult , Cues , Female , Humans , Male , Motion Perception/physiology
3.
J Vis ; 9(9): 15.1-6, 2009 Aug 28.
Article in English | MEDLINE | ID: mdl-19761348

ABSTRACT

Enhancement in perceptual learning of a visual stimulus can often be explained either by learning of integrated visual information processed in higher visual areas or by learning of component information processed in lower visual areas. It is not clear which kind of visual information perceptual learning is predominantly based on. We examined whether perceptual learning of global pattern motion occurs on the basis of local or global motion, as a result of performance improvement in detecting contraction (or expansion) in a display in which contracting (or expanding) dots slightly outnumber expanding (or contracting) dots. We measured the degree of transfer of the learning effect by presenting test stimuli spatially shifted so that they partially overlapped the trained region. The results showed that the degree of transfer depended entirely on how similar the local motion directions in the test stimuli were to those in the trained stimulus within the overlapping area, irrespective of whether a test stimulus contained the same global motion direction as the trained stimulus. These results indicate that perceptual learning, at least in the present setting, occurs on the basis of local motion signals.


Subject(s)
Learning/physiology , Models, Neurological , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Humans , Photic Stimulation/methods , Sensory Thresholds/physiology
4.
Neuroimage ; 42(4): 1397-413, 2008 Oct 01.
Article in English | MEDLINE | ID: mdl-18620066

ABSTRACT

A hierarchical Bayesian method estimates current sources from MEG data, incorporating fMRI information as a hierarchical prior whose strength is controlled by hyperparameters. A previous study [Sato, M., Yoshioka, T., Kajihara, S., Toyama, K., Goda, N., Doya, K., Kawato, M., 2004. Hierarchical Bayesian estimation for MEG inverse problem. Neuroimage 23, 806-826] demonstrated that fMRI information improves localization accuracy for simulated data. The goal of the present study was to confirm the usefulness of the hierarchical Bayesian method with real MEG and fMRI experiments, using a fan-shaped checkerboard pattern presented in the four visual quadrants. The proper range of hyperparameters was systematically analyzed using goodness-of-estimate measures for the estimated currents. Robustness to false-positive activity in the fMRI information was also evaluated using noisy priors constructed by adding artificial noise to real fMRI signals. With appropriate hyperparameter values, the retinotopic organization and temporal dynamics of the early visual areas were reconstructed in close correspondence with known brain imaging and electrophysiology in humans and monkeys, and the false-positive effects of the noisy priors were suppressed. The hierarchical Bayesian method was also capable of reconstructing retinotopic sequential activation in V1 with fine spatiotemporal resolution from MEG data elicited by sequential stimulation of the four visual quadrants with the fan-shaped checkerboard pattern at intervals (150 and 400 ms) much shorter than the temporal resolution of fMRI. These results indicate the potential of the hierarchical Bayesian method, combining MEG with fMRI, to improve the spatiotemporal resolution of noninvasive brain activity measurement.
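The core idea, an fMRI-derived prior whose per-location strength is set by hyperparameters, can be illustrated with a toy MAP estimate. This is not the paper's actual algorithm (which infers the hyperparameters themselves); the lead field, problem dimensions, and penalty values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 20, 50
G = rng.standard_normal((n_sensors, n_sources))   # toy lead field (forward model)

J_true = np.zeros(n_sources)
J_true[10] = 2.0                                  # one truly active source
B = G @ J_true + 0.05 * rng.standard_normal(n_sensors)  # simulated MEG data

# fMRI-informed prior: a weak penalty (small alpha) where fMRI flags activity,
# a strong penalty elsewhere; alpha plays the role of the hyperparameters.
fmri_active = np.zeros(n_sources, dtype=bool)
fmri_active[10] = True
alpha = np.where(fmri_active, 0.01, 10.0)

# MAP estimate of the currents: (G^T G + diag(alpha))^-1 G^T B
J_hat = np.linalg.solve(G.T @ G + np.diag(alpha), G.T @ B)
```

Even though the inverse problem is underdetermined (20 sensors, 50 candidate sources), the location-dependent prior concentrates the estimate on the fMRI-flagged source, which is the sense in which fMRI improves MEG localization.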


Subject(s)
Algorithms , Brain Mapping/methods , Evoked Potentials, Visual/physiology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods , Visual Cortex/physiology , Bayes Theorem , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
5.
J Vis ; 7(13): 2.1-10, 2007 Oct 12.
Article in English | MEDLINE | ID: mdl-17997630

ABSTRACT

It was previously shown that sensitivity improvements to a task-irrelevant motion direction can be obtained when it is presented concurrently with observers' performance of an attended task (A. R. Seitz & T. Watanabe, 2003; T. Watanabe, J. E. Náñez, & Y. Sasaki, 2001). To test whether this task-irrelevant perceptual learning (TIPL) is specific to motion and to clarify the relationship between the observer's task and the resultant TIPL, we investigated the spatial profile of the sensitivity enhancement for a static task-irrelevant feature. During the training period, participants performed an attentionally demanding character identification task at one location, while subthreshold, static Gabor patches masked in noise were presented at different locations in the visual field. Participants' sensitivity to the Gabors was compared between pre- and posttraining tests. First, we found that TIPL extends to static visual stimuli; thus, TIPL is not a process specialized for motion. As for the effect of spatial location, the largest improvement was found for Gabors presented in closest proximity to the task. These data indicate that learning of a task-irrelevant visual feature depends significantly on the task location, with gradual attenuation as the spatial distance between them increases. These findings give further insight into the mechanisms of perceptual learning.


Subject(s)
Attention , Learning , Motion Perception/physiology , Photic Stimulation/methods , Space Perception/physiology , Discrimination, Psychological , Humans , Noise , Orientation , Perceptual Masking , Visual Fields
6.
J Opt Soc Am A Opt Image Sci Vis ; 24(4): 905-10, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17361277

ABSTRACT

We investigate the spatiotemporal dynamics of depth filling-in on an illusory surface by measuring the temporal asynchrony of perceived depth between an illusory neon-colored surface and real contours. We temporally modulated the horizontal disparity at the vertical edges of the illusory surface and measured the perceptual delay of the interpolated surface's depth under two boundary conditions: disparity given at both sides, or disparity given at one side with a free boundary at the other. The results showed that the amount of delay depends on the spatial distance between the measured point and the edges where disparity was physically given. Importantly, the delay as a function of spatial distance differed clearly between the two boundary conditions. This difference is fairly well explained by a model based on a diffusion equation under the corresponding boundary conditions. These results support the existence of locally represented depth information and an interpolation process based on the mutual interaction of this information.


Subject(s)
Algorithms , Depth Perception/physiology , Image Interpretation, Computer-Assisted/methods , Models, Biological , Optical Illusions/physiology , Computer Simulation , Humans
7.
Neural Netw ; 17(2): 159-63, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15036334

ABSTRACT

We present a computational model based on the heat conduction equation that explains human performance in depth interpolation well. The model assumes that depth information is locally represented and that spatial integration is achieved by iterative processing of mutual interactions between neighbors. It reconstructs a dynamically transforming surface in good agreement with the results of psychophysical experiments on depth perception of an untextured (uniformly colored) surface moving in depth. The model also explains a temporal-frequency property of human perception. We conclude that local ambiguity, which is quite common in everyday visual scenes, is resolved by an interpolation mechanism based on iterative local interactions of locally represented visual information.
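The diffusion-style account described here can be sketched as a minimal 1D explicit heat-equation iteration. This is an illustration, not the authors' implementation; the grid size, iteration count, and diffusion rate are arbitrary choices:

```python
import numpy as np

def interpolate_depth(d_left, d_right, n=21, steps=5000, k=0.25):
    """Iteratively diffuse depth inward from the edges (explicit heat equation)."""
    d = np.zeros(n)
    d[0], d[-1] = d_left, d_right        # disparity is known only at the edges
    for _ in range(steps):
        # each interior point moves toward the average of its neighbors
        d[1:-1] += k * (d[:-2] - 2.0 * d[1:-1] + d[2:])
        d[0], d[-1] = d_left, d_right    # re-impose the boundary depths
    return d

d = interpolate_depth(0.0, 1.0)          # converges to a linear depth ramp
```

With depth fixed at both ends (the Dirichlet case), the steady state is a linear ramp between the two edge disparities, and the time to settle grows with the square of the bar length, qualitatively consistent with the longer perceptual delays reported for longer bars. (The rate `k` must stay at or below 0.5 for this explicit scheme to remain stable.)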


Subject(s)
Depth Perception , Models, Neurological , Depth Perception/physiology , Time Factors
8.
Neuroreport ; 14(14): 1767-71, 2003 Oct 06.
Article in English | MEDLINE | ID: mdl-14534417

ABSTRACT

The aperture problem is the problem of integrating motion information from inside and outside an aperture to determine the true direction of motion of a line. Much is known about it, and many models of its neural mechanisms have been proposed. It is still a matter of debate, however, whether the brain solves the problem using only feed-forward neural connections (the one-shot algorithm) or an iterative algorithm that also exploits feedback and horizontal connections. Here we show unequivocal evidence for the latter model: in critically designed psychophysical experiments, the iterative model's predictions closely matched the observers' performance.
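One way to illustrate the iterative account is a toy relaxation scheme, not the paper's model; the line orientation, true velocity, and iteration count below are invented. Each point on a moving line locally measures only its velocity component normal to the line; the unambiguous motion at the line's terminators propagates inward while each interior point is repeatedly re-projected onto its aperture constraint:

```python
import numpy as np

n = 11                                    # sample points along a 45-degree line
normal = np.array([1.0, 1.0]) / np.sqrt(2)
v_true = np.array([1.0, 0.0])             # true rigid motion of the line
c = v_true @ normal                       # the locally measurable normal speed

v = np.tile(c * normal, (n, 1))           # start from the ambiguous normal flow
for _ in range(2000):
    v[1:-1] = 0.5 * (v[:-2] + v[2:])      # propagation via neighbor interactions
    # re-impose each point's aperture constraint v . normal = c
    v[1:-1] += (c - v[1:-1] @ normal)[:, None] * normal
    v[0] = v[-1] = v_true                 # terminators carry the unambiguous signal
```

The tangential component diffuses in from the terminators while the projection step keeps every point consistent with its local measurement, so the whole line converges to the true velocity, which is the qualitative behavior an iterative (feedback/horizontal) solution predicts and a purely feed-forward one-shot scheme does not.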


Subject(s)
Models, Neurological , Motion Perception/physiology , Neural Networks, Computer , Optical Illusions , Visual Pathways/physiology , Algorithms , Animals , Computer Simulation , Humans , Neurons/physiology , Orientation , Photic Stimulation , Psychophysics/methods , Time Factors
9.
Vision Res ; 43(24): 2493-503, 2003 Nov.
Article in English | MEDLINE | ID: mdl-13129537

ABSTRACT

The depth of each point on a binocularly presented untextured horizontal bar is physically ambiguous except at the two vertical edges at its ends, since the correspondence between left and right images is not unique over such a uniform region. These depths are nevertheless perceived unambiguously, suggesting a mechanism that interpolates depth information from the two ends toward the center. Temporal properties of this interpolation process were examined with a phase-matching task, which allowed us to measure the phase of the perceived depth at the center of a horizontal bar while the disparities at its ends were sinusoidally oscillated. We found that the perceived depth at the center of the bar was temporally delayed by 7-60 ms relative to the physical depth at the ends. The delay increased with the length of the bar, decreased as the bar's vertical position moved farther from the fixation point, and increased in the presence of occluders. This finding indicates that depth information propagates across an object to resolve the ambiguity through a time-consuming process. Accordingly, we suggest that depth propagation is accomplished by spatially local, diffusion-like interactions of locally represented depth information.


Subject(s)
Depth Perception/physiology , Vision Disparity/physiology , Analysis of Variance , Convergence, Ocular/physiology , Humans , Photic Stimulation/methods , Vision, Binocular/physiology