Results 1 - 9 of 9
1.
IEEE Trans Vis Comput Graph ; 30(5): 2228-2238, 2024 May.
Article in English | MEDLINE | ID: mdl-38442067

ABSTRACT

With the demand for immersive experiences in virtual/augmented reality (VR/AR) displays, recent efforts have incorporated eye states, such as focus and fixation, into display graphics. Among these, ocular parallax, a small parallax generated by eye rotation, has received considerable attention for its impact on depth perception. However, the substantial latency of head-mounted displays (HMDs) has made it challenging to accurately assess its true effect during free eye movements. To address this issue, we propose a high-speed (360 Hz), low-latency (4.8 ms) ocular parallax rendering system with a custom-built eye tracker. Using this system, we investigated the latency requirements for perceptually stable ocular parallax rendering. Our findings indicate that, in binocular viewing, ocular parallax rendering is perceived as significantly less stable than conventional rendering when latency exceeds 43.72 ms at 1.3 D and 21.50 ms at 2.0 D. We also evaluated the effects of ocular parallax rendering on binocular fusion and monocular depth perception under free viewing conditions. The results demonstrate that ocular parallax rendering can enhance binocular fusion but has a limited impact on depth perception under monocular viewing when latency is minimized.


Subjects
Motion Perception, Virtual Reality, Vision Disparity, Depth Perception, Computer Graphics
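To make the geometry behind ocular parallax concrete, the following is a minimal sketch, not the authors' renderer, of how the image shift caused by eye rotation might be approximated per frame. The ~6 mm separation between the eye's center of rotation and its nodal point and the small-angle parallax formula are illustrative assumptions.

```python
import numpy as np

# Assumed distance between the eye's center of rotation and its nodal point
# (a nominal figure for illustration, not a value from the paper).
EYE_R_M = 0.006  # meters

def ocular_parallax_shift(gaze_angle_rad, point_depth_m, fixation_depth_m):
    """Approximate angular shift (radians) of a point at point_depth_m relative
    to the fixation plane, for a horizontal eye rotation of gaze_angle_rad."""
    # Lateral translation of the nodal point produced by the rotation.
    t = EYE_R_M * np.sin(gaze_angle_rad)
    # Small-angle parallax: nearer points shift more than the fixation plane.
    return t * (1.0 / point_depth_m - 1.0 / fixation_depth_m)

# Example: 10 deg gaze shift, object at 0.5 m (2.0 D), fixation at ~0.77 m (1.3 D).
shift = ocular_parallax_shift(np.deg2rad(10), 0.5, 1.0 / 1.3)
print(np.rad2deg(shift) * 60, "arcmin")  # a few arcminutes, i.e., a small parallax
```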
2.
iScience ; 26(12): 108307, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38025782

ABSTRACT

The neural and computational mechanisms underlying visual motion perception have been extensively investigated over several decades, but little attempt has been made to measure and analyze how human observers perceive the map of motion vectors, or optical flow, in complex naturalistic scenes. Here, we developed a psychophysical method to assess human-perceived motion flows using local vector matching and a flash probe. The estimated perceived flow for naturalistic movies agreed with the physically correct flow (ground truth) at many points, but also showed consistent deviations from the ground truth (flow illusions) at other points. Comparisons with the predictions of various computational models, including cutting-edge computer vision algorithms and coordinate transformation models, indicated that some flow illusions are attributable to lower-level factors such as spatiotemporal pooling and signal loss, while others reflect higher-level computations, including vector decomposition. Our study demonstrates a promising data-driven psychophysical paradigm for an advanced understanding of visual motion perception.
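As an illustration of how an estimated (perceived) flow field can be compared against ground truth, here is a small sketch using standard endpoint-error and angular-error measures; the array names and shapes are assumptions, not the paper's data format.

```python
import numpy as np

def flow_errors(perceived, ground_truth):
    """perceived, ground_truth: (N, 2) arrays of (vx, vy) vectors at N probe points."""
    diff = perceived - ground_truth
    epe = np.linalg.norm(diff, axis=1)  # endpoint error per probe
    dot = np.sum(perceived * ground_truth, axis=1)
    norms = np.linalg.norm(perceived, axis=1) * np.linalg.norm(ground_truth, axis=1)
    # Angular deviation in degrees, guarded against zero-length vectors.
    ang = np.degrees(np.arccos(np.clip(dot / np.maximum(norms, 1e-9), -1.0, 1.0)))
    return epe, ang

# Probes with large, consistent errors across observers would be candidate
# "flow illusions" in the sense described in the abstract.
```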

3.
Front Psychol ; 14: 1047694, 2023.
Article in English | MEDLINE | ID: mdl-36874839

ABSTRACT

It has been suggested that perceiving blurry images in addition to sharp images contributes to the development of robust human visual processing. To computationally investigate the effect of exposure to blurry images, we trained convolutional neural networks (CNNs) on ImageNet object recognition with a variety of combinations of sharp and blurred images. In agreement with recent reports, mixed training on blurred and sharp images (B+S training) brings CNNs closer to humans in the robustness of their object recognition to changes in image blur. B+S training also slightly reduces the texture bias of CNNs when recognizing shape-texture cue-conflict images, but the effect is not strong enough to achieve human-level shape bias. Other tests also suggest that B+S training cannot produce robust human-like object recognition based on global configuration features. Using representational similarity analysis and zero-shot transfer learning, we also show that B+S-Net does not achieve blur-robust object recognition through separate specialized sub-networks, one for sharp images and another for blurry images, but through a single network analyzing image features common to sharp and blurry images. However, blur training alone does not automatically create a mechanism like that of the human brain, in which sub-band information is integrated into a common representation. Our analysis suggests that experience with blurred images may help the human brain recognize objects in blurred images, but that alone does not lead to robust, human-like object recognition.
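For a concrete picture of mixed blurred/sharp training, the following is a minimal sketch of a B+S-style augmentation in PyTorch/torchvision; the 50/50 mixing ratio and the Gaussian-blur parameters are illustrative assumptions, not the settings used in the study.

```python
import random
from torchvision import transforms

class RandomBlurOrSharp:
    """With probability p_blur, Gaussian-blur the image; otherwise leave it sharp."""
    def __init__(self, p_blur=0.5, kernel_size=21, sigma=4.0):
        self.p_blur = p_blur
        self.blur = transforms.GaussianBlur(kernel_size=kernel_size, sigma=sigma)

    def __call__(self, img):
        return self.blur(img) if random.random() < self.p_blur else img

# ImageNet-style training pipeline with mixed blurred and sharp images.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    RandomBlurOrSharp(),  # roughly half the images blurred, half sharp
    transforms.ToTensor(),
])
```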

4.
IEEE Trans Vis Comput Graph ; 25(5): 2061-2071, 2019 May.
Article in English | MEDLINE | ID: mdl-30794177

ABSTRACT

A recently developed light projection technique can add dynamic impressions to static real objects without changing their original visual attributes such as surface colors and textures. It produces illusory motion impressions in the projection target by projecting gray-scale motion-inducer patterns that selectively drive the motion detectors in the human visual system. Since a compelling illusory motion can be produced by an inducer pattern weaker than necessary to perfectly reproduce the shift of the original pattern on an object's surface, the technique works well under bright environmental light conditions. However, determining the best deformation sizes is often difficult: When users try to add a large deformation, the deviation in the projected patterns from the original surface pattern on the target object becomes apparent. Therefore, to obtain satisfactory results, they have to spend much time and effort to manually adjust the shift sizes. Here, to overcome this limitation, we propose an optimization framework that adaptively retargets the displacement vectors based on a perceptual model. The perceptual model predicts the subjective inconsistency between a projected pattern and an original one by simulating responses in the human visual system. The displacement vectors are adaptively optimized so that the projection effect is maximized within the tolerable range predicted by the model. We extensively evaluated the perceptual model and optimization method through a psychophysical experiment as well as user studies.


Subjects
Computer Graphics, Image Processing, Computer-Assisted/methods, Virtual Reality, Visual Perception/physiology, Humans, Light, Psychophysics
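The retargeting idea in this abstract can be sketched abstractly as a per-vector search for the largest displacement whose predicted perceptual inconsistency stays within a tolerance. The `predict_inconsistency` callable below is a placeholder for the paper's perceptual model, which is not reproduced here; the whole snippet is a conceptual sketch, not the authors' implementation.

```python
import numpy as np

def retarget_displacements(vectors, predict_inconsistency, tolerance,
                           scales=np.linspace(1.0, 0.0, 101)):
    """vectors: (N, 2) desired displacement vectors.
    Returns vectors scaled down just enough to stay within the tolerance."""
    out = np.copy(vectors)
    for i, v in enumerate(vectors):
        for s in scales:  # try the largest scale first, then shrink
            if predict_inconsistency(s * v) <= tolerance:
                out[i] = s * v
                break
        else:
            out[i] = 0.0  # no nonzero scale is acceptable for this vector
    return out
```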
5.
Annu Rev Vis Sci ; 4: 501-523, 2018 Sep 15.
Article in English | MEDLINE | ID: mdl-30052495

ABSTRACT

Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.


Subjects
Motion Perception/physiology, Visual Pathways/physiology, Feedback, Sensory/physiology, Humans, Psychophysics, Signal Transduction/physiology, Space Perception/physiology, Time Perception/physiology
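As a pointer to what the lower-level stage discussed in this review looks like computationally, below is a minimal 1D-space by time motion-energy sensor in the spirit of Adelson & Bergen-style V1 models; the filter frequencies, sizes, and sign convention are arbitrary illustrative choices, not values from the article.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(u, f, sigma, odd=False):
    """1D Gabor: Gaussian envelope times a cosine (even) or sine (odd) carrier."""
    carrier = np.sin if odd else np.cos
    return np.exp(-u**2 / (2 * sigma**2)) * carrier(2 * np.pi * f * u)

def opponent_motion_energy(stimulus, fx=0.1, ft=0.1, sx=6.0, st=6.0):
    """stimulus: 2D array indexed as (time, space).
    Positive output = energy for one direction (nominally rightward)."""
    x, t = np.arange(-15, 16), np.arange(-15, 16)
    se, so = gabor(x, fx, sx), gabor(x, fx, sx, odd=True)
    te, to = gabor(t, ft, st), gabor(t, ft, st, odd=True)
    # Direction-selective quadrature pairs built from separable components.
    r1 = np.outer(te, se) - np.outer(to, so)
    r2 = np.outer(te, so) + np.outer(to, se)
    l1 = np.outer(te, se) + np.outer(to, so)
    l2 = np.outer(te, so) - np.outer(to, se)
    conv = lambda k: fftconvolve(stimulus, k, mode="same")
    right = conv(r1) ** 2 + conv(r2) ** 2  # energy of one quadrature pair
    left = conv(l1) ** 2 + conv(l2) ** 2   # energy of the opposite pair
    return right - left                    # opponent motion energy
```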
6.
J Vis ; 14(5): 2, 2014 May 05.
Article in English | MEDLINE | ID: mdl-24799621

ABSTRACT

Previous studies on perceptual transparency defined the photometric condition in which perceived depth ordering between two surfaces becomes ambiguous. Even under this bistable transparency condition, it is known that depth-order perceptions are often biased toward one specific interpretation (Beck, Prazdny, & Ivry, 1984; Delogu, Fedorov, Belardinelli, & van Leeuwen, 2010; Kitaoka, 2005; Oyama & Nakahara, 1960). In this study, we examined what determines the perceived depth ordering for bistable transparency patterns using stimuli that simulated two partially overlapping disks, resulting in four regions: a (background), b (portion of right disk), p (portion of left disk), and q (shared region). In contrast to the previous theory that proposed contributions of contrast against the background region (i.e., contrast at contour b/a and contrast at contour p/a) to perceived depth order in bistable transparency patterns, the present study demonstrated that contrast against the background region has little influence on perceived depth order compared with contrast against the shared region (i.e., contrast at contour b/q and contrast at contour p/q). In addition, we found that the perceived depth ordering is well predicted by a simpler model that takes into account only the relative size of the lightness differences against the shared region. Specifically, the probability that the left disk is perceived as being in front is proportional to (|b - q| - |p - q|) / (|b - q| + |p - q|), calculated from lightness values.


Subjects
Contrast Sensitivity/physiology, Depth Perception/physiology, Light, Pattern Recognition, Visual/physiology, Adult, Humans, Judgment/physiology, Models, Neurological, Young Adult
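The predictor stated at the end of the abstract translates directly into code; the snippet below is a plain transcription of that formula, with lightness values in arbitrary units and the example inputs chosen purely for illustration.

```python
def left_in_front_index(p, b, q):
    """p, b: lightness of the left/right disk regions; q: lightness of the shared region.
    Returns a value in [-1, 1]: +1 favors 'left disk in front', -1 favors 'right disk in front'."""
    num = abs(b - q) - abs(p - q)
    den = abs(b - q) + abs(p - q)
    return num / den if den != 0 else 0.0

# Example (hypothetical lightness values): equal differences give an ambiguous 0.0.
print(left_in_front_index(p=30.0, b=70.0, q=50.0))
```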
7.
J Vis ; 13(2): 7, 2013 Feb 06.
Article in English | MEDLINE | ID: mdl-23390321

ABSTRACT

Visual motion can influence the perceived position of an object. For example, in the flash-drag effect, the position of a stationary flashed object at one location appears to shift in the direction of motion presented at another location in the visual field (Whitney & Cavanagh, 2000). The results of previous physiological studies suggest interactions between motion and position information in very early retinotopic areas. However, it is unclear whether the position information that has been distorted by motion further influences the visual processing stage at which adaptable position mechanisms may exist. To examine this, we presented two Gabor patches, each of which was adjacent to oppositely moving inducers, and investigated whether adaptation to the illusory spatial offset caused by the flash-drag effect induced the position aftereffect. Our results show that a change in the perceived offset in the presence of the flash-drag effect did not influence the position aftereffect. These results indicate that internal representations of positions altered by the presence of nearby motion signals do not feed into the mechanism underlying the position aftereffect.


Subjects
Adaptation, Physiological/physiology, Attention/physiology, Motion Perception/physiology, Optical Illusions/physiology, Visual Fields, Adolescent, Adult, Female, Humans, Motion (Physics), Photic Stimulation/methods, Psychophysics, Young Adult
8.
J Vis ; 11(13), 2011 Nov 11.
Article in English | MEDLINE | ID: mdl-22080448

ABSTRACT

The flash-drag effect (FDE) refers to the phenomenon in which the position of a stationary flashed object in one location appears shifted in the direction of nearby motion. Over the past decade, it has been debated how bottom-up and top-down processes contribute to this illusion. In this study, we demonstrate that randomly phase-shifting gratings can produce the FDE. In the random motion sequence we used, the FDE inducer (a sinusoidal grating) jumped to a random phase every 125 ms and stood still until the next jump. Because this random sequence could not be tracked attentively, it was impossible for the observer to discern the jump direction at the time of the flash. By sorting the data based on the flash's onset time relative to each jump time in the random motion sequence, we found that a large FDE with a broad temporal tuning occurred around 50 to 150 ms before the jump and that this effect was not correlated with any other jumps in the past or future. These results suggest that as few as two frames of unpredictable apparent motion can preattentively cause the FDE with a broad temporal tuning.


Subjects
Motion Perception/physiology, Optical Illusions/physiology, Photic Stimulation/methods, Space Perception/physiology, Unconscious (Psychology), Adolescent, Attention/physiology, Awareness/physiology, Humans, Judgment/physiology, Models, Neurological, Young Adult
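The inducer timing and event-locked analysis described above can be sketched as follows; the jump count, random seed, and the choice of a signed latency from flash to the nearest jump are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
jump_interval = 0.125                              # s, from the abstract
n_jumps = 40
jump_times = np.arange(n_jumps) * jump_interval
jump_phases = rng.uniform(0, 2 * np.pi, n_jumps)   # unpredictable phase sequence

def flash_relative_to_jump(flash_time):
    """Signed latency from the flash to the nearest jump (negative = flash came first)."""
    k = np.argmin(np.abs(jump_times - flash_time))
    return flash_time - jump_times[k]

# Binning many trials by this latency gives the event-locked analysis used in the
# abstract to relate the FDE to individual jumps of the random sequence.
```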
9.
Vision Res ; 50(19): 1949-56, 2010 Sep 15.
Article in English | MEDLINE | ID: mdl-20624412

ABSTRACT

The flash-lag effect refers to the phenomenon where a flash of a stationary stimulus presented adjacent to a moving stimulus appears to lag behind it. We investigated whether the flash-lag effect affected the tilt aftereffect using two sets of vertical gratings for a flash and a moving stimulus that created a specific orientation when aligned with a specific temporal offset. Our results show that a change in the perceptual appearance of stimuli in the presence of the flash-lag effect had a negligible influence on the tilt aftereffect. These data suggest that the flash-lag effect originates at a different neural processing stage than the early linear processing that presumably mediates the tilt aftereffect.


Subjects
Motion Perception/physiology, Optical Illusions/physiology, Adolescent, Adult, Humans, Orientation, Photic Stimulation/methods, Young Adult