Results 1 - 5 of 5
2.
J Vis; 22(4): 12, 2022 03 02.
Article in English | MEDLINE | ID: mdl-35323868

ABSTRACT

Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality while participants freely viewed omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contribution to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on instruction-guided visual tasks. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing: the presence of masks did not significantly impact head-scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
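
A minimal Python sketch of the gaze-contingent masking the study describes, applied to an equirectangular 360° frame. The function names, the neutral-gray fill value, and the 10° default radius are illustrative assumptions; the paper's actual VR rendering pipeline is not specified at this level of detail.

```python
# Sketch of a gaze-contingent artificial scotoma on an equirectangular frame.
# All names and defaults are assumptions for illustration, not the paper's code.
import numpy as np

def pixel_directions(height, width):
    """Unit 3D view direction for every pixel of an equirectangular frame."""
    lon = (np.arange(width) / width - 0.5) * 2 * np.pi   # longitude: -pi..pi
    lat = (0.5 - np.arange(height) / height) * np.pi     # latitude: pi/2..-pi/2
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)

def apply_scotoma(frame, gaze_dir, radius_deg=10.0, central=True):
    """Gray out pixels inside (central) or outside (peripheral) the mask radius."""
    dirs = pixel_directions(*frame.shape[:2])
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze /= np.linalg.norm(gaze)
    # Angular eccentricity between the gaze and each pixel's view direction.
    ecc = np.degrees(np.arccos(np.clip(dirs @ gaze, -1.0, 1.0)))
    masked = ecc < radius_deg if central else ecc >= radius_deg
    out = frame.copy()
    out[masked] = 128  # neutral gray, as in typical artificial scotomas
    return out
```

Calling apply_scotoma(frame, gaze_dir, central=False) masks everything outside the radius instead, simulating peripheral vision loss.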


Subjects
Fixation, Ocular; Virtual Reality; Eye Movements; Humans; Saccades; Visual Perception
3.
J Clin Med; 11(2), 2022 Jan 11.
Article in English | MEDLINE | ID: mdl-35054039

ABSTRACT

BACKGROUND: Transcranial Direct Current Stimulation (tDCS) and Virtual Reality Exposure Therapy (VRET) are individually increasingly used in psychiatric research. OBJECTIVE/HYPOTHESIS: Our study aimed to investigate the feasibility of combining tDCS with wireless, fully immersive, active and embodied 360° VRET to reduce height-induced anxiety. METHODS: We carried out a pilot randomized, double-blind, controlled study combining VRET (two 20 min sessions with a 48 h interval, during which participants had to cross a plank at rising heights in a building under construction) with online tDCS (targeting the ventromedial prefrontal cortex) in 28 participants. The primary outcomes were the level of the sense of presence and tolerability. The secondary outcomes were the anxiety level (Subjective Unit of Discomfort, SUD) and the salivary cortisol concentration. RESULTS: We confirmed the feasibility of combining tDCS with fully embodied VRET, with a good sense of presence and no noticeable adverse effects. In both groups, a significant reduction in the fear of heights was observed after two sessions, with only a small add-on effect of tDCS (effect size 0.1) according to the SUD. The variations in cortisol concentration differed between the tDCS and sham groups. CONCLUSION: Our study confirmed the feasibility of combining wireless online tDCS with active, fully embodied VRET. The optimal tDCS paradigm remains to be determined in this context to increase the effect size and adequately power future clinical studies assessing synergies between the two techniques.
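
For context on the reported statistic, below is a hedged sketch of one conventional between-group effect size (Cohen's d with a pooled standard deviation) computed on pre-to-post SUD changes. The data arrays are hypothetical; the abstract does not specify the effect-size formula used.

```python
# Hedged sketch: a between-group effect size on SUD change scores.
# The arrays are hypothetical; the paper's raw data are not given here.
import numpy as np

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

# Hypothetical pre-to-post SUD reductions (0-100 scale) per participant.
tdcs_change = np.array([35.0, 40.0, 28.0, 33.0, 45.0, 30.0, 38.0])
sham_change = np.array([34.0, 38.0, 30.0, 31.0, 42.0, 29.0, 37.0])
print(f"add-on effect of tDCS: d = {cohens_d(tdcs_change, sham_change):.2f}")
```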

4.
J Vis; 19(14): 22, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31868896

ABSTRACT

Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehabilitation therapies, and monitoring. In this experiment, 54 participants took part in a free-viewing task on visual scenes while experiencing artificial scotomas (central and peripheral) of varying radii in a gaze-contingent paradigm. We studied the importance of a set of gaze features as predictors for best differentiating between artificial scotoma conditions. Linear mixed models were used to measure differences between scotoma conditions. Correlation and factorial analyses revealed redundancies in our data. Finally, hidden Markov models and recurrent neural networks were implemented as classifiers in order to measure the predictive usefulness of gaze features. The results show distinct saccade direction biases depending on scotoma type. We demonstrate that saccade relative angle, amplitude, and peak velocity are the best features for distinguishing between artificial scotomas in a free-viewing task. Finally, we discuss the usefulness of our protocol and analyses as a gaze-feature identifier tool that discriminates between artificial scotomas of different types and sizes.
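
The sketch below illustrates, under simplifying assumptions (a plain velocity-threshold saccade detector, gaze expressed in degrees of visual angle as NumPy arrays), how the three features the study singles out (saccade relative angle, amplitude, and peak velocity) could be extracted from raw gaze samples. It is not the paper's event-detection algorithm.

```python
# Hedged sketch of per-saccade feature extraction from raw gaze samples.
# The 30 deg/s threshold and the detection scheme are illustrative assumptions.
import numpy as np

def saccade_features(x_deg, y_deg, t_s, vel_thresh=30.0):
    """Detect saccades with a velocity threshold (deg/s) and return, per saccade,
    (amplitude in deg, peak velocity in deg/s, relative angle in deg)."""
    dx, dy, dt = np.diff(x_deg), np.diff(y_deg), np.diff(t_s)
    speed = np.hypot(dx, dy) / dt
    # Pad with False so every above-threshold run has a clean start and end.
    moving = np.concatenate([[False], speed > vel_thresh, [False]])
    edges = np.diff(moving.astype(int))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    feats, prev_dir = [], None
    for s, e in zip(starts, ends):
        vec = np.array([x_deg[e] - x_deg[s], y_deg[e] - y_deg[s]])
        direction = np.arctan2(vec[1], vec[0])
        rel = np.nan
        if prev_dir is not None:
            # Angle relative to the previous saccade, wrapped to [-180, 180].
            rel = (np.degrees(direction - prev_dir) + 180) % 360 - 180
        feats.append((np.linalg.norm(vec), speed[s:e].max(), rel))
        prev_dir = direction
    return feats
```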


Subjects
Scotoma/physiopathology; Visual Field Tests/methods; Visual Fields; Adult; Blindness; Female; Glaucoma/physiopathology; Humans; Macular Degeneration/physiopathology; Male; Markov Chains; Middle Aged; Neural Networks, Computer; Saccades; Vision Disorders; Young Adult
5.
IEEE Trans Image Process; 26(10): 4684-4696, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28678707

ABSTRACT

In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye-tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients; these are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated from the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye-tracking database and on another database (DML-ITRACK-3D).
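
The fusion step can be sketched as follows. The inverse-uncertainty weighting is one plausible reading of "fusing ... with uncertainty weighting"; the Gestalt-based estimation of the weights is abstracted into given arrays, and all names are illustrative.

```python
# Hedged sketch of uncertainty-weighted fusion of spatial and temporal saliency.
# How the uncertainty maps are derived from Gestalt laws is not reproduced here.
import numpy as np

def fuse_saliency(s_spatial, s_temporal, u_spatial, u_temporal, eps=1e-8):
    """Weight each saliency map inversely to its uncertainty, then renormalize."""
    w_s = 1.0 / (u_spatial + eps)
    w_t = 1.0 / (u_temporal + eps)
    fused = (w_s * s_spatial + w_t * s_temporal) / (w_s + w_t)
    # Rescale to [0, 1] for display and evaluation.
    return (fused - fused.min()) / (np.ptp(fused) + eps)
```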
