Results 1 - 7 of 7

1.
Sensors (Basel); 21(13), 2021 Jul 04.
Article in English | MEDLINE | ID: mdl-34283132

ABSTRACT

To create realistic 3D perception on glasses-free displays, it is critical to support continuous motion parallax, a large depth of field, and a wide field of view. Layered (or tensor) light field 3D displays, a new display type, have recently attracted considerable attention. Using only a few light-attenuating pixelized layers (e.g., LCD panels), they can simultaneously display many high-resolution views from different viewing directions. This paper presents a novel, flexible scheme for efficient layer-based representation and lossy compression of light fields on layered displays. The proposed scheme learns stacked multiplicative layers optimized using a convolutional neural network (CNN). The intrinsic redundancy in light field data is efficiently removed by analyzing the hidden low-rank structure of the multiplicative layers on a Krylov subspace. Factorization derived from block Krylov singular value decomposition (BK-SVD) exploits the spatial correlation in layer patterns for multiplicative layers with varying low ranks. Further, encoding with HEVC eliminates intra-frame and inter-frame redundancies in the low-rank approximated representation of the layers and improves compression efficiency. The scheme flexibly realizes multiple bitrates at the decoder by adjusting the ranks of the BK-SVD representation and the HEVC quantization, and thus complements the generality and flexibility of a data-driven CNN-based coding method with multiple bitrates within a single training framework for practical display applications. Extensive experiments demonstrate that the proposed coding scheme achieves substantial bitrate savings compared with pseudo-sequence-based light field compression approaches and the state-of-the-art JPEG and HEVC coders.
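As a rough illustration of the low-rank step described above, the sketch below approximates a single multiplicative layer with a block Krylov subspace factorization in the spirit of BK-SVD. The layer data, rank, and iteration count are hypothetical and this is not the paper's actual pipeline, which couples the factorization with CNN-learned layers and HEVC encoding.

```python
import numpy as np

def block_krylov_low_rank(A, rank, iters=3, seed=0):
    """Rank-`rank` approximation of A from a block Krylov subspace.

    Builds the basis [A G, (A A^T) A G, ..., (A A^T)^iters A G], orthonormalises
    it, and truncates the SVD of the projected matrix.  Illustrative only.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    G = rng.standard_normal((n, rank))      # random starting block
    Y = A @ G
    blocks = []
    for _ in range(iters + 1):
        blocks.append(Y)
        Y = A @ (A.T @ Y)                   # one Krylov power step
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis of the subspace
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

# Toy usage on one hypothetical 512x512 multiplicative layer pattern
layer = np.random.rand(512, 512)
layer_lr = block_krylov_low_rank(layer, rank=20)
print(np.linalg.norm(layer - layer_lr) / np.linalg.norm(layer))
```

Varying `rank` here corresponds to the knob the abstract mentions: multiple bitrates are realized at the decoder by adjusting the ranks of the BK-SVD representation together with the HEVC quantization.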

2.
Angew Chem Int Ed Engl; 59(22): 8416-8420, 2020 May 25.
Article in English | MEDLINE | ID: mdl-32182398

ABSTRACT

A proof-of-principle prototype of a volumetric 3D display system is demonstrated by utilizing the photo-activated phosphorescence of two long-lived phosphorescent metal porphyrins in dimethyl sulfoxide (DMSO), a photochemically deoxygenating solvent. The first phosphorescent sensitizer, Pt(TPBP), absorbs a light beam with a wavelength of 635 nm, and the sensitized singlet oxygen is scavenged by the DMSO. The second phosphorescent emitter, Pt(OEP), absorbs a light beam with a wavelength of 532 nm and visibly phosphoresces only in the deoxygenated zone generated by the first sensitizer. Phosphorescent voxels, 3D images, and animations are thus defined by the intersections of the 635 nm and 532 nm light beams, which are programmable by tuning the excitation power densities, the beam shapes, and the kinetics. A pivotal selection rule for the phosphorescent molecular couple used in this 3D display system is that their absorptions and emissions must be orthogonal to each other, so that the two dyes can be excited and addressed independently.

3.
Curr Top Behav Neurosci; 65: 131-159, 2023.
Article in English | MEDLINE | ID: mdl-36723780

ABSTRACT

Virtual reality (VR) allows us to create visual stimuli that are both immersive and reactive, and it provides many new opportunities in vision science. In particular, it allows us to present wide-field-of-view, immersive visual stimuli; it allows observers to actively explore the environments that we create; and it allows us to study how visual information is used in the control of behaviour. In contrast with traditional psychophysical experiments, VR provides much greater flexibility in creating environments and tasks that are closely aligned with our everyday experience. These benefits are of particular value in developing theories of the behavioural goals of the visual system and in explaining how visual information is processed to achieve those goals. The use of VR in vision science also presents a number of technical challenges, relating both to how the available software and hardware limit our ability to accurately specify the visual information that defines our virtual environments, and to how we should interpret data gathered from a freely moving observer in a responsive environment.


Subjects
Virtual Reality; Vision, Ocular; Optometry; Humans; Ophthalmology
4.
Adv Mater; 33(37): e2104418, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34337797

ABSTRACT

3D laser displays play an important role in next-generation display technologies owing to the ultimate visual experience they provide. Circularly polarized (CP) laser emissions, featuring optical rotatory power and invariance under rotations, are attractive for 3D displays because of their potential to enhance contrast ratio and viewing comfort. However, the lack of pixelated, self-emissive CP microlaser arrays to serve as display panels has hindered the implementation of 3D laser displays. Here, full-color 3D laser displays are demonstrated based on CP lasing, with inkjet-printed cholesteric liquid crystal (CLC) arrays as display panels. Individual CP lasers are realized by embedding fluorescent dyes into CLCs whose left- and right-handed helical superstructures serve as distributed-feedback microcavities, yielding ultrahigh degrees of circular polarization (g_em = 1.6). These CP microlaser pixels exhibit excellent far-field color-rendering characteristics and a relatively large color gamut for high-fidelity displays. With the printed CLC red-green-blue (RGB) microlaser arrays serving as display panels, proof-of-concept full-color 3D laser displays are demonstrated by delivering images with orthogonal CP laser emissions to the viewer's left and right eyes. These results provide valuable insight for the development of 3D laser displays.
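For context on the g_em value quoted above: the degree of circular polarization of emission is conventionally quantified by the dissymmetry factor g_em = 2(I_L - I_R)/(I_L + I_R), which is bounded by ±2. The helper below is a hypothetical illustration of that arithmetic; the intensities are chosen only so the result reproduces a value of 1.6 and are not measurements from the paper.

```python
def dissymmetry_factor(i_left, i_right):
    """Emission dissymmetry factor g_em = 2 (I_L - I_R) / (I_L + I_R).

    g_em ranges from -2 to +2; |g_em| = 2 corresponds to fully circularly
    polarized emission.
    """
    return 2.0 * (i_left - i_right) / (i_left + i_right)

# Hypothetical left-/right-CP intensities chosen only to reproduce g_em = 1.6
print(dissymmetry_factor(i_left=9.0, i_right=1.0))  # -> 1.6
```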

5.
Acta Ophthalmol; 97(3): e435-e441, 2019 May.
Article in English | MEDLINE | ID: mdl-29696801

ABSTRACT

PURPOSE: In this article, we develop a dynamic Bayesian network (DBN) model to measure 3D visual fatigue. To the best of our knowledge, this is the first adaptation of a DBN structure-based probabilistic framework for inferring a 3D viewer's state of visual fatigue. METHODS: Our measurement focuses on the interdependencies between each factor and the phenomena of visual fatigue in stereoscopy. Specifically, implementing the DBN with multiple features (e.g. contextual, contactless and contact physiological features) and a dynamic factor provides a systematic scheme to evaluate 3D visual fatigue. RESULTS: Comparison of the measurement results against the mean opinion score (MOS) for both Bayesian network models (the static Bayesian network and the DBN) shows that visual fatigue in stereoscopy at time slice t is influenced by a dynamic factor from time slice t-1. When this dynamic factor (time slice t-1) is included, our proposed DBN-based measuring scheme is more comprehensive. CONCLUSION: (i) We cover more features for inferring visual fatigue, more reliably and accurately; (ii) across time slices, the dynamic factor features are significant for inferring the visual fatigue state in stereoscopy.
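To make the role of the dynamic factor concrete, here is a minimal sketch of forward filtering in a two-slice DBN with a single binary fatigue state and one binarized observed feature. The paper's actual network combines multiple contextual, contactless and contact physiological features; the state space, transition table and observation table below are hypothetical.

```python
import numpy as np

# Hypothetical binary fatigue state: 0 = not fatigued, 1 = fatigued.
TRANSITION = np.array([[0.9, 0.1],      # P(state_t | state_{t-1} = 0)
                       [0.2, 0.8]])     # P(state_t | state_{t-1} = 1)
OBSERVATION = np.array([[0.7, 0.3],     # P(feature | state_t = 0)
                        [0.25, 0.75]])  # P(feature | state_t = 1)

def forward_filter(observations, prior=(0.9, 0.1)):
    """Forward filtering in a two-slice DBN with one binarised feature node."""
    belief = np.asarray(prior, dtype=float)
    history = []
    for obs in observations:
        belief = TRANSITION.T @ belief        # dynamic factor: carry slice t-1 forward
        belief = belief * OBSERVATION[:, obs] # weight by the current evidence
        belief /= belief.sum()
        history.append(belief.copy())
    return np.array(history)

# Toy viewing session: the "fatigue sign" feature starts absent, then appears
print(forward_filter([0, 0, 1, 1, 1]))
```

The `TRANSITION.T @ belief` step is where the belief at time slice t inherits information from slice t-1, which is exactly the dependence a static Bayesian network would ignore.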


Subjects
Algorithms; Asthenopia/diagnosis; Bayes Theorem; Depth Perception/physiology; Models, Statistical; Asthenopia/physiopathology; Humans
6.
R Soc Open Sci; 2(7): 140522, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26587261

ABSTRACT

Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4-82 years carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination.

7.
ACM Trans Graph; 2008: 23-32, 2008 Jan 01.
Article in English | MEDLINE | ID: mdl-24683290

ABSTRACT

3D shape and scene layout are often misperceived when viewing stereoscopic displays. For example, viewing from the wrong distance alters an object's perceived size and shape. It is crucial to understand the causes of such misperceptions so one can determine the best approaches for minimizing them. The standard model of misperception is geometric. The retinal images are calculated by projecting from the stereo images to the viewer's eyes. Rays are back-projected from corresponding retinal-image points into space and the ray intersections are determined. The intersections yield the coordinates of the predicted percept. We develop the mathematics of this model. In many cases its predictions are close to what viewers perceive. There are three important cases, however, in which the model fails: (1) when the viewer's head is rotated about a vertical axis relative to the stereo display (yaw rotation); (2) when the head is rotated about a forward axis (roll rotation); (3) when there is a mismatch between the camera convergence and the way in which the stereo images are displayed. In these cases, most rays from corresponding retinal-image points do not intersect, so the standard model cannot provide an estimate for the 3D percept. Nonetheless, viewers in these situations have coherent 3D percepts, so the visual system must use another method to estimate 3D structure. We show that the non-intersecting rays generate vertical disparities in the retinal images that do not arise otherwise. Findings in vision science show that such disparities are crucial signals in the visual system's interpretation of stereo images. We show that a model that incorporates vertical disparities predicts the percepts associated with improper viewing of stereoscopic displays. Improving the model of misperceptions will aid the design and presentation of 3D displays.
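As a sketch of the standard geometric model described above (assuming two pinhole eyes and a single pair of corresponding retinal points), the code below back-projects a ray from each eye and takes the predicted percept to be the midpoint of the closest-approach segment of the two rays, a common least-squares convention rather than the paper's exact formulation. When the rays fail to intersect, as in the yaw, roll, and convergence-mismatch cases above, the returned miss distance is nonzero, which is the situation that gives rise to the vertical disparities discussed in the abstract. The eye positions and target point are hypothetical.

```python
import numpy as np

def predicted_percept(o1, d1, o2, d2):
    """Back-projection model: closest point of the two eye rays o_i + t_i d_i.

    Returns the midpoint of the closest-approach segment (the predicted 3D
    percept) and the miss distance between the rays, which is zero only when
    the back-projected rays truly intersect.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares normal equations for the ray parameters [t1, t2]
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return (p1 + p2) / 2.0, float(np.linalg.norm(p1 - p2))

# Hypothetical viewer: eyes 6.4 cm apart, corresponding points back-project
# toward a target 0.5 m in front of the head (proper viewing, rays intersect)
left_eye = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([0.032, 0.0, 0.0])
target = np.array([0.01, 0.02, 0.5])
percept, miss = predicted_percept(left_eye, target - left_eye,
                                  right_eye, target - right_eye)
print(percept, miss)   # percept ~= target, miss ~= 0 for this geometry
```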
