Results 1 - 3 of 3
1.
Opt Express ; 28(20): 29788-29804, 2020 Sep 28.
Article in English | MEDLINE | ID: mdl-33114870

ABSTRACT

We explore the feasibility of implementing stereoscopy-based 3D images with an eye-tracking-based light-field display and actual head-up display optics for automotive applications. We translate the driver's eye position into the virtual eyebox plane via a "light-weight" equation to replace the actual optics with an effective lens model, and we implement a light-field rendering algorithm using the model-processed eye-tracking data. Furthermore, our experimental results with a prototype closely match our ray-tracing simulations in terms of designed viewing conditions and low-crosstalk margin width. The prototype successfully delivers virtual images with a field of view of 10° × 5° and static crosstalk of <1.5%.
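
The abstract only names the "light-weight" equation and the effective lens model; the paper's actual optics are not reproduced here. Below is a minimal, hypothetical Python sketch of the general idea: stand in for the head-up display optics with a single effective thin lens and map a tracked eye position to the virtual eyebox plane by lateral magnification. The focal length, distance, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch only: a single thin lens stands in for the HUD
# mirror/combiner optics; the numbers below are assumed, not from the paper.
FOCAL_LENGTH_MM = 250.0   # effective focal length of the lens model (assumed)
EYE_TO_LENS_MM = 800.0    # distance from the driver's eye to the effective lens (assumed)

def eye_to_virtual_eyebox(eye_xy_mm):
    """Map a tracked eye position (x, y) in the real eyebox to the virtual
    eyebox plane using the thin-lens lateral magnification m = f / (f - d_o)."""
    magnification = FOCAL_LENGTH_MM / (FOCAL_LENGTH_MM - EYE_TO_LENS_MM)
    return magnification * np.asarray(eye_xy_mm, dtype=float)

# Example: a tracked eye offset of 10 mm right and 5 mm up.
print(eye_to_virtual_eyebox([10.0, 5.0]))
```

The mapped position would then drive the light-field rendering in place of a full ray trace through the real optics, which is the role the abstract assigns to the effective lens model.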

2.
Opt Express ; 26(16): 20233, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-30119336

ABSTRACT

In this paper, we present an autostereoscopic 3D display using a directional subpixel rendering algorithm in which clear left and right images are produced in real time from the viewer's 3D eye positions. To maintain 3D image quality over a wide viewing range, we designed an optical layer that generates a uniformly distributed light field. The proposed 3D rendering method is simple, and the processing of each pixel can be performed independently in parallel computing environments. To demonstrate the effectiveness of our display system, we implemented a 31.5" 3D monitor and a 10.1" 3D tablet prototype, in which the 3D rendering is processed on a GPU and an FPGA board, respectively.
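
The abstract describes the rendering only at a high level (per-subpixel, parallel, driven by tracked eye positions). The toy Python sketch below illustrates one common way such directional subpixel rendering can be organized: each pixel column gets a phase under the optical layer and is filled from the left or right image depending on which tracked eye that phase is closer to. The pitch, slant offset, and normalized eye coordinates are placeholder assumptions, not the paper's parameters.

```python
import numpy as np

def render_directional(left_img, right_img, eye_left_x, eye_right_x,
                       pitch_px=6.0, slant_offset=0.0):
    """Toy directional rendering: assign each pixel column to the left or
    right view based on which eye its optical-layer phase is closer to.

    left_img, right_img: H x W x 3 arrays; eye_*_x: viewer eye positions
    expressed in the same normalized phase units (assumed convention).
    """
    h, w, _ = left_img.shape
    cols = np.arange(w)
    # Phase of each column under the optical layer (pitch given in pixels).
    phase = ((cols + slant_offset) % pitch_px) / pitch_px   # in [0, 1)
    # Pick the view whose eye phase is nearer (wrap-around ignored in this toy).
    use_left = np.abs(phase - eye_left_x % 1.0) < np.abs(phase - eye_right_x % 1.0)
    return np.where(use_left[None, :, None], left_img, right_img)
```

Because each column's decision depends only on its own phase and the two eye positions, the loop-free formulation above maps directly onto the per-pixel parallelism (GPU/FPGA) mentioned in the abstract.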

3.
Opt Lett ; 39(1): 166-9, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24365849

ABSTRACT

We present a method to enhance the depth quality of a time-of-flight (ToF) camera without additional devices or hardware modifications. By controlling the turn-off patterns of the camera's LEDs, we obtain depth and normal maps simultaneously. Sixteen subphase images are acquired by varying the gate-pulse timing and the light emission pattern of the camera. The subphase images allow us to obtain a normal map, which is combined with the depth map to recover fine depth details that conventional ToF cameras typically cannot capture. With the proposed method, the mean absolute difference between the measured and laser-scanned depth maps decreased from 4.57 to 3.77 mm.
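
The abstract does not spell out how the normal map and depth map are combined. As a rough, assumed illustration of such a fusion step, the sketch below keeps the low-frequency shape from the ToF depth map and injects high-frequency detail obtained by crudely integrating the surface gradients implied by the normal map; the paper's actual combination method may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_depth_with_normals(depth_mm, normals, sigma=5.0):
    """Toy fusion: low-frequency shape from the ToF depth map plus
    high-frequency detail derived from the normal map (illustrative only).

    depth_mm: H x W depth map; normals: H x W x 3 unit normals.
    """
    # Gradients implied by the normals: dz/dx = -nx/nz, dz/dy = -ny/nz.
    nz = np.clip(normals[..., 2], 1e-3, None)
    gx = -normals[..., 0] / nz
    gy = -normals[..., 1] / nz
    # Crude integration of the gradients via cumulative sums along each axis.
    detail = 0.5 * (np.cumsum(gx, axis=1) + np.cumsum(gy, axis=0))
    # High-pass the integrated detail, low-pass the measured depth, combine.
    detail_hp = detail - gaussian_filter(detail, sigma)
    depth_lp = gaussian_filter(depth_mm, sigma)
    return depth_lp + detail_hp
```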
