Results 1-20 of 81
1.
Opt Express; 32(6): 9857-9866, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38571210

ABSTRACT

Three-dimensional (3D) light field displays (LFDs) with dense views can provide smooth motion parallax to the human eye. Increasing the number of views, however, widens the lens pitch and thereby reduces the view resolution. In this paper, an approach to smoothing motion parallax by optimizing the divergence angle of the light beam (DALB) is proposed for 3D LFDs with a narrow pitch. The DALB is controlled through the lens design. A views-fitting optimization algorithm is established based on a mathematical model relating the DALB to the view distribution, and the lens is then reverse-designed from the optimization results. A co-designed convolutional neural network (CNN) is used to implement the algorithm. Optical experiments show that a 3D image with smooth motion parallax is achievable with the proposed method.

2.
Opt Express; 32(7): 11296-11306, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38570980

ABSTRACT

Tabletop three-dimensional light field displays are a compelling display technology that can simultaneously provide stereoscopic vision to multiple viewers surrounding the device. However, if a flat-panel light field display is simply laid horizontally and viewed from directly above, the visual frustum is tilted and 3D content extending beyond the display panel becomes invisible; the large oblique viewing angle also introduces serious aberrations. In this paper, we demonstrate what we believe to be a new vertically spliced light field cave display system with extended-depth content. Separate optimization of the different compound lens arrays attenuates the aberrations at the different oblique viewing angles, and a local heating fitting method ensures the accuracy of the fabrication process. The image coding method and the correction of the multiple viewpoints realize the correct construction of spliced voxels. In the experiment, a high-definition, precisely spliced 3D city terrain scene is demonstrated on the prototype with a correct oblique perspective over a 100° horizontal viewing range. We envision that this research will inspire future immersive large-scale glasses-free virtual reality display technologies.

3.
Opt Express; 31(11): 18017-18025, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37381520

ABSTRACT

Image visual quality is of fundamental importance for three-dimensional (3D) light-field displays. The pixels of a light-field display are enlarged by the imaging of the light-field system, which increases the graininess of the image and leads to a severe decline in edge smoothness and overall image quality. In this paper, a joint optimization method is proposed to minimize the "sawtooth edge" phenomenon of images reconstructed by light-field display systems. In the joint optimization scheme, neural networks simultaneously optimize the point spread functions of the optical components and the elemental images, and the optical components are designed from the results. Simulations and experimental data show that a less grainy 3D image is achievable with the proposed joint edge-smoothing method.
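
To make the joint optimization scheme concrete, here is a minimal sketch in the same spirit, not the authors' implementation: a differentiable Gaussian kernel stands in for the optical PSF, and its width is optimized jointly with the elemental image so that the displayed (blurred) result matches a smooth-edged target. The Gaussian PSF model and all sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_psf(log_sigma, size=11):
    """Differentiable Gaussian kernel standing in for the optical PSF."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-ax**2 / (2 * torch.exp(log_sigma)**2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

target = torch.rand(1, 1, 64, 64)                  # stand-in ideal view
ei = torch.rand(1, 1, 64, 64, requires_grad=True)  # elemental image
log_sigma = torch.tensor(0.0, requires_grad=True)  # PSF parameter
opt = torch.optim.Adam([ei, log_sigma], lr=1e-2)

for _ in range(300):
    shown = F.conv2d(ei, gaussian_psf(log_sigma), padding=5)
    loss = F.mse_loss(shown, target)   # displayed result vs. ideal
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item(), torch.exp(log_sigma).item())
```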

4.
Opt Express; 31(2): 1125-1140, 2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36785154

ABSTRACT

Real-time dense view synthesis based on three-dimensional (3D) reconstruction of real scenes remains a challenge for 3D light-field display. Reconstructing an entire model and then synthesizing the target views by volume rendering is time-consuming. To address this issue, the Light-field Visual Hull (LVH) is presented with free-viewpoint texture mapping for 3D light-field display; it directly produces synthetic images through real-time 3D reconstruction of real scenes captured by forty free-viewpoint RGB cameras. An end-to-end subpixel calculation procedure for the synthetic image is demonstrated, which defines a rendering ray for each subpixel based on light-field image coding. During ray propagation, only the essential spatial point of the target model is located for the corresponding subpixel, by projecting the frontmost point of the ray to all the free viewpoints, and the color of each subpixel is identified in one pass. A dynamic free-viewpoint texture mapping method is proposed to resolve the correct texture across the free-viewpoint cameras. To improve efficiency, only the visible 3D positions and textures that contribute to the synthetic image are calculated, using backward ray tracing rather than computing the entire 3D model and generating all elemental images. In addition, an incremental calibration method that divides the cameras into groups is proposed to guarantee accuracy. Experimental results show the validity of the method: all rendered views are analyzed to justify the texture mapping method, and the PSNR improves by an average of 11.88 dB. Finally, the LVH achieves a natural and smooth viewing effect at 4K resolution and a frame rate of 25-30 fps with a large viewing angle.
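
The per-subpixel rendering-ray definition rests on light-field image coding, i.e., a fixed mapping from each display subpixel to the view it samples. A common slanted-lenticular form of that mapping (van Berkel's formula) is sketched below; the paper's own coding may differ, and the pitch, slant, and view count here are assumed values.

```python
import numpy as np

def subpixel_view_indices(width, height, n_views, lens_pitch_px, slant):
    """Map every RGB subpixel to the view it samples, using the classic
    slanted-lenticular coding formula (van Berkel). lens_pitch_px is the
    lens pitch measured in subpixels; slant is the tangent of the lens
    slant angle. All parameter values here are illustrative."""
    x = np.arange(width * 3).reshape(1, -1)   # 3 subpixels (R,G,B)/pixel
    y = np.arange(height).reshape(-1, 1)
    # fractional position of each subpixel under its lens
    phase = (x - slant * y) % lens_pitch_px
    return np.floor(phase * n_views / lens_pitch_px).astype(int)

# Example: 4K panel, 40 views, hypothetical pitch and slant
views = subpixel_view_indices(3840, 2160, n_views=40,
                              lens_pitch_px=13.5, slant=1/6)
print(views.shape, views.min(), views.max())  # (2160, 11520) 0 39
```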

5.
Opt Express; 31(18): 29664-29675, 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37710762

ABSTRACT

With the development of three-dimensional (3D) light-field display technology, 3D scenes with correct location and depth information can be perceived without wearing any external device. Traditional portrait stylization methods can only generate 2D stylized portrait images, and it is difficult for them to produce high-quality stylized portrait content for 3D light-field displays, which require content with accurate depth and spatial information that 2D images alone cannot provide. New portrait stylization techniques are therefore needed to meet the requirements of 3D light-field displays. A portrait stylization method for 3D light-field displays is proposed that maintains the consistency of the dense views of the light-field display when the 3D stylized portrait is generated. An example-based portrait stylization method is used to migrate the designated style image to the portrait image, which prevents the loss of contour information in 3D light-field portraits. To minimize the diversity in color information and further constrain the contour details of the portraits, a Laplacian loss function is introduced into the pre-trained deep learning model. The three-dimensional representation of the stylized portrait scene is reconstructed, and the stylized 3D light-field image of the portrait is generated with the mask-guided light-field coding method. Experimental results demonstrate the effectiveness of the proposed method, which can use real portrait photos to generate high-quality 3D light-field portrait content.

6.
Opt Express; 31(20): 32273-32286, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37859034

ABSTRACT

Tabletop light field displays are a compelling display technology that offers stereoscopic vision and can present annular viewpoint distributions to multiple viewers around the display device. When a lens array is employed to realize an integral-imaging tabletop light field display, there is a critical trade-off between angular resolution and spatial resolution. Moreover, as the viewers are positioned around the device, the central viewing range of the reconstructed 3D images is wasted. In this paper, we explore what we believe to be a new method for realizing tabletop flat-panel light field displays that improves both pixel utilization efficiency and the angular resolution of the tabletop 3D display. A 360-degree directional micro prism array is newly designed to refract the collimated light rays to different viewing positions and form viewpoints, so that a uniform 360-degree annular viewpoint distribution is accurately formed. In the experiment, a micro prism array sample is fabricated to verify the performance of the proposed tabletop flat-panel light field display system. One hundred viewpoints are uniformly distributed over the 360-degree viewing area, providing a full-color 3D scene with smooth parallax.
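
The viewpoint-forming step is ordinary prism refraction. As a rough sketch of the geometry, the deviation a micro prism imposes on a normally incident collimated ray follows from Snell's law at the exit face; the refractive index and apex angles below are assumptions, not the fabricated sample's parameters.

```python
import numpy as np

def prism_deviation(apex_angle_deg, n=1.49):
    """Deviation of a collimated ray entering a micro prism at normal
    incidence on the first face and refracting at the exit face, which
    is tilted by the apex angle A. Snell: n*sin(A) = sin(A + deviation)."""
    A = np.radians(apex_angle_deg)
    exit_angle = np.arcsin(n * np.sin(A))  # angle from exit-face normal
    return np.degrees(exit_angle - A)      # deviation from incident ray

# Hypothetical sweep: which apex angle steers light to a given viewpoint?
for A in (10, 20, 30, 40):
    print(f"A = {A:2d} deg -> deviation = {prism_deviation(A):5.2f} deg")
```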

7.
Opt Express; 31(12): 20505-20517, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37381444

ABSTRACT

A true-color light-field display system with a large depth of field (DOF) is demonstrated. Reducing crosstalk between viewpoints and increasing viewpoint density are the keys to realizing a light-field display system with a large DOF. The aliasing and crosstalk of light beams in the light control unit (LCU) are reduced by adopting a collimated backlight and reversely placing the aspheric cylindrical lens array (ACLA). One-dimensional (1D) light-field encoding of halftone images increases the number of controllable beams within the LCU and improves viewpoint density, but it also decreases the color depth of the light-field display system. Joint modulation of the size and arrangement of halftone dots (JMSAHD) is therefore used to increase color depth. In the experiment, a three-dimensional (3D) model was constructed using halftone images generated by JMSAHD, and a light-field display system with a viewpoint density of 1.45 (i.e., 1.45 viewpoints per degree of view) and a DOF of 50 cm was achieved over a 100° viewing angle.

8.
Appl Opt; 62(16): E83-E91, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37706893

ABSTRACT

In this paper, a photonic crystal fiber (PCF) sensor based on the surface plasmon resonance (SPR) effect is proposed for refractive index (RI) detection. We design a D-shaped polished PCF structure consisting of air holes arranged in a hexagonal lattice, with a silver film coated on the middle channel of the polished surface. The finite element method is used to analyze the propagation characteristics of the proposed D-shaped SPR-PCF sensor. Simulation results show that the sensor has a maximum wavelength sensitivity of 30,000 nm/RIU, an average wavelength sensitivity of 6785.71 nm/RIU, and a maximum resolution of 3.33×10⁻⁶ RIU in the RI range of 1.22-1.36. Owing to its high wavelength sensitivity over this RI range, the proposed D-shaped SPR-PCF sensor is suitable for applications such as water contamination detection, liquid concentration measurement, and food safety monitoring.
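
The quoted figures follow from the standard SPR figures of merit: wavelength sensitivity S = Δλ_peak/Δn and resolution R = Δn·Δλ_min/Δλ_peak, with the instrument wavelength resolution Δλ_min conventionally taken as 0.1 nm. A quick check with a hypothetical 300 nm peak shift per 0.01 RIU change reproduces the headline numbers:

```python
# Standard SPR-sensor figures of merit; the resonance-shift numbers
# below are illustrative, not the paper's simulated data.
def wavelength_sensitivity(d_lambda_nm, d_n):
    """S = delta(lambda_peak) / delta(n), in nm/RIU."""
    return d_lambda_nm / d_n

def resolution(d_n, d_lambda_min_nm, d_lambda_peak_nm):
    """R = delta(n) * delta(lambda_min) / delta(lambda_peak), in RIU.
    delta(lambda_min) is the spectrometer's wavelength resolution,
    conventionally taken as 0.1 nm."""
    return d_n * d_lambda_min_nm / d_lambda_peak_nm

S = wavelength_sensitivity(300.0, 0.01)   # 30000 nm/RIU
R = resolution(0.01, 0.1, 300.0)          # 3.33e-06 RIU
print(f"S = {S:.0f} nm/RIU, R = {R:.2e} RIU")
```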

9.
Opt Express; 30(22): 40087-40100, 2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36298947

ABSTRACT

Holographic display is an ideal technology for near-eye display in virtual and augmented reality applications because it can provide all depth perception cues. However, existing computer-generated hologram (CGH) methods sacrifice depth performance for real-time calculation. In this paper, a volume representation and an improved ray tracing algorithm are proposed for real-time CGH generation with enhanced depth performance. Using the single fast Fourier transform (S-FFT) method, the volume representation keeps the calculation burden low and is efficient for diffraction calculation on the graphics processing unit (GPU). The improved ray tracing algorithm accounts for accurate depth cues in complex 3D scenes with reflection and refraction, which are represented by adding extra shapes to the volume. Numerical evaluation verifies the depth precision, and experiments show that the proposed method provides a real-time interactive holographic display with accurate depth and a large depth range: a CGH of a 3D scene with 256 depth values is calculated at 30 fps, with a depth range of hundreds of millimeters, and depth cues of reflected and refracted images are reconstructed correctly. The proposed method significantly outperforms existing fast methods by achieving a more realistic 3D holographic display with ideal depth performance and real-time calculation at the same time.
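
The S-FFT step the method relies on is the classic single-FFT Fresnel diffraction integral. A minimal sketch, with illustrative wavelength, distance, and sampling rather than the paper's display parameters:

```python
import numpy as np

def fresnel_sfft(u0, wavelength, z, pitch):
    """Single-FFT (S-FFT) Fresnel diffraction to distance z.
    u0: complex input field sampled at `pitch`; the output sampling
    interval becomes wavelength*z/(N*pitch)."""
    N = u0.shape[0]
    k = 2 * np.pi / wavelength
    x0 = (np.arange(N) - N / 2) * pitch
    X0, Y0 = np.meshgrid(x0, x0)
    # quadratic phase inside the FFT
    inner = u0 * np.exp(1j * k * (X0**2 + Y0**2) / (2 * z))
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(inner)))
    # output-plane coordinates and quadratic phase outside the FFT
    dx = wavelength * z / (N * pitch)
    x = (np.arange(N) - N / 2) * dx
    X, Y = np.meshgrid(x, x)
    return (np.exp(1j * k * z) / (1j * wavelength * z)
            * np.exp(1j * k * (X**2 + Y**2) / (2 * z)) * U)

# Illustrative: propagate a square aperture 0.2 m at 532 nm
u0 = np.zeros((1024, 1024), complex)
u0[448:576, 448:576] = 1.0
u1 = fresnel_sfft(u0, 532e-9, 0.2, 8e-6)
print(abs(u1).max())
```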

10.
Opt Express; 30(10): 17577-17590, 2022 May 09.
Article in English | MEDLINE | ID: mdl-36221577

ABSTRACT

Accurate, fast, and reliable modeling and optimization methods play a crucial role in designing light field display (LFD) systems. Here, an automatic co-design method for LFD systems based on simulated annealing and visual simulation is proposed. The processes of LFD content acquisition and optical reconstruction are modeled and simulated, and an objective function evaluating the display effect of the LFD system is established from the simulation results. Simulated annealing is then used to find the LFD system parameters that maximize the objective function. The validity of the proposed method is confirmed through optical experiments.
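
A minimal simulated-annealing loop of the kind described, maximizing a stand-in objective in place of the visual-simulation score; the cooling schedule and perturbation scale are assumed choices:

```python
import numpy as np

def simulated_annealing(objective, x0, t0=1.0, cooling=0.995,
                        steps=5000, scale=0.1, rng=None):
    """Maximize `objective` over a parameter vector by simulated
    annealing. Schedule and step size are illustrative choices."""
    rng = rng or np.random.default_rng(0)
    x, f = np.asarray(x0, float), objective(x0)
    best_x, best_f, t = x.copy(), f, t0
    for _ in range(steps):
        cand = x + rng.normal(scale=scale, size=x.shape)
        fc = objective(cand)
        # accept improvements always, worse moves with Boltzmann prob.
        if fc > f or rng.random() < np.exp((fc - f) / t):
            x, f = cand, fc
            if f > best_f:
                best_x, best_f = x.copy(), f
        t *= cooling
    return best_x, best_f

# Stand-in for the simulated display-quality score: a smooth bump
score = lambda p: -np.sum((p - np.array([0.3, 1.2])) ** 2)
print(simulated_annealing(score, [0.0, 0.0]))  # -> near (0.3, 1.2)
```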

11.
Opt Express; 30(24): 44201-44217, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36523100

ABSTRACT

Three-dimensional (3D) light-field displays can provide an immersive visual experience, which has attracted significant attention. However, generating high-quality 3D light-field content of the real world remains a challenge because it is difficult to capture dense high-resolution viewpoints of a real scene with a camera array. CNN-based novel view synthesis can generate dense high-resolution viewpoints from sparse inputs but suffers from high computational resource consumption, low rendering speed, and a limited camera baseline. Here, a two-stage virtual view synthesis method based on a cutoff-NeRF and 3D voxel rendering is presented, which can quickly synthesize dense novel views with smooth parallax and 3D images at a resolution of 7680 × 4320 for 3D light-field display. In the first stage, an image-based cutoff-NeRF implicitly represents the distribution of scene content and improves the quality of the virtual views. In the second stage, a 3D voxel-based image rendering and coding algorithm quantizes the scene content distribution learned by the cutoff-NeRF to render high-resolution virtual views quickly and output high-resolution 3D images. A coarse-to-fine 3D voxel rendering method is proposed to improve the accuracy of the voxel representation, and a 3D voxel-based off-axis pixel encoding method speeds up 3D image generation. Finally, we built a sparse-view dataset to analyze the effectiveness of the proposed method. Experimental results demonstrate that the method can quickly synthesize high-resolution novel views and 3D images in real 3D scenes and physical simulation environments: the PSNR of the virtual views is about 29.75 dB, the SSIM is about 0.88, and synthesizing an 8K 3D image takes about 14.41 s. We believe that this fast, high-resolution virtual viewpoint synthesis method can effectively advance the application of 3D light-field displays.
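
The voxel rendering stage builds on the standard NeRF volume-rendering quadrature, C = Σᵢ Tᵢ(1 − exp(−σᵢδᵢ))cᵢ with Tᵢ = exp(−Σ_{j<i} σⱼδⱼ). A sketch of that quadrature along one ray, with random stand-in densities and colors:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Standard NeRF quadrature along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigmas * deltas)
    trans = np.exp(-np.concatenate(([0.0],
                                    np.cumsum(sigmas * deltas)[:-1])))
    weights = trans * alpha
    return weights @ colors, weights  # rendered RGB, per-sample weights

rng = np.random.default_rng(0)
n = 64  # samples along the ray (illustrative)
rgb, w = render_ray(rng.uniform(0, 5, n),
                    rng.uniform(0, 1, (n, 3)),
                    np.full(n, 0.02))
print(rgb, w.sum())
```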

12.
Opt Express; 30(12): 22260-22276, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-36224928

ABSTRACT

Three-dimensional (3D) light-field display has achieved promising improvements in recent years. However, since dense-view images cannot be captured quickly in real-world 3D scenes, real-time 3D light-field display remains challenging to achieve in real scenes, especially for high-resolution 3D display. Here, a real-time dense-view 3D light-field display method is proposed based on image color correction and self-supervised optical flow estimation, realizing high quality and a high frame rate simultaneously. A sparse camera array first captures sparse-view images. To eliminate the color deviation among the sparse views, the imaging process of the camera is analyzed, and a practical multi-layer perceptron (MLP) network performs color calibration. Given sparse views with consistent color, the optical flow is estimated at high speed by a lightweight convolutional neural network (CNN) that learns the flow from input image pairs in a self-supervised manner. Dense-view images are then synthesized by an inverse warping operation. Quantitative and qualitative experiments evaluate the feasibility of the proposed method. Experimental results show that over 60 dense-view images at a resolution of 1024 × 512 can be generated from 11 input views at a frame rate above 20 fps, which is 4× faster than the previous optical flow estimation methods PWC-Net and LiteFlowNet3. Finally, a large viewing angle and high-quality 3D light-field display at 3840 × 2160 resolution is achieved in real time.
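
The final inverse-warp step can be sketched as bilinear backward warping along the estimated flow; scaling the flow by a fractional view position is an assumption about how the intermediate views are spaced:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(src, flow, alpha):
    """Synthesize the view at fractional position `alpha` (0 = source)
    by sampling src at (x, y) + alpha * flow, bilinearly interpolated.
    src: (H, W, 3); flow: (H, W, 2) with (dx, dy) toward the far view."""
    h, w = src.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(float)
    xs = x + alpha * flow[..., 0]
    ys = y + alpha * flow[..., 1]
    return np.stack([map_coordinates(src[..., c], [ys, xs], order=1,
                                     mode='nearest') for c in range(3)],
                    axis=-1)

# Illustrative: three intermediate views from one pair's estimated flow
src = np.random.rand(512, 1024, 3)
flow = np.random.randn(512, 1024, 2)   # stand-in for the CNN's output
views = [backward_warp(src, flow, a) for a in (0.25, 0.5, 0.75)]
print(views[0].shape)
```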

13.
J Opt Soc Am A Opt Image Sci Vis; 39(12): 2131-2141, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36520728

ABSTRACT

Light field (LF) image super-resolution (SR) can improve the limited spatial resolution of LF images by using complementary information from different perspectives. However, current LF image SR methods use only RGB data to implicitly exploit the information among perspectives, overlooking both the information lost in converting raw data to RGB and the explicit use of structure information. To address the first issue, a data generation pipeline is developed to collect LF raw data for LF image SR. In addition, to make full use of the multi-view information, an end-to-end convolutional neural network architecture (LF-RawSR) is proposed for LF image SR. Specifically, an aggregation module first fuses the angular information using a volume transformer with a plane sweep volume; the aggregated feature is then warped to all LF views by a cross-view transformer to exploit non-local dependencies. Experimental results demonstrate that our method outperforms existing state-of-the-art methods at comparable computational cost, restoring fine details and clear structures.

14.
J Opt Soc Am A Opt Image Sci Vis; 39(12): 2316-2324, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36520753

ABSTRACT

Due to the limited pixel pitch of the spatial light modulator (SLM), the field of view (FOV) is insufficient for binocular observation. Here, an optimized light-control method for a binocular holographic three-dimensional (3D) display system based on a holographic optical element (HOE) is proposed. The synthetic phase-only hologram uploaded to the SLM is generated with layer-based angular spectrum diffraction theory, and two different reference waves are introduced to separate the left and right views of the 3D scene. The HOE, with directional light-control parameters, guides the binocular information into the left-eye and right-eye viewing zones simultaneously. Optical experiments verify that the proposed system achieves a binocular holographic augmented reality 3D effect with real physical depth, eliminating the accommodation-vergence conflict and visual fatigue. For each perspective, the FOV is 8.7° when the focal length of the HOE is 10 cm, and the width of the viewing zone is 2.3 cm at a viewing distance of 25 cm.
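
The pixel-pitch limit on the FOV comes from the grating equation: an SLM with pixel pitch p diffracts light at most to θ = arcsin(λ/(2p)), giving FOV = 2θ. A quick check with assumed SLM parameters (not this system's specifications):

```python
import numpy as np

def slm_fov_deg(wavelength, pixel_pitch):
    """Maximum diffraction FOV of an SLM: 2 * arcsin(lambda / (2p))."""
    return np.degrees(2 * np.arcsin(wavelength / (2 * pixel_pitch)))

# Illustrative figures: a 3.74 um pitch SLM at 532 nm -> about 8.2 deg,
# showing why the bare FOV cannot cover both eyes without the HOE.
print(f"{slm_fov_deg(532e-9, 3.74e-6):.1f} deg")
```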

15.
Appl Opt; 61(7): D7-D14, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35297823

ABSTRACT

Stereo depth estimation is an efficient method to perceive three-dimensional structure in real scenes. In this paper, we propose what is, to the best of our knowledge, a novel self-supervised method to extract depth information by learning bi-directional pixel movement with convolutional neural networks (CNNs). Given left and right views, CNNs learn the task of middle-view synthesis, perceiving the bi-directional pixel movement from the left and right views to the middle view. The pixel-movement information is stored in the features after the CNNs are trained; several convolutional layers then extract this information to estimate a depth map of the given scene. Experiments show that the proposed method provides a high-quality depth map using only a color image as the supervisory signal.
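
Once per-pixel movement between rectified views (disparity) is known, depth follows by triangulation, Z = f·B/d for focal length f, baseline B, and disparity d. A sketch with hypothetical camera parameters:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Triangulate depth from horizontal pixel disparity:
    Z = f * B / d (f in pixels, B in meters, d in pixels)."""
    return focal_px * baseline_m / np.maximum(disparity_px, eps)

# Hypothetical rectified stereo pair: f = 1200 px, B = 10 cm
d = np.array([60.0, 30.0, 12.0])            # nearer objects move more
print(disparity_to_depth(d, 1200.0, 0.10))  # -> [ 2.  4. 10.] meters
```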


Subjects
Movement; Neural Networks, Computer
16.
Opt Express; 29(24): 40125-40145, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34809361

ABSTRACT

For a floating three-dimensional (3D) display system using a prism-type retroreflector, non-retroreflected light and a blurred 3D image source are two key causes of deterioration in image quality. In the present study, ray tracing is used to analyze the light distribution of a retroreflector at different incident angles. Based on this analysis, a telecentric retroreflector (TCRR) is proposed to suppress non-retroreflected light without sacrificing the viewing angle, and a contrast transfer function (CTF) is used to evaluate its optical performance. To improve the 3D image source, the relationship between the root mean square (RMS) spot size of the voxels and 3D image quality is discussed, and an aspheric lens array is designed to reduce aberrations. Computational simulations show that the structural similarity (SSIM) of the 3D image source increases to 0.9415. An experimental prototype combining the TCRR and the optimized 3D image source is then built. Experimental analysis demonstrates that the proposed method suppresses non-retroreflected light and improves the 3D image source; in particular, a clear floating 3D image with a floating distance of 70 mm and a viewing angle of 50° is achieved.
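
The voxel RMS used here is the standard RMS spot radius of a ray-traced spot diagram. A sketch with random stand-in ray-intersection points:

```python
import numpy as np

def rms_spot_radius(pts):
    """RMS radius of a spot diagram: sqrt(mean |p - centroid|^2)
    over the ray-surface intersection points, shape (N, 2)."""
    centered = pts - pts.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

# Stand-in spot diagram for one voxel (units: mm); a smaller RMS
# radius corresponds to a sharper, less blurred 3D image source.
rays = np.random.default_rng(1).normal(scale=0.05, size=(500, 2))
print(f"RMS spot radius = {rms_spot_radius(rays):.4f} mm")
```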

17.
Opt Express; 29(23): 37862-37876, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34808851

ABSTRACT

Three-dimensional (3D) light-field display plays a vital role in realizing 3D display. However, real-time, high-quality 3D light-field display is difficult because super-high-resolution 3D light-field images are hard to generate in real time. Although extensive research has been carried out on fast 3D light-field image generation, no existing study achieves real-time 3D image generation and display at super high resolutions such as 7680×4320. To fulfill real-time 3D light-field display with super high resolution, a two-stage 3D image generation method based on path tracing and image super-resolution (SR) is proposed, which takes less time to render 3D images than previous methods. In the first stage, path tracing generates low-resolution 3D images with sparse views based on Monte Carlo integration. In the second stage, a lightweight SR algorithm based on a generative adversarial network (GAN) up-samples the low-resolution 3D images to high-resolution 3D images of dense views with photo-realistic image quality. To implement the second stage efficiently and effectively, the elemental images (EIs) are super-resolved individually for better image quality and geometric accuracy, and a foreground selection scheme based on ray casting is developed to improve rendering performance. Finally, the output EIs from the CNN are recomposed into the high-resolution 3D images. Experimental results demonstrate that real-time 3D light-field display above 30 fps at 8K resolution can be realized while the structural similarity (SSIM) remains above 0.90. It is hoped that the proposed method will contribute to the field of real-time 3D light-field display.
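
The first stage rests on Monte Carlo integration: a pixel's radiance integral is estimated as the domain size times the mean of randomly sampled radiance values, with noise shrinking as the sample count grows (hence the low-resolution, sparse-view budget). A toy example with a stand-in radiance function, not a real scene:

```python
import numpy as np

rng = np.random.default_rng(0)
radiance = lambda u: np.maximum(u, 0.0)   # stand-in one-sided lighting

for n in (4, 64, 1024, 16384):
    u = rng.uniform(-1.0, 1.0, n)              # uniform sample directions
    estimate = 2.0 * radiance(u).mean()        # (domain width) * mean
    print(f"{n:6d} samples -> {estimate:.4f}")  # true integral is 0.5
```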

18.
Opt Express; 29(21): 34035-34050, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34809202

ABSTRACT

Three-dimensional (3D) light-field displays (LFDs) suffer from a narrow viewing angle, limited depth range, and low spatial information capacity, which limit their wider application. Because the number of pixels used to construct 3D spatial information is limited, increasing the viewing angle reduces the viewpoint density, which degrades 3D performance. A solution based on a holographic functional screen (HFS) and a ladder-compound lenticular lens unit (LC-LLU) is proposed to increase the viewing angle while optimizing viewpoint utilization. The LC-LLU and HFS create 160 non-uniformly distributed viewpoints with low crosstalk, which increases the viewpoint density in the middle viewing zone and provides clear monocular depth cues; the corresponding coding method is presented as well. The optimized compound lenticular lens array balances aberration suppression against display quality. Simulations and experiments show that the proposed 3D LFD can present natural 3D images with correct perception and occlusion relationships within a 65° viewing angle.

19.
Opt Express; 29(7): 11009-11020, 2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33820222

ABSTRACT

Lens aberrations degrade the image quality and limit the viewing angle of light-field displays. In the present study, an approach to aberration reduction based on a pre-correction convolutional neural network (CNN) is demonstrated. The pre-correction CNN, built and trained on the aberrations of the lens array, transforms the elemental image array (EIA) generated by a virtual camera array into a pre-corrected EIA (PEIA). The PEIA, rather than the EIA, is presented on the liquid crystal display, so that after the optical transformation of the lens array, higher-quality 3D images are obtained. The validity of the proposed method is confirmed through simulations and optical experiments, and a light field display with a 70-degree viewing angle and improved image quality is demonstrated.
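
A minimal sketch of the pre-correction idea, under the assumption that the lens-array aberration can be modeled differentiably (a fixed blur kernel stands in for that model here): train a small CNN so that the aberrated version of its output reproduces the ideal EIA.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in differentiable aberration model: one fixed blur per channel.
# In the paper this would be built from the lens array's aberrations.
kernel = torch.ones(3, 1, 5, 5) / 25.0
def aberrate(img):
    return F.conv2d(img, kernel, padding=2, groups=3)

precorrect = nn.Sequential(  # toy pre-correction CNN
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(precorrect.parameters(), lr=1e-3)

eia = torch.rand(1, 3, 128, 128)  # stand-in elemental image array
for step in range(200):
    peia = precorrect(eia)                  # pre-corrected EIA
    loss = F.mse_loss(aberrate(peia), eia)  # displayed image vs. ideal
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```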

20.
Opt Express; 29(5): 7435-7452, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33726245

ABSTRACT

Time-multiplexed light-field displays (TMLFDs) can provide natural and realistic three-dimensional (3D) performance with a wide 120° viewing angle, offering broad potential applications in 3D electronic sand table (EST) technology. However, current TMLFDs suffer from severe crosstalk, which leads to image aliasing and distortion of the depth information. In this paper, the mechanisms underlying crosstalk in TMLFD systems are identified and analyzed. The results indicate that the specific structure of the slanted lenticular lens array (LLA) and the non-uniformity of the emergent light distribution within the lens elements are the two main sources of crosstalk. To produce clear depth perception and improve image quality, a novel ladder-type LCD sub-pixel arrangement and a compound lens with three aspheric surfaces are proposed and introduced into a TMLFD to reduce each type of crosstalk, respectively. Crosstalk simulations demonstrate the validity of the proposed methods, and structural similarity (SSIM) simulations and light-field reconstruction experiments indicate that aliasing is effectively reduced and depth quality is significantly improved over the entire viewing range. In addition, a tabletop 3D EST based on the proposed TMLFD is presented. The proposed crosstalk-reduction approaches are also compatible with other lenticular-lens-based 3D displays.
