Results 1 - 12 of 12
1.
Opt Express ; 30(18): 33208-33221, 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36242366

ABSTRACT

Waveguides have become one of the most promising optical combiners for see-through near-eye displays owing to their thinness, low weight, and high transmittance. In this study, we propose a waveguide-type near-eye display that uses a pin-mirror array and a concave reflector, offering a compact form factor, optimized image uniformity, and reduced stray light. Key issues are discussed in detail, including field of view (FOV), eye-box, resolution, depth of field (DOF), display uniformity, and stray light artifacts. We show that the DOF can be extended (compared with traditional waveguide-type near-eye displays) to alleviate the vergence-accommodation conflict (VAC) problem, and that uniformity and stray light can be improved with an optimized structure. Moreover, reflective surfaces are introduced as the input and output couplers, providing a compact form factor, an easily manufactured structure, and achromatic performance. A prototype based on the proposed method has been successfully developed, and virtual images with an extended DOF can be displayed together with the real-world scene.
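
As a rough intuition for why a pinhole-like aperture such as a pin-mirror extends the DOF, the standard thin-lens depth-of-field formulas can be evaluated at a very high effective f-number. The sketch below is a generic geometric-optics estimate; all numbers are illustrative assumptions, not parameters from the paper.

# Minimal sketch: thin-lens DOF estimate for a pinhole-like exit aperture.
def hyperfocal_mm(f_mm, n, coc_mm):
    # Hyperfocal distance for focal length f, f-number n, circle of confusion coc.
    return f_mm * f_mm / (n * coc_mm) + f_mm

def dof_mm(f_mm, n, coc_mm, focus_mm):
    # Near and far limits of acceptable sharpness when focused at focus_mm.
    h = hyperfocal_mm(f_mm, n, coc_mm)
    near = h * focus_mm / (h + (focus_mm - f_mm))
    far = h * focus_mm / (h - (focus_mm - f_mm)) if h > focus_mm - f_mm else float("inf")
    return near, far

# A 0.5 mm aperture behind a 20 mm focal length acts like f/40 (assumed values):
print(dof_mm(f_mm=20.0, n=40.0, coc_mm=0.01, focus_mm=1000.0))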


Subject(s)
Accommodation, Ocular; Equipment Design
2.
Appl Opt ; 56(22): 6059-6064, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-29047795

ABSTRACT

In integral imaging, one of the main challenges is the limited depth of field (DOF), which is mainly caused by the short focal length of the microlenses. In this paper, we propose a method to extend the DOF of a synthetic aperture integral imaging (SAII) system by applying image fusion to multi-focus elemental images captured from different perspectives. In the proposed system, a contour-based object extraction method combined with size correction is developed to resolve the size inconsistency of objects across the misaligned elemental images. All-in-focus elemental images (EIs), combining selected features of the multi-focus elemental images, are then obtained by a block-based image fusion method (see the sketch below). Finally, reconstructed images with an extended DOF are generated from the all-in-focus EIs in the SAII system. Experimental results are presented to demonstrate the feasibility of the proposed system.
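
A minimal sketch of the block-based fusion idea, assuming grayscale, pre-aligned elemental images: each block is taken from whichever focal image is sharpest there, with variance of the Laplacian as the focus measure. Block size and focus measure are illustrative choices, not the paper's exact algorithm.

import numpy as np
from scipy.ndimage import laplace

def fuse_multifocus(stack, block=16):
    # stack: (K, H, W) array of K multi-focus images of the same view.
    stack = np.asarray(stack, dtype=np.float64)
    K, H, W = stack.shape
    fused = np.zeros((H, W))
    for y in range(0, H, block):
        for x in range(0, W, block):
            tiles = stack[:, y:y+block, x:x+block]
            sharpness = [laplace(t).var() for t in tiles]  # focus measure per image
            fused[y:y+block, x:x+block] = tiles[int(np.argmax(sharpness))]
    return fused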

3.
Opt Lett ; 39(14): 4212-4, 2014 Jul 15.
Article in English | MEDLINE | ID: mdl-25121689

ABSTRACT

We propose a new off-axially distributed image sensing (ODIS) scheme that uses a wide-angle lens to reconstruct distortion-free wide-angle slice images computationally. In the proposed system, the wide-angle image sensor captures a wide-angle 3D scene, so the collected information about the 3D objects is severely distorted. To correct this distortion, we introduce a new correction process for the wide-angle lens into the computational reconstruction of ODIS. This enables us to reconstruct distortion-free wide-angle slice images for visualization of 3D objects. Experiments were carried out to verify the proposed method. To the best of our knowledge, this is the first time the use of a wide-angle lens in a multiple-perspective 3D imaging system has been described.
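
A minimal sketch of the kind of radial-distortion correction such a reconstruction requires, assuming a simple polynomial model r_d = r_u(1 + k1*r_u^2 + k2*r_u^4) with made-up coefficients; the paper's correction is tied to the actual ODIS wide-angle geometry.

import numpy as np

def undistort_radial(img, k1, k2, cx, cy):
    # Inverse warping: for every undistorted target pixel, compute where it
    # falls in the distorted source image and sample (nearest neighbour).
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xs - cx) / w, (ys - cy) / w          # normalized coordinates
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = np.clip(np.round(cx + xn * scale * w).astype(int), 0, w - 1)
    yd = np.clip(np.round(cy + yn * scale * w).astype(int), 0, h - 1)
    return img[yd, xd]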

4.
Opt Lett ; 38(16): 3162-4, 2013 Aug 15.
Article in English | MEDLINE | ID: mdl-24104676

ABSTRACT

This Letter presents an off-axially distributed image sensing (ODIS) system for three-dimensional (3D) imaging and visualization. The off-axially distributed sensing method provides both lateral and longitudinal perspectives for 3D scenes even though the sensor moves along a slanted, one-dimensional path. A 3D volume is generated from a set of recorded images by use of a computational algorithm based on ray backprojection. Preliminary experimental results are presented to illustrate the feasibility of the proposed system. To the best of our knowledge, this is the first report on 3D imaging and visualization using ODIS.
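
A minimal sketch of shift-and-average backprojection under a deliberately simplified geometry (pinhole model, purely lateral offsets): each image is shifted by the parallax of the chosen depth plane and then averaged, so objects at that depth reinforce while everything else blurs. Names and parameters are illustrative assumptions, not the paper's algorithm.

import numpy as np

def backproject(images, x_offsets_mm, depth_mm, focal_px):
    # images: list of equally sized grayscale frames from a translated sensor.
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, dx in zip(images, x_offsets_mm):
        shift = int(round(focal_px * dx / depth_mm))   # parallax at this depth
        acc += np.roll(img, -shift, axis=1)            # roll wraps at the border; fine for a sketch
    return acc / len(images)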

5.
IEEE Trans Cybern ; 53(1): 379-391, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34406954

ABSTRACT

Most existing light field saliency detection methods have achieved great success by exploiting information unique to light field data: the focus cues in focal slices. However, they process light field data slicewise, which leads to suboptimal results because the relative contributions of different regions within the focal slices are ignored. How can we comprehensively explore and integrate the focused salient regions that contribute positively to accurate saliency detection? Answering this question inspires a new insight. In this article, we propose a patch-aware network that explores light field data regionwise. First, we excavate focused salient regions with a proposed multisource learning module (MSLM), which generates a filtering strategy for integration guided by saliency, boundary, and position. Second, we design a sharpness recognition module (SRM) to refine and update this strategy and perform feature integration. With the proposed MSLM and SRM, we obtain more accurate and complete saliency maps. Comprehensive experiments on three benchmark datasets show that our method achieves competitive performance against 2-D, 3-D, and 4-D salient object detection methods. The code and results are available at https://github.com/OIPLab-DUT/IEEE-TCYB-PANet.
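
In the spirit of region-wise processing, the sketch below scores each patch of each focal slice by gradient energy and applies a softmax across slices, so sharper regions receive larger integration weights. This only illustrates the general idea; it is not the published MSLM/SRM.

import numpy as np

def patch_sharpness_weights(slices, patch=32):
    # slices: (K, H, W) stack of focal slices (grayscale).
    s = np.asarray(slices, dtype=np.float64)
    gy, gx = np.gradient(s, axis=(1, 2))
    energy = gx * gx + gy * gy
    k, h, w = s.shape
    weights = np.zeros_like(s)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            e = energy[:, y:y+patch, x:x+patch].mean(axis=(1, 2))
            z = np.exp(e - e.max())                       # softmax across slices
            weights[:, y:y+patch, x:x+patch] = (z / z.sum())[:, None, None]
    return weights  # per-slice, per-patch integration weights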

6.
IEEE Trans Image Process ; 32: 5340-5352, 2023.
Article in English | MEDLINE | ID: mdl-37729570

ABSTRACT

Depth data, whose discriminative power is concentrated in spatial location, are advantageous for accurate salient object detection (SOD). Existing RGB-D SOD methods have focused on how best to fuse depth information with RGB data in a complementary way, and have achieved great success. In this work, we attempt a more ambitious use of depth information by injecting the depth maps directly into the encoder of a single-stream model. Specifically, we propose a depth injection framework (DIF) equipped with an injection scheme (IS) and a depth injection module (DIM). The proposed IS enhances the semantic representation of the RGB features in the encoder by injecting depth maps directly into the high-level encoder blocks, while keeping the model computationally convenient. The proposed DIM acts as a bridge between the depth maps and the hierarchical RGB features of the encoder, helping the two modalities complement and guide each other and thus yielding a strong fusion effect. Experimental results demonstrate that our method achieves state-of-the-art performance on six RGB-D datasets. Moreover, it performs well on RGB-T SOD, and the DIM can be applied readily to single-stream SOD models and to transformer architectures, demonstrating strong generalization ability.
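
A minimal PyTorch sketch of the general idea of injecting a raw depth map into a high-level block of a single-stream encoder: resize the depth map to the feature resolution, embed it, and fuse it residually with the RGB features. Channel sizes and fusion layers are assumptions, not the paper's DIM.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthInjection(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Conv2d(1, channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat, depth):
        # feat: (B, C, h, w) high-level RGB features; depth: (B, 1, H, W) raw map.
        d = F.interpolate(depth, size=feat.shape[-2:], mode="bilinear",
                          align_corners=False)
        return feat + self.fuse(torch.cat([feat, self.embed(d)], dim=1))

feat = torch.randn(1, 256, 28, 28)
depth = torch.randn(1, 1, 224, 224)
print(DepthInjection(256)(feat, depth).shape)   # torch.Size([1, 256, 28, 28])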

7.
IEEE Trans Image Process ; 31: 6152-6163, 2022.
Article in English | MEDLINE | ID: mdl-36112561

ABSTRACT

Previous 2D saliency detection methods extract salient cues from a single view and directly predict the expected results. Neither traditional nor deep-learning-based 2D methods consider the geometric information of 3D scenes, so the relationship between scene understanding and salient objects cannot be effectively established. This limits the performance of 2D saliency detection in challenging scenes. In this paper, we show for the first time that the saliency detection problem can be reformulated as two sub-problems: light field synthesis from a single view and light-field-driven saliency detection. We first introduce a high-quality light field synthesis network to produce reliable 4D light field information. We then propose a novel light-field-driven saliency detection network in which a direction-specific screening unit (DSU) is tailored to exploit the spatial correlation among multiple viewpoints. The whole pipeline can be trained end to end. Experimental results demonstrate that the proposed method outperforms state-of-the-art 2D, 3D, and 4D saliency detection methods. Our code is publicly available at https://github.com/OIPLab-DUT/ESCNet.
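
A minimal PyTorch sketch of screening features from multiple (here, synthesized) viewpoints: pool each view globally, predict a scalar gate per view, and take a softmax-weighted sum. This captures the idea of exploiting correlation among viewpoints only in spirit; the actual DSU is direction-specific and more elaborate.

import torch
import torch.nn as nn

class ViewScreening(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Linear(channels, 1)   # scalar importance per viewpoint

    def forward(self, view_feats):
        # view_feats: (B, V, C, H, W) features from V viewpoints.
        pooled = view_feats.mean(dim=(-2, -1))               # (B, V, C)
        w = torch.softmax(self.gate(pooled), dim=1)          # (B, V, 1)
        return (view_feats * w[..., None, None]).sum(dim=1)  # (B, C, H, W)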

8.
IEEE Trans Image Process ; 31: 2321-2336, 2022.
Article in English | MEDLINE | ID: mdl-35245195

ABSTRACT

In this work, we propose a novel depth-induced multi-scale recurrent attention network for RGB-D saliency detection, named DMRA. It achieves strong performance, especially in complex scenarios. Our network makes four main contributions, each experimentally demonstrated to have significant practical merit. First, we design an effective depth refinement block that uses residual connections to fully extract and fuse cross-modal complementary cues from the RGB and depth streams. Second, depth cues, which carry abundant spatial information, are combined with multi-scale contextual features to locate salient objects accurately. Third, a novel recurrent attention module, inspired by the Internal Generative Mechanism of the human brain, generates more accurate saliency results by comprehensively learning the internal semantic relations of the fused features and progressively optimizing local details with memory-oriented scene understanding. Finally, a cascaded hierarchical feature fusion strategy promotes efficient information interaction among multi-level contextual features and further improves the contextual representability of the model. In addition, we introduce a new real-life RGB-D saliency dataset containing a variety of complex scenarios, which has been widely used as a benchmark in recent RGB-D saliency detection research. Extensive experiments demonstrate that our method accurately identifies salient objects and achieves appealing performance against 18 state-of-the-art RGB-D saliency models on nine benchmark datasets.
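
A minimal PyTorch sketch in the spirit of the first contribution only: a residual cross-modal block in which transformed depth features refine the RGB features while a skip connection keeps the RGB stream intact. Layer choices are illustrative assumptions, not the published DMRA block.

import torch
import torch.nn as nn

class DepthRefine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_d = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                    nn.ReLU(inplace=True))
        self.conv_f = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, rgb, depth):
        # rgb, depth: (B, C, H, W) feature maps from the two streams.
        fused = self.conv_f(torch.cat([rgb, self.conv_d(depth)], dim=1))
        return rgb + fused   # residual connection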


Subject(s)
Benchmarking; Semantics; Humans
9.
Appl Opt ; 50(28): 5369-81, 2011 Oct 01.
Article in English | MEDLINE | ID: mdl-22016203

ABSTRACT

In this paper, we propose an effective approach for reconstructing visibility-enhanced three-dimensional (3D) objects imaged through the heavily scattering medium of dense fog in a conventional integral imaging system, through the combined use of intermediate view reconstruction (IVR), multipixel extraction (MPE), and histogram equalization (HE). In the proposed system, the limited number of elemental images (EIs) picked up from the 3D objects under dense fog is increased as much as required using the IVR technique. The enlarged set of EIs is transformed into subimages (SIs), whose resolution is also improved as much as possible with the MPE method. Subsequently, using the HE algorithm (sketched below), the histogram of the resolution-enhanced SIs is redistributed uniformly over the entire range of discrete pixel levels so that the subimage contrast is greatly enhanced. These equalized SIs are then converted back into newly modified EIs, from which a visibility-enhanced 3D object image can be reconstructed. Successful experimental results with a test object confirmed the feasibility of the proposed method.
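
The HE step itself is standard; a minimal sketch for an 8-bit grayscale subimage follows (assuming a non-constant image).

import numpy as np

def equalize_hist(img):
    # Map pixel levels through the normalized cumulative histogram so the
    # output occupies the full 0..255 range.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]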

10.
Article in English | MEDLINE | ID: mdl-32365027

ABSTRACT

In this work, we propose LFNet, a novel CNN-based light field fusion network for saliency detection that uses 4D light field data containing abundant spatial and contextual information. The proposed method can reliably locate and identify salient objects even in complex scenes. LFNet contains a light field refinement module (LFRM) and a light field integration module (LFIM), which together refine and integrate focusness, depth, and objectness cues from the light field image. The LFRM learns the residual between the light field and RGB images to refine features with useful light field cues; the LFIM then weights each refined light field feature and learns the spatial correlations between them to predict saliency maps. Our method takes full advantage of light field information and achieves excellent performance, especially in complex scenes, e.g., similar foreground and background, multiple or transparent objects, and low-contrast environments. Experiments show that our method outperforms state-of-the-art 2D, 3D, and 4D methods across three light field datasets.
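
A minimal PyTorch sketch of the residual idea: a small branch predicts a correction from the stacked focal slices and adds it to the RGB features. Shapes and layers are illustrative assumptions, not the published LFRM/LFIM.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LFResidual(nn.Module):
    def __init__(self, channels, n_slices):
        super().__init__()
        self.res = nn.Conv2d(n_slices, channels, kernel_size=3, padding=1)

    def forward(self, rgb_feat, focal_stack):
        # rgb_feat: (B, C, h, w); focal_stack: (B, K, H, W), one slice per channel.
        s = F.interpolate(focal_stack, size=rgb_feat.shape[-2:], mode="bilinear",
                          align_corners=False)
        return rgb_feat + self.res(s)   # refine RGB features with a light field residual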

11.
Appl Opt ; 48(34): H222-30, 2009 Dec 01.
Article in English | MEDLINE | ID: mdl-19956294

ABSTRACT

In this paper, we propose a novel approach for resolution-enhanced computational reconstruction of distant 3-D objects by employing a direct pixel mapping (DPM) method in the curving-effective integral imaging (CEII) system. With DPM, an elemental image array (EIA) picked up from a distant 3-D object is computationally transformed into a new EIA that looks as if it had been picked up from a near object. With this transformed EIA, a much better resolution-enhanced object image can be reconstructed in the CEII system. Experimental results confirmed the feasibility of the proposed method.
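
Pixel-mapping methods such as DPM build on the standard rearrangement between an elemental image array and sub-images; a minimal sketch of that standard mapping follows, assuming a square lens array with uniform pitch. The DPM transform itself goes beyond this.

import numpy as np

def eia_to_subimages(eia, n_lens, px):
    # eia: (n_lens*px, n_lens*px) array; sub-image (u, v) collects pixel (u, v)
    # from every elemental image.
    eia = np.asarray(eia)[:n_lens * px, :n_lens * px]
    blocks = eia.reshape(n_lens, px, n_lens, px)   # (lens_y, pix_y, lens_x, pix_x)
    return blocks.transpose(1, 3, 0, 2)            # (pix_y, pix_x, lens_y, lens_x)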

12.
Article in English | MEDLINE | ID: mdl-31613755

ABSTRACT

Incorrect saliency detection, such as false alarms and missed detections, may lead to severe consequences in various application areas, and effectively separating salient objects in complex scenes remains a major challenge. In this paper, we propose a new method for saliency detection on light fields to improve performance in challenging scenes. From the abundant light field cues we construct an object-guided depth map, which acts as an inducer that efficiently incorporates the relations among those cues. Furthermore, we enforce spatial consistency with an optimization model named Depth-induced Cellular Automata (DCA), in which the saliency value of each superpixel is updated by exploiting the intrinsic relevance of its similar regions (a simplified update step is sketched below). The DCA model also allows inaccurate saliency maps to reach a high level of accuracy. We evaluate our approach on one publicly available dataset. Experiments show that the proposed method is robust to a wide range of challenging scenes and outperforms state-of-the-art 2D, 3D, and 4D (light field) saliency detection approaches.
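
A minimal sketch of one synchronous cellular-automata update over superpixels, assuming an adjacency list and per-superpixel feature vectors: each saliency value moves toward a similarity-weighted average of its neighbours. The parameters and weighting are illustrative assumptions, not the published DCA model.

import numpy as np

def dca_step(saliency, feats, neighbors, sigma=0.1, lam=0.6):
    # saliency: (N,) values in [0, 1]; feats: (N, D); neighbors: list of index lists.
    new = saliency.copy()
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        w = np.exp(-np.linalg.norm(feats[nbrs] - feats[i], axis=1) / sigma)
        w /= w.sum()
        new[i] = lam * saliency[i] + (1.0 - lam) * (w * saliency[nbrs]).sum()
    return new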
