Results 1 - 15 of 15
1.
Nat Commun ; 15(1): 2907, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649369

ABSTRACT

Holographic displays can generate light fields by dynamically modulating the wavefront of a coherent beam of light using a spatial light modulator, promising rich virtual and augmented reality applications. However, the limited spatial resolution of existing dynamic spatial light modulators imposes a tight bound on the diffraction angle. As a result, modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light. The low étendue forces a sacrifice of either the field-of-view (FOV) or the display size. In this work, we lift this limitation by presenting neural étendue expanders. This new breed of optical elements, which is learned from a natural image dataset, enables higher diffraction angles for ultra-wide FOV while maintaining both a compact form factor and the fidelity of displayed contents to human viewers. With neural étendue expanders, we experimentally achieve 64 × étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically, with high-fidelity reconstruction quality (measured in PSNR) over 29 dB on retinal-resolution images.
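
As a back-of-the-envelope illustration of the étendue bound described above, the maximum diffraction angle of a pixelated modulator follows from its pixel pitch via the grating equation, and étendue is the display area times the diffracted solid angle. The sketch below uses assumed SLM parameters, not values from the paper.

import numpy as np

# Illustrative etendue estimate for an SLM-based holographic display.
# All numbers below are assumed example values, not taken from the paper.
wavelength = 532e-9          # green laser, metres
pixel_pitch = 8e-6           # SLM pixel pitch, metres
n_pixels = (1080, 1920)      # SLM resolution (rows, cols)

# Maximum diffraction half-angle of a pixelated modulator (grating equation).
theta_max = np.arcsin(wavelength / (2 * pixel_pitch))

# Etendue ~ display area x solid angle of the diffracted cone.
area = (n_pixels[0] * pixel_pitch) * (n_pixels[1] * pixel_pitch)
solid_angle = 2 * np.pi * (1 - np.cos(theta_max))   # cone of half-angle theta_max
etendue = area * solid_angle
print(f"half-angle: {np.degrees(theta_max):.2f} deg, etendue: {etendue:.3e} m^2 sr")

# A 64x expander (8x horizontally and 8x vertically, as an assumption) would
# scale the usable solid angle accordingly for the same display area.
print(f"expanded etendue (64x): {64 * etendue:.3e} m^2 sr")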

2.
ACS Photonics ; 11(3): 816-865, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38550347

ABSTRACT

Metasurfaces have recently risen to prominence in optical research, providing unique functionalities that can be used for imaging, beam forming, holography, polarimetry, and many more, while keeping device dimensions small. Despite the fact that a vast range of basic metasurface designs has already been thoroughly studied in the literature, the number of metasurface-related papers is still growing at a rapid pace, as metasurface research is now spreading to adjacent fields, including computational imaging, augmented and virtual reality, automotive, display, biosensing, nonlinear, quantum and topological optics, optical computing, and more. At the same time, the ability of metasurfaces to perform optical functions in much more compact optical systems has triggered strong and constantly growing interest from various industries that greatly benefit from the availability of miniaturized, highly functional, and efficient optical components that can be integrated in optoelectronic systems at low cost. This creates a truly unique opportunity for the field of metasurfaces to make both a scientific and an industrial impact. The goal of this Roadmap is to mark this "golden age" of metasurface research and define future directions to encourage scientists and engineers to drive research and development in the field of metasurfaces toward both scientific excellence and broad industrial adoption.

3.
Opt Express ; 31(1): 125-142, 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36606955

ABSTRACT

The simulation of rare edge cases such as adverse weather conditions is an enabler for deploying the next generation of autonomous drones and vehicles into conditions where human operation is error-prone. Such settings must therefore be simulated as accurately as possible while remaining computationally efficient, so that deep learning algorithms for scene understanding can be trained on the required large-scale datasets, which rules out extensive Monte Carlo simulation. One computationally expensive step is the simulation of light sources in scattering media, which is governed by the radiative transfer equation and which we approximate here with analytical solutions. Traditionally, a single scattering event is assumed for fog rendering, since it is the dominant effect in media with relatively low scattering. Under this assumption, we present an improved solution for the so-called air-light integral that can be evaluated quickly and robustly for an isotropic point source in a homogeneous medium. We further extend the solution to a cone-shaped source and implement it in a computer vision rendering pipeline that meets the computational constraints of deep learning applications. All solutions handle arbitrary azimuthally symmetric phase functions and were tested with the Henyey-Greenstein phase function and an advection-fog phase function computed from a particle distribution using Mie theory. The approximations are validated through extensive Monte Carlo simulations, and the solutions are used to augment good-weather images toward inclement conditions with a focus on visible light sources, providing additional data for such hard-to-collect settings.
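
The abstract does not reproduce the closed-form air-light solution; as a point of reference only, the sketch below evaluates the standard Henyey-Greenstein phase function and a brute-force numerical single-scattering integral along one camera ray for an isotropic point source in a homogeneous medium, i.e. the quantity such analytical solutions approximate. All parameter values are assumptions.

import numpy as np

def henyey_greenstein(cos_theta, g):
    """Standard Henyey-Greenstein phase function, normalised over the sphere."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

def airlight_numeric(cam, ray_dir, light, sigma_s, sigma_t, g, t_max, n=4000):
    """Brute-force single-scattering (air-light) integral along one camera ray.
    Isotropic point source, homogeneous medium; illustrative values only."""
    t = np.linspace(1e-4, t_max, n)
    x = cam + t[:, None] * ray_dir                 # sample points along the ray
    d = np.linalg.norm(light - x, axis=1)          # distance sample -> light source
    to_light = (light - x) / d[:, None]
    cos_theta = to_light @ ray_dir                 # scattering-angle cosine
    contrib = (np.exp(-sigma_t * t) * np.exp(-sigma_t * d) / d**2
               * sigma_s * henyey_greenstein(cos_theta, g))
    return np.sum(contrib) * (t[1] - t[0])         # simple Riemann sum

L = airlight_numeric(cam=np.zeros(3), ray_dir=np.array([0.0, 0.0, 1.0]),
                     light=np.array([0.5, 1.0, 5.0]),
                     sigma_s=0.05, sigma_t=0.06, g=0.9, t_max=50.0)
print(f"single-scattered radiance along the ray (arbitrary units): {L:.4e}")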

4.
Opt Express ; 31(26): 43864-43876, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38178472

ABSTRACT

Diffractive optical elements (DOEs) have widespread applications in optics, ranging from point spread function engineering to holographic display. Conventionally, DOE design relies on Cartesian simulation grids, resulting in square features in the final design. Unfortunately, Cartesian grids provide an anisotropic sampling of the plane, and the resulting square features can be challenging to fabricate with high fidelity using methods such as photolithography. To address these limitations, we explore the use of hexagonal grids as a new grid structure for DOE design and fabrication. In this study, we demonstrate wave propagation simulation using an efficient hexagonal coordinate system and compare simulation accuracy with the standard Cartesian sampling scheme. Additionally, we have implemented algorithms for the inverse DOE design. The resulting hexagonal DOEs, encoded with wavefront information for holograms, are fabricated and experimentally compared to their Cartesian counterparts. Our findings indicate that employing hexagonal grids enhances holographic imaging quality. The exploration of new grid structures holds significant potential for advancing optical technology across various domains, including imaging, microscopy, photography, lighting, and virtual reality.
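
For context, the sketch below shows only the conventional Cartesian baseline such work builds on: standard angular-spectrum wave propagation on a square grid. The paper's hexagonal-coordinate simulation is not reproduced here, and the parameter values are illustrative assumptions.

import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Standard angular-spectrum propagation on a Cartesian grid (the
    conventional baseline; the paper's hexagonal variant is not shown)."""
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    # keep propagating components only; evanescent waves are suppressed
    H = np.where(kz_sq > 0, np.exp(1j * np.sqrt(np.maximum(kz_sq, 0.0)) * z), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# toy example: propagate a square aperture by 5 mm at 532 nm with 4 um sampling
aperture = np.zeros((512, 512), dtype=complex)
aperture[192:320, 192:320] = 1.0
out = angular_spectrum_propagate(aperture, wavelength=532e-9, pitch=4e-6, z=5e-3)
print("peak intensity after propagation:", np.abs(out).max() ** 2)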

5.
Sci Rep ; 12(1): 11905, 2022 Jul 13.
Article in English | MEDLINE | ID: mdl-35831474

ABSTRACT

Hyperspectral imaging enables many versatile applications thanks to its ability to capture abundant spatial and spectral information, which is crucial for identifying substances. However, the devices for acquiring hyperspectral images are typically expensive and complicated, hindering their adoption in consumer applications such as daily food inspection and point-of-care medical screening. Recently, many computational spectral imaging methods have been proposed that reconstruct hyperspectral information directly from widely available RGB images. These reconstruction methods avoid burdensome spectral camera hardware while maintaining high spectral resolution and imaging performance. We present a thorough investigation of more than 25 state-of-the-art spectral reconstruction methods, categorized as prior-based and data-driven methods. Simulations on open-source datasets show that prior-based methods are more suitable when data are scarce, while data-driven methods can unleash the full potential of deep learning when large datasets are available. We identify the current challenges faced by these methods (e.g., loss function design, spectral accuracy, data generalization) and summarize several trends for future work. With the rapid expansion of datasets and the advent of more advanced neural networks, learnable methods with fine feature-representation abilities are very promising. This comprehensive review can serve as a fruitful reference for peer researchers, paving the way for the development of computational hyperspectral imaging.
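
As a minimal illustration of the prior-based category, the sketch below fits a ridge-regularized linear mapping from RGB to spectra on paired training data; the surveyed methods are far more sophisticated, and the camera response and spectra here are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 31 spectral bands (e.g. 400-700 nm) and a made-up 3x31 camera response.
n_bands, n_train = 31, 5000
response = np.abs(rng.normal(size=(3, n_bands)))              # hypothetical RGB sensitivities
spectra_train = np.abs(rng.normal(size=(n_train, n_bands)))   # stand-in training spectra
rgb_train = spectra_train @ response.T

# Prior-based baseline: ridge regression from RGB to spectrum (closed form).
lam = 1e-3
W = np.linalg.solve(rgb_train.T @ rgb_train + lam * np.eye(3),
                    rgb_train.T @ spectra_train)              # 3 x 31 mapping

rgb_query = spectra_train[0] @ response.T
spectrum_est = rgb_query @ W
print("relative reconstruction error:",
      np.linalg.norm(spectrum_est - spectra_train[0]) / np.linalg.norm(spectra_train[0]))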


Subject(s)
Hyperspectral Imaging; Neural Networks, Computer; Fruit
6.
Nat Commun ; 12(1): 6493, 2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34845201

ABSTRACT

Nano-optic imagers that modulate light at sub-wavelength scales could enable new applications in diverse domains ranging from robotics to medicine. Although metasurface optics offer a path to such ultra-small imagers, existing methods have achieved image quality far worse than bulky refractive alternatives, fundamentally limited by aberrations at large apertures and low f-numbers. In this work, we close this performance gap by introducing a neural nano-optics imager. We devise a fully differentiable learning framework that learns a metasurface physical structure in conjunction with a neural feature-based image reconstruction algorithm. Experimentally validating the proposed method, we achieve an order of magnitude lower reconstruction error than existing approaches. As such, we present a high-quality nano-optic imager that delivers the widest field-of-view demonstrated for full-color metasurface operation together with the largest demonstrated aperture of 0.5 mm at an f-number of 2.
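
The sketch below is only a toy analogue of end-to-end optics/reconstruction co-design, written in PyTorch: a learnable PSF stands in for the differentiable metasurface model and a tiny CNN stands in for the feature-based reconstruction. It is not the paper's framework, and all shapes and hyperparameters are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy end-to-end sketch: jointly optimise a learnable PSF (stand-in for the
# metasurface forward model) and a small reconstruction CNN. Illustrative only.
psf_logits = nn.Parameter(torch.zeros(1, 1, 11, 11))
recon_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam([psf_logits, *recon_net.parameters()], lr=1e-3)

for step in range(200):
    target = torch.rand(8, 1, 32, 32)                 # stand-in training images
    psf = torch.softmax(psf_logits.view(1, -1), dim=-1).view(1, 1, 11, 11)
    measurement = F.conv2d(target, psf, padding=5)    # differentiable image formation
    measurement = measurement + 0.01 * torch.randn_like(measurement)  # sensor noise
    loss = F.mse_loss(recon_net(measurement), target)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", loss.item())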

7.
Sci Rep ; 8(1): 17726, 2018 Dec 07.
Article in English | MEDLINE | ID: mdl-30531961

ABSTRACT

Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.
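
For background only, the sketch below simulates the classic first-photon pileup distortion and applies the well-known Coates-style histogram correction; the paper's probabilistic image formation model and priors go well beyond this, and all rates here are assumed example values.

import numpy as np

rng = np.random.default_rng(1)
n_bins, n_cycles = 200, 50_000

# Assumed ground-truth mean photon counts per bin and cycle (a return peak on background).
r = 0.08 * np.exp(-0.5 * ((np.arange(n_bins) - 80) / 3.0) ** 2) + 0.005

# Classic pileup model: per cycle, only the FIRST detected photon is recorded.
hist = np.zeros(n_bins)
for _ in range(n_cycles):
    detections = rng.random(n_bins) < (1 - np.exp(-r))   # photon detected in each bin?
    first = np.argmax(detections)
    if detections[first]:
        hist[first] += 1

# Coates-style correction: invert the pileup distortion bin by bin.
remaining = n_cycles - np.concatenate(([0.0], np.cumsum(hist)[:-1]))
r_hat = -np.log(1 - hist / np.maximum(remaining, 1.0))

print("true peak bin:", np.argmax(r), " estimated peak bin:", np.argmax(r_hat))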

8.
IEEE Comput Graph Appl ; 38(6): 106-117, 2018.
Article in English | MEDLINE | ID: mdl-30295616

ABSTRACT

While traditional imaging systems directly measure scene properties, computational imaging systems add computation to the measurement process, allowing such systems to extract nontrivially encoded scene features. This work demonstrates that exploiting the structure in this process allows us to recover information that is conventionally considered to be "lost." Relying on temporally and spatially convolutional structure, we extract a novel image modality that was essentially "invisible" before: a new temporal dimension of light propagation, obtained with consumer depth cameras. Using conventional time-of-flight cameras, a few seconds of capture and computation allow us to recover information that before could only be acquired in hours of capture time with specialized instrumentation at orders of magnitude higher cost. The novel type of image we capture allows us to make the first steps toward the full inversion of light transport. Specifically, we demonstrate that non-line-of-sight imaging and imaging in scattering media can be made feasible with the temporally resolved light transport acquired using time-of-flight depth cameras.
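
As a rough illustration of the measurement model behind such approaches, the sketch below correlates a synthetic per-pixel transient with phase-shifted sinusoidal references at several modulation frequencies; the transient shape, frequencies, and phases are assumptions, not the paper's calibrated camera model or its inversion.

import numpy as np

# Toy forward model for a correlation (time-of-flight) pixel: each raw measurement
# is the pixel's transient response correlated with a phase-shifted sinusoid.
t = np.linspace(0, 100e-9, 2000)                                   # time axis, 0-100 ns
transient = (np.exp(-0.5 * ((t - 20e-9) / 0.5e-9) ** 2)            # direct return
             + 0.3 * np.exp(-(t - 35e-9).clip(0) / 8e-9) * (t > 35e-9))  # scattered tail

freqs = np.array([20e6, 50e6, 80e6, 120e6])                        # modulation frequencies
phases = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])

dt = t[1] - t[0]
measurements = np.array([np.sum(transient * np.cos(2 * np.pi * f * t + p)) * dt
                         for f in freqs for p in phases])
print("correlation measurements (rows: frequency, columns: phase):")
print(measurements.reshape(len(freqs), len(phases)))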

9.
Article in English | MEDLINE | ID: mdl-29993740

ABSTRACT

Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of the input images), which makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single discriminative training pass and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.
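
The sketch below shows the generic structure such an approach relies on: a proximal-gradient loop in which the prior enters only through its proximal operator (here plain soft-thresholding as a stand-in for the learned discriminative prior), so that swapping the likelihood term does not require retraining. It illustrates the general idea only, not the paper's method.

import numpy as np

def soft_threshold(x, tau):
    """Stand-in proximal operator; the paper learns this component instead."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_denoise(y, lam=0.1, step=1.0, n_iter=100):
    """Proximal gradient for min_x 0.5*||x - y||^2 + lam*||x||_1.
    Switching to another likelihood (e.g. deblurring) only changes the gradient
    line below; the prior/prox is reused, mirroring the transfer idea."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = x - y                        # gradient of the (denoising) data term
        x = soft_threshold(x - step * grad, step * lam)
    return x

y = np.array([0.05, -0.8, 1.2, 0.02, -0.03, 2.0])
print(proximal_gradient_denoise(y))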

10.
IEEE Trans Image Process ; 27(4): 1611-1625, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29324415

ABSTRACT

Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even those including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large dataset of experimental captures and on simulated benchmark results, which demonstrate that this work achieves unprecedented reconstruction quality.
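
As a small illustration of the linear forward model that such reconstructions invert, the sketch below builds a toy 2x2 RGB+NIR mosaic sampling operator and its adjoint; the mosaic pattern is an assumption, spectral cross talk is ignored, and the learned convolutional sparse prior itself is not reproduced.

import numpy as np

def make_rgbn_masks(h, w):
    """Toy 2x2 RGB+NIR mosaic (R, G, B, NIR) - an assumed pattern, not a real CFA."""
    masks = np.zeros((4, h, w))
    masks[0, 0::2, 0::2] = 1   # R
    masks[1, 0::2, 1::2] = 1   # G
    masks[2, 1::2, 0::2] = 1   # B
    masks[3, 1::2, 1::2] = 1   # NIR
    return masks

def forward(image, masks):
    """Mosaicked single-sensor measurement: each pixel records one channel."""
    return (masks * image).sum(axis=0)

def adjoint(measurement, masks):
    """Adjoint of the sampling operator (used inside iterative reconstruction)."""
    return masks * measurement[None, :, :]

h = w = 8
masks = make_rgbn_masks(h, w)
image = np.random.rand(4, h, w)        # stand-in R, G, B, NIR planes
m = forward(image, masks)
y = np.random.rand(h, w)
# sanity check of the operator pair: <A x, y> == <x, A^T y>
print(np.allclose((m * y).sum(), (image * adjoint(y, masks)).sum()))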

11.
Sci Rep ; 6: 33543, 2016 Sep 16.
Article in English | MEDLINE | ID: mdl-27633055

ABSTRACT

Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters, such as focus or zoom, can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.
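
For intuition only, the sketch below reproduces the classic Alvarez-lens identity, in which a lateral shift between two complementary cubic phase plates yields a tunable quadratic (focusing) term; the paper's numerically optimized encoded masks generalize this kind of behaviour and are not shown here. All constants are arbitrary.

import numpy as np

# Classic Alvarez-lens illustration (an analytic textbook pair, assumed for
# illustration; the paper's masks are numerically optimised encoded designs).
A = 50.0                                    # cubic strength, arbitrary units
x = np.linspace(-1, 1, 401)
X, Y = np.meshgrid(x, x)

def phi(X, Y):
    return A * (X**3 / 3 + X * Y**2)        # cubic phase profile

def combined_phase(delta):
    """Plate (+phi) shifted by +delta plus plate (-phi) shifted by -delta."""
    return phi(X - delta, Y) - phi(X + delta, Y)

for delta in (0.02, 0.05, 0.1):
    total = combined_phase(delta)
    quadratic = -2 * A * delta * (X**2 + Y**2)   # expected lens-like term
    residual = total - quadratic                 # should be a constant offset only
    print(f"delta={delta}: residual is constant -> {np.allclose(residual, residual[0, 0])}, "
          f"peak-to-valley = {np.ptp(residual):.2e}")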

12.
Opt Express ; 23(24): 31393-31407, 2015 Nov 30.
Article in English | MEDLINE | ID: mdl-26698765

ABSTRACT

Diffractive optical elements (DOEs) show great promise for imaging optics that are thinner and more lightweight than conventional refractive lenses while preserving their light efficiency. Unfortunately, severe spectral dispersion currently limits the use of DOEs in consumer-level lens design. In this article, we jointly design lightweight diffractive-refractive optics and post-processing algorithms to enable imaging under white-light illumination. Using the Fresnel lens as a general platform, we show three phase-plate designs: a super-thin stacked-plate design, a diffractive-refractive hybrid lens, and a phase coded-aperture lens. Combined with a cross-channel deconvolution algorithm, these designs correct both spherical and chromatic aberrations. Experimental results indicate that, with our computational imaging approach, diffractive-refractive optics are a viable alternative for building light-efficient, thin optics for white-light imaging.
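
For reference, the sketch below is only the standard per-channel Wiener deconvolution baseline with an assumed Gaussian blur; the paper's cross-channel prior, which additionally couples the color channels, is deliberately not implemented here.

import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Per-channel Wiener deconvolution with circular boundary handling.
    (The paper's cross-channel coupling term is omitted in this baseline.)"""
    H = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# toy example with an assumed Gaussian blur (stand-in for a measured per-channel PSF)
u = np.linspace(-3, 3, 15)
g = np.exp(-0.5 * (u[:, None] ** 2 + u[None, :] ** 2))
kernel = g / g.sum()

sharp = np.zeros((128, 128)); sharp[32:96, 32:96] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deconvolve(blurred, kernel)
print("MSE blurred vs sharp:", np.mean((blurred - sharp) ** 2))
print("MSE restored vs sharp:", np.mean((restored - sharp) ** 2))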

13.
IEEE Trans Image Process ; 24(10): 3071-3085, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25974941

ABSTRACT

Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can therefore only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that, historically, each new prior has required the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluations of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with peak signal-to-noise ratio (PSNR) values that match or exceed those obtained by much more complex state-of-the-art blind motion deblurring algorithms.
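
The sketch below illustrates the derivative-free idea in a deliberately simplified setting: the latent image is assumed known and only the blur kernel is searched by random local perturbations, with a plug-in prior that is only evaluated, never differentiated. The paper's stochastic solver for full blind deconvolution is considerably more refined.

import numpy as np

rng = np.random.default_rng(0)

def conv_same(img, k):
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))

def objective(kernel, blurred, latent, lam=1e-3):
    """Data term plus a simple, swappable prior on the kernel - only ever
    evaluated, never differentiated (the point of a derivative-free solver)."""
    k = np.maximum(kernel, 0.0)
    k = k / (k.sum() + 1e-12)
    return np.sum((conv_same(latent, k) - blurred) ** 2) + lam * np.sum(np.abs(k))

# toy setup: known latent image, unknown 5x5 blur kernel to recover
latent = np.zeros((64, 64)); latent[16:48, 16:48] = 1.0
true_k = np.zeros((5, 5)); true_k[2, 1:4] = [0.2, 0.6, 0.2]
blurred = conv_same(latent, true_k)

k = np.full((5, 5), 1.0 / 25)                         # start from a flat kernel
best = objective(k, blurred, latent)
for _ in range(3000):
    candidate = k + 0.02 * rng.normal(size=k.shape)   # random local perturbation
    val = objective(candidate, blurred, latent)
    if val < best:                                    # greedy accept
        k, best = candidate, val
print("final objective value:", best)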

14.
Opt Express ; 22(21): 26338-26350, 2014 Oct 20.
Article in English | MEDLINE | ID: mdl-25401666

ABSTRACT

Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and by the derivation of a new physically motivated model for transient images with drastically improved sparsity.
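
As a much-simplified stand-in for convolutional sparse coding, the sketch below runs plain ISTA to sparsely code a single synthetic transient against a dictionary of shifted pulses; the dictionary, signal, and parameters are assumptions, and the paper's physically motivated transient model is not reproduced.

import numpy as np

# ISTA sparse coding of one synthetic transient against shifted pulse atoms.
n_t = n_atoms = 400
t = np.arange(n_t)
D = np.stack([np.exp(-0.5 * ((t - s) / 2.0) ** 2) for s in range(n_atoms)], axis=1)
D /= np.linalg.norm(D, axis=0)

# ground truth: two sparse light paths (direct return + delayed bounce)
a_true = np.zeros(n_atoms); a_true[60] = 1.0; a_true[150] = 0.35
y = D @ a_true + 0.01 * np.random.default_rng(2).normal(size=n_t)

lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
a = np.zeros(n_atoms)
for _ in range(500):
    a = a - step * (D.T @ (D @ a - y))          # gradient step on the data term
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)   # soft-thresholding

print("recovered support (true peaks at 60 and 150):", np.nonzero(a > 0.05)[0])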


Subject(s)
Algorithms; Computer Simulation; Light; Models, Theoretical; Nephelometry and Turbidimetry/instrumentation; Remote Sensing Technology/instrumentation; Scattering, Radiation; Equipment Design
15.
Opt Express ; 22(12): 14981-14992, 2014 Jun 16.
Article in English | MEDLINE | ID: mdl-24977592

ABSTRACT

Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previous research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high-dynamic-range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high-resolution image.
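
The sketch below shows a simple two-layer multiplicative splitting (coarse backlight times high-resolution front panel) purely to illustrate the idea of computing pixel states from a target image; the paper's multi-mode splitting algorithm and hardware are different and also support 3D, HDR, and super-resolution modes.

import numpy as np

def block_max(img, f):
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).max(axis=(1, 3))

def two_layer_split(target, f=8, eps=1e-6):
    """Simple two-layer multiplicative splitting (illustrative, not the paper's
    algorithm): a coarse backlight layer and a high-resolution front panel whose
    pointwise product reproduces the target image."""
    backlight_lr = block_max(target, f)                          # coarse layer
    backlight = np.kron(backlight_lr, np.ones((f, f)))           # nearest-neighbour upsample
    panel = np.clip(target / np.maximum(backlight, eps), 0, 1)   # per-pixel compensation
    return backlight_lr, panel, backlight * panel

target = np.random.rand(64, 64)                                  # stand-in target image
backlight_lr, panel, reproduced = two_layer_split(target)
print("max absolute reproduction error:", np.abs(reproduced - target).max())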
