ABSTRACT
Nanostructured materials have recently emerged as a promising approach to designing material appearance. Research has mainly focused on creating structural colours by wave interference, leaving aside other important aspects that constitute the visual appearance of an object, such as the relative weight of specular and diffuse reflectance, the macroscopic shape of the object, and the illumination and viewing conditions. Here we report the potential of disordered optical metasurfaces for harnessing visual appearance. We develop a multiscale modelling platform for the predictive rendering of macroscopic objects covered by metasurfaces in realistic settings, and show how nanoscale resonances and mesoscale interferences can be used to shape reflected light spectrally and angularly, and thus create unusual visual effects at the macroscale. We validate this property with realistic synthetic images of macroscopic objects and with centimetre-scale samples observable with the naked eye. This framework opens new perspectives in many branches of fine and applied visual arts.
ABSTRACT
A focused plenoptic camera can record and separate the spatial and directional information of incoming light. Combined with an appropriate algorithm, a 3D scene can be reconstructed from a single acquisition over a depth range called the plenoptic depth of field. In this Letter, we study contrast variations with depth as a way to assess the plenoptic depth of field. We take into account the impact of diffraction, defocus, and magnification on the resulting contrast. We measure the contrast directly on both simulated and acquired images, and demonstrate the importance of diffraction and magnification in the final contrast. Unlike in classical optics, the contrast maximum is not centered on the main object plane but on a shifted position, with a fast, asymmetric decrease of contrast.
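The abstract does not specify which contrast measure is used; a common choice for periodic test patterns is the Michelson contrast, (Imax − Imin)/(Imax + Imin). A minimal Python sketch (the function name and the sinusoidal test pattern are illustrative, not taken from the Letter):

```python
import numpy as np

def michelson_contrast(image):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of an intensity image."""
    i_max = float(image.max())
    i_min = float(image.min())
    if i_max + i_min == 0.0:
        return 0.0
    return (i_max - i_min) / (i_max + i_min)

# Example: a sinusoidal pattern whose modulation shrinks with defocus.
x = np.linspace(0.0, 2.0 * np.pi, 256)
sharp = 0.5 + 0.5 * np.sin(8.0 * x)      # in-focus pattern, full modulation
blurred = 0.5 + 0.1 * np.sin(8.0 * x)    # defocused pattern, reduced modulation

print(michelson_contrast(sharp))    # ~1.0
print(michelson_contrast(blurred))  # ~0.2
```

Tracking this value as a function of object depth is one way to trace the contrast curves the Letter describes.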
ABSTRACT
We present a transparent autostereoscopic display consisting of laser picoprojectors, a wedge light guide, and a holographic optical element. The holographic optical element is optically recorded; we present the recording setup, our prototype, and the results. Such a display can superimpose 3D data on the real world without any wearable device.
ABSTRACT
We propose an automatic framework to recover the illumination of indoor scenes from a single RGB-D image. Unlike previous work, our method can recover spatially varying illumination without any light-capturing devices or HDR information, and the recovered illumination produces realistic rendering results. To model the geometry of the visible and invisible parts of the scene corresponding to the input RGB-D image, we assume that all objects shown in the image lie in a box with six faces and build a plane-based geometry model from the input depth map. We then present a confidence-scoring strategy to separate the light sources from the highlight areas. The positions of light sources both inside and outside the camera's view are calculated from the classification result and the recovered geometry model. Finally, an iterative procedure is proposed to calculate the colors of the light sources and of the materials in the scene. In addition, a data-driven method is used to constrain the light-source intensities. Using the estimated light sources and the geometry model, environment maps at different points in the scene are generated that model the spatial variation of illumination. Experimental results demonstrate the validity and flexibility of our approach.
ABSTRACT
The ability to use real-world light sources (luminaires) when synthesizing images greatly contributes to their physical realism. Among existing models, those based on light fields are attractive for their ability to represent the near field faithfully and for the possibility of acquiring them directly. In this paper, we introduce a dynamic sampling strategy for complex light-field luminaires, together with the corresponding unbiased estimator. The sampling strategy is adapted, for each 3D scene position and each frame, by dynamically restricting the sampling domain and by balancing the number of samples between the different components of the representation. This is achieved efficiently by simple position-dependent affine transformations and restrictions of Cumulative Distribution Functions, which ensure that every generated sample conveys energy and contributes to the final result. Our approach therefore requires only a small number of samples to reach almost converged results. We demonstrate its efficiency on modern hardware with a GPU-based implementation. Combined with a fast shadow algorithm, our solution achieves interactive frame rates for direct lighting from large measured luminaires.
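The affine restriction of a CDF can be sketched in one dimension: remapping a uniform random number into [CDF(lo), CDF(hi)] guarantees that every sample falls inside the restricted domain, and rescaling the pdf keeps the estimator unbiased. A minimal sketch with a tabulated CDF (all names are illustrative; the paper operates on light-field luminaire data, not 1-D tables):

```python
import numpy as np

def sample_restricted(cdf_x, cdf_y, lo, hi, u):
    """Draw a sample from a tabulated CDF, restricted to [lo, hi].

    cdf_x, cdf_y : tabulated CDF (cdf_y increases monotonically from 0 to 1)
    lo, hi       : restriction bounds in the sample domain
    u            : uniform random number in [0, 1)

    The affine remap of u into [CDF(lo), CDF(hi)] guarantees that every
    generated sample lands inside [lo, hi], i.e. conveys energy; the pdf
    rescaling factor keeps the Monte-Carlo estimator unbiased.
    """
    c_lo = np.interp(lo, cdf_x, cdf_y)
    c_hi = np.interp(hi, cdf_x, cdf_y)
    u_restricted = c_lo + u * (c_hi - c_lo)     # affine transformation of u
    x = np.interp(u_restricted, cdf_y, cdf_x)   # inverse-CDF lookup
    pdf_scale = 1.0 / (c_hi - c_lo)             # rescale pdf for unbiasedness
    return x, pdf_scale

# Uniform distribution on [0, 1]; restrict sampling to [0.25, 0.75].
x, w = sample_restricted([0.0, 1.0], [0.0, 1.0], 0.25, 0.75, u=0.5)
print(x, w)  # 0.5 2.0
```

No sample generated this way can fall outside [lo, hi], which is the property the paper exploits to ensure every sample contributes energy.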
ABSTRACT
In augmented reality, one of the key tasks in achieving convincing visual consistency between virtual objects and video scenes is maintaining coherent illumination along the whole sequence. Since outdoor illumination depends largely on the weather, the lighting conditions may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations in videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight from a sparse set of planar feature points extracted from each frame. To address inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results obtained with our estimates are visually coherent along the video sequences.
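At its simplest, the per-frame estimation can be viewed as a small linear inverse problem: each planar feature point's observed intensity is modeled as a combination of a sunlight term and a skylight term, and the two relative intensities are recovered by least squares. The sketch below is a hypothetical simplification (the paper's optimization also exploits feature reliability and temporal coherence):

```python
import numpy as np

def estimate_sun_sky(observed, sun_shading, sky_shading):
    """Least-squares estimate of relative sunlight/skylight intensities.

    observed    : intensities at sparse planar feature points
    sun_shading : per-point geometric sunlight term (e.g. clamped n . l)
    sky_shading : per-point skylight/ambient term

    Models observed ~ k_sun * sun_shading + k_sky * sky_shading.
    Hypothetical simplification of the paper's optimization.
    """
    A = np.column_stack([sun_shading, sky_shading])
    (k_sun, k_sky), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return k_sun, k_sky

# Synthetic check: recover known intensities from noiseless observations.
sun = np.array([0.9, 0.1, 0.5, 0.7])
sky = np.array([1.0, 1.0, 1.0, 1.0])
obs = 0.8 * sun + 0.3 * sky
print(estimate_sun_sky(obs, sun, sky))  # ~ (0.8, 0.3)
```

Solving this small system once per frame, with the estimates smoothed over time, matches the spirit of the online tracking described above.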
Subjects
User-Computer Interface, Algorithms, Computer Graphics, Humans, Lighting, Motion (Physics), Video Recording
ABSTRACT
Over the last two decades, much effort has been devoted to accurately measuring Bidirectional Reflectance Distribution Functions (BRDFs) of real-world materials and to using the resulting data efficiently for rendering. Because of their large size, measured BRDFs are difficult to use directly in real-time applications, and fitting the most sophisticated analytical BRDF models remains a complex task. In this paper, we introduce Rational BRDF, a general-purpose and efficient representation for arbitrary BRDFs based on Rational Functions (RFs). Using an adapted parametrization, we demonstrate how Rational BRDFs offer 1) a compact and efficient representation using low-degree RFs, 2) an accurate fit of measured materials with guaranteed control of the residual error, and 3) efficient importance sampling, obtained by applying the same fitting process to the inverse of the Cumulative Distribution Function (CDF) generated from the BRDF for use in Monte Carlo rendering.
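To illustrate the idea of rational fitting, the sketch below fits a 1-D function f(x) ≈ p(x)/q(x) using the standard linearized least-squares system p(x) − f(x)·q(x) ≈ 0, with the constant coefficient of q fixed to 1. This is only an illustrative stand-in for the paper's fitting procedure, which additionally guarantees control of the residual error; all names are hypothetical:

```python
import numpy as np

def fit_rational(x, f, deg_p=2, deg_q=2):
    """Fit f(x) ~ p(x)/q(x), with q's constant coefficient fixed to 1,
    via the linearized system p(x) - f(x) * (q(x) - 1) = f(x)."""
    # Columns for p's coefficients, then for q's (constant term excluded).
    A_p = np.vander(x, deg_p + 1, increasing=True)
    A_q = -f[:, None] * np.vander(x, deg_q + 1, increasing=True)[:, 1:]
    A = np.hstack([A_p, A_q])
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    p = coeffs[: deg_p + 1]
    q = np.concatenate([[1.0], coeffs[deg_p + 1 :]])
    return p, q  # coefficients in increasing-degree order

def eval_rational(p, q, x):
    # np.polyval expects highest-degree coefficients first, hence [::-1].
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Fit a curve that is itself rational and check the residual stays small.
x = np.linspace(0.0, 1.0, 50)
f = 1.0 / (1.0 + 4.0 * x)
p, q = fit_rational(x, f, deg_p=1, deg_q=1)
print(np.max(np.abs(eval_rational(p, q, x) - f)))  # near machine precision
```

The same mechanism, applied to the inverse CDF of a BRDF, is what makes importance sampling as cheap as evaluating one rational function per sample.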
ABSTRACT
Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape-depiction functionality. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface features, enhancing concavities and convexities. The first advantage of this approach is that it produces satisfying results with any kind of material, under both direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and interreflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as ambient occlusion, prefiltered environment maps, or lit spheres. Finally, our approach runs in real time on modern graphics hardware, making it suitable for any interactive 3D visualization.
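The core operation, scaling reflected intensity by a curvature-dependent factor, can be sketched as follows. The particular scaling function below (an exponential of a tanh-mapped curvature) is an illustrative choice, not the paper's exact formulation, which also takes material characteristics into account:

```python
import numpy as np

def radiance_scaling(intensity, curvature, alpha=1.0):
    """Scale reflected intensities by a curvature-dependent factor.

    Illustrative mapping (not the paper's exact function): convex regions
    (curvature > 0) are brightened, concave regions (curvature < 0) are
    darkened, and flat regions (curvature == 0) are left untouched.
    alpha controls the enhancement strength.
    """
    scale = np.exp(alpha * np.tanh(curvature))
    return intensity * scale

shading = np.array([0.5, 0.5, 0.5])    # identical shade at three points
curv = np.array([-2.0, 0.0, 2.0])      # concavity, flat region, convexity
print(radiance_scaling(shading, curv)) # darker, unchanged, brighter
```

Because the scaling multiplies whatever radiance the shading model produced, the same operator applies unchanged to diffuse terms, highlights, or precomputed radiance data, which is the source of the technique's versatility.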