Results 1 - 20 of 27
1.
Opt Express ; 32(4): 5174-5190, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439250

ABSTRACT

Improving images captured under low-light conditions has become an important topic in computational color imaging, as it has a wide range of applications. Most current methods are either based on handcrafted features or on end-to-end training of deep neural networks that mostly focus on minimizing some distortion metric, such as PSNR or SSIM, on a set of training images. However, minimizing distortion metrics does not mean that the results are optimal in terms of perception (i.e., perceptual quality). As an example, the perception-distortion trade-off states that, close to the optimal results, improving distortion worsens perception. This means that current low-light image enhancement methods, which focus on distortion minimization, cannot be optimal in the sense of obtaining a good image in terms of perceptual error. In this paper, we propose a post-processing approach in which, given the original low-light image and the result of a specific method, we obtain a result that resembles that of the original method as closely as possible while, at the same time, improving the perception of the final image. In more detail, our method follows the hypothesis that, in order to minimally modify the perception of an input image, any modification should be a combination of a local change in the shading across the scene and a global change in illumination color. We demonstrate the ability of our method quantitatively using perceptual blind image metrics such as BRISQUE, NIQE, or UNIQUE, and through user preference tests.

2.
Sensors (Basel) ; 23(8)2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37112497

ABSTRACT

Recently, many deep neural networks (DNN) have been proposed to solve the spectral reconstruction (SR) problem: recovering spectra from RGB measurements. Most DNNs seek to learn the relationship between an RGB viewed in a given spatial context and its corresponding spectra. Significantly, it is argued that the same RGB can map to different spectra depending on the context with respect to which it is seen and, more generally, that accounting for spatial context leads to improved SR. However, as it stands, DNN performance is only slightly better than the much simpler pixel-based methods where spatial context is not used. In this paper, we present a new pixel-based algorithm called A++ (an extension of the A+ sparse coding algorithm). In A+, RGBs are clustered, and within each cluster, a designated linear SR map is trained to recover spectra. In A++, we cluster the spectra instead in an attempt to ensure neighboring spectra (i.e., spectra in the same cluster) are recovered by the same SR map. A polynomial regression framework is developed to estimate the spectral neighborhoods given only the RGB values in testing, which in turn determines which mapping should be used to map each testing RGB to its reconstructed spectrum. Compared to the leading DNNs, not only does A++ deliver the best results, it is parameterized by orders of magnitude fewer parameters and has a significantly faster implementation. Moreover, in contradistinction to some DNN methods, A++ uses pixel-based processing, which is robust to image manipulations that alter the spatial context (e.g., blurring and rotations). Our demonstration on the scene relighting application also shows that, while SR methods, in general, provide more accurate relighting results compared to the traditional diagonal matrix correction, A++ provides superior color accuracy and robustness compared to the top DNN methods.
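The cluster-then-map idea behind A+/A++ can be sketched as follows. This is a hypothetical, much-simplified illustration (nearest-centroid assignment stands in for the paper's polynomial-regression cluster estimation, and the centroids and maps are toy values, not trained ones):

```python
# Minimal sketch of per-cluster spectral reconstruction in the spirit of
# A+/A++: assign an RGB to a cluster, then apply that cluster's linear map.
# Centroids and maps below are illustrative toy values, not trained ones.

def nearest_cluster(rgb, centroids):
    d2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(centroids)), key=lambda i: d2(rgb, centroids[i]))

def reconstruct(rgb, centroids, maps):
    # maps[i] is an (n_wavelengths x 3) matrix for cluster i
    M = maps[nearest_cluster(rgb, centroids)]
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

centroids = [[0.1, 0.1, 0.1], [0.8, 0.8, 0.8]]          # two toy clusters
maps = [[[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]],   # map for cluster 0
        [[2, 0, 0], [0, 2, 0], [0, 0, 2], [2, 2, 2]]]   # map for cluster 1
print(reconstruct([0.9, 0.9, 0.9], centroids, maps))
```

Because each test RGB only incurs a cluster lookup plus one small matrix-vector product, this kind of pixel-based method is far cheaper than a DNN forward pass, which is consistent with the speed advantage claimed in the abstract.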

3.
Opt Express ; 30(12): 22006-22024, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-36224909

ABSTRACT

In previous work, it was shown that a camera can theoretically be made more colorimetric (its RGBs become more linearly related to XYZ tristimuli) by placing a specially designed color filter in the optical path. While the prior art demonstrated the principle, the optimal color-correction filters were not actually manufactured. In this paper, we provide a novel way of creating the color filtering effect without making a physical filter: we modulate the spectrum of the light source by using a spectrally tunable lighting system to recast the prefiltering effect from a lighting perspective. According to our method, if we wish to measure color under a D65 light, we relight the scene with a modulated D65 spectrum, where the light modulation mimics the effect of color prefiltering in the prior art. We call this optimally modulated light the matched illumination. In experiments using synthetic and real measurements, we show that color measurement errors can be reduced by about 50% or more on simulated data and 25% or more on real images when the matched illumination is used.

4.
Sensors (Basel) ; 21(16)2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34451030

ABSTRACT

Spectral reconstruction (SR) algorithms attempt to recover hyperspectral information from RGB camera responses. Recently, the most common metric for evaluating the performance of SR algorithms is the Mean Relative Absolute Error (MRAE), an ℓ1 relative error (also known as percentage error). Unsurprisingly, the leading algorithms based on Deep Neural Networks (DNN) are trained and tested using the MRAE metric. In contrast, the much simpler regression-based methods (which actually can work tolerably well) are trained to optimize a generic Root Mean Square Error (RMSE) and then tested in MRAE. Another issue with the regression methods is that, because in SR the linear systems are large and ill-posed, they are necessarily solved using regularization. However, hitherto the regularization has been applied at the spectrum level, whereas in MRAE the errors are measured per wavelength (i.e., per spectral channel) and then averaged. The two aims of this paper are, first, to reformulate the simple regressions so that they minimize a relative error metric in training (we formulate both ℓ2 and ℓ1 relative error variants, where the latter is MRAE) and, second, to adopt a per-channel regularization strategy. Together, our modifications to how the regressions are formulated and solved lead to up to a 14% improvement in mean performance and up to a 17% improvement in worst-case performance (measured with MRAE). Importantly, our best result narrows the gap between the regression approaches and the leading DNN model to around 8% in mean accuracy.
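The MRAE metric described above is simple to state; a minimal sketch (the function name, the toy spectra, and the epsilon guard are illustrative choices, not from the paper):

```python
# Mean Relative Absolute Error (MRAE): the l1 relative error, measured per
# wavelength (spectral channel) and then averaged; eps guards division by zero.
def mrae(gt, rec, eps=1e-9):
    return sum(abs(g - r) / (g + eps) for g, r in zip(gt, rec)) / len(gt)

gt  = [0.2, 0.4, 0.6, 0.8]     # toy ground-truth spectrum
rec = [0.22, 0.38, 0.6, 0.76]  # toy reconstruction
print(round(mrae(gt, rec), 4))
```

Note that the error is computed channel by channel before averaging, which is exactly why the abstract argues for a per-channel (rather than whole-spectrum) regularization strategy.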


Assuntos
Algoritmos , Redes Neurais de Computação , Imagens de Fantasmas
5.
Opt Express ; 28(7): 9327-9339, 2020 Mar 30.
Article in English | MEDLINE | ID: mdl-32225542

ABSTRACT

Images captured under hazy conditions (e.g., fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical model of image formation, which limits their performance. In this paper, we propose a post-processing technique that alleviates this limitation by constraining the result of the original method to be consistent with a popular physical model of image formation under haze. Our results improve upon those of the original methods both qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods.
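The abstract does not name the physical model; the standard choice in the dehazing literature is the Koschmieder-style atmospheric scattering model, sketched here on a single scalar channel (the specific values are illustrative, and assuming this is the model the paper refers to):

```python
# Standard physical haze model: I = J*t + A*(1 - t), where J is the haze-free
# radiance, t the transmission, and A the global airlight (atmospheric light).
def hazy(J, t, A):
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    # invert the model; clamp t to avoid amplifying noise where haze is dense
    return (I - A) / max(t, t_min) + A

J, t, A = 0.6, 0.5, 0.9
I = hazy(J, t, A)                 # synthesize a hazy observation
print(round(dehaze(I, t, A), 6))  # inverting the model recovers J
```

Enforcing consistency with a model of this form is what rules out physically impossible outputs such as over-saturated regions.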

6.
Sensors (Basel) ; 20(23)2020 Dec 02.
Article in English | MEDLINE | ID: mdl-33276453

ABSTRACT

By placing a color filter in front of a camera, we make new spectral sensitivities. The Luther-condition optimization solves for a color filter so that the camera's filtered sensitivities are as close as possible to being linearly related to the XYZ color matching functions (CMFs); that is, a filter is found that makes the camera more colorimetric. Arguably, the more general Vora-Value approach solves for the filter that best matches all possible target spectral sensitivity sets (e.g., any linear combination of the XYZ CMFs). A concern that we investigate here is that the filters found by the Luther and Vora-Value optimizations are different from one another. In this paper, we unify the Luther and Vora-Value approaches to prefilter design. We prove that if the target of the Luther-condition optimization is an orthonormal basis (a special linear combination of the XYZ CMFs whose components are mutually orthogonal and of unit length), the discovered Luther filter is also the filter that maximizes the Vora-Value. A key advantage of using the Luther-condition formulation to maximize the Vora-Value is that it is both simpler to implement and converges to its optimal answer more quickly. Experiments validate our method.
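An orthonormal basis of the kind the abstract refers to can be obtained, for example, by Gram-Schmidt orthonormalisation of the CMFs; a minimal sketch with short toy vectors standing in for the sampled XYZ CMFs (the paper does not specify this construction, so take it as one standard way to build such a basis):

```python
# Gram-Schmidt orthonormalisation: returns linear combinations of the input
# vectors that are mutually orthogonal and of unit length -- an orthonormal
# basis of the span of the inputs.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

# toy 4-sample stand-ins for the XYZ colour matching functions
cmfs = [[1.0, 2.0, 0.0, 1.0], [0.0, 1.0, 1.0, 2.0], [1.0, 0.0, 2.0, 0.0]]
basis = gram_schmidt(cmfs)
```

Each basis vector is a linear combination of the inputs, so matching this basis is equivalent (up to a linear transform) to matching the CMFs themselves.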

7.
Sensors (Basel) ; 20(21)2020 Nov 09.
Article in English | MEDLINE | ID: mdl-33182473

ABSTRACT

Spectral reconstruction algorithms recover spectra from RGB sensor responses. Recent methods-with the very best algorithms using deep learning-can already solve this problem with good spectral accuracy. However, the recovered spectra are physically incorrect in that they do not induce the RGBs from which they are recovered. Moreover, if the exposure of the RGB image changes then the recovery performance often degrades significantly-i.e., most contemporary methods only work for a fixed exposure. In this paper, we develop a physically accurate recovery method: the spectra we recover provably induce the same RGBs. Key to our approach is the idea that the set of spectra that integrate to the same RGB can be expressed as the sum of a unique fundamental metamer (spanned by the camera's spectral sensitivities and linearly related to the RGB) and a linear combination of a vector space of metameric blacks (orthogonal to the spectral sensitivities). Physically plausible spectral recovery resorts to finding a spectrum that adheres to the fundamental metamer plus metameric black decomposition. To further ensure spectral recovery that is robust to changes in exposure, we incorporate exposure changes in the training stage of the developed method. In experiments we evaluate how well the methods recover spectra and predict the actual RGBs and RGBs under different viewing conditions (changing illuminations and/or cameras). The results show that our method generally improves the state-of-the-art spectral recovery (with more stabilized performance when exposure varies) and provides zero colorimetric error. Moreover, our method significantly improves the color fidelity under different viewing conditions, with up to a 60% reduction in some cases.
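The fundamental metamer plus metameric black decomposition can be illustrated with a projection. A minimal sketch which, for simplicity, assumes the sensitivity vectors are already orthonormal (real camera sensitivities would first need orthonormalising, and the toy spectra are illustrative):

```python
# Decompose a spectrum into its fundamental metamer (the projection onto the
# span of the sensor sensitivities) plus a metameric black (the residual,
# which induces a zero response in every sensor). Assumes the sensitivity
# vectors are orthonormal.
def decompose(spec, sens):
    fund = [0.0] * len(spec)
    for s in sens:
        resp = sum(a * b for a, b in zip(spec, s))   # sensor response
        fund = [f + resp * si for f, si in zip(fund, s)]
    black = [a - f for a, f in zip(spec, fund)]
    return fund, black

sens = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]  # toy orthonormal sensors
spec = [0.3, 0.5, 0.2, 0.4]
fund, black = decompose(spec, sens)
```

Any spectrum of the form fundamental-plus-some-black integrates to exactly the same sensor responses, which is why searching within this decomposition guarantees the recovered spectrum reproduces the input RGB.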

8.
J Opt Soc Am A Opt Image Sci Vis ; 33(8): 1579-88, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27505656

ABSTRACT

Estimation of individual spectral cone fundamentals from color-matching functions is a classical and longstanding problem in color science. In this paper we propose a novel method to carry out this estimation based on a linear optimization technique, employing an assumption of a priori knowledge of the retinal absorptance functions. The result is an estimate of the combined lenticular and macular filtration for an individual, along with the nine coefficients of the linear combination that relates their color-matching functions to their estimated spectral cone fundamentals. We test the method on the individual Stiles and Burch color-matching functions and derive cone-fundamental estimates for different viewing fields and matching-experiment repetitions. We obtain cone-fundamental estimates that are remarkably similar to those available in the literature. This suggests that the method yields results that are close to the true fundamentals.


Subjects
Color Perception/physiology ; Retinal Cone Photoreceptor Cells/cytology ; Linear Models
9.
J Opt Soc Am A Opt Image Sci Vis ; 32(12): 2384-96, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26831392

ABSTRACT

This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.

10.
J Imaging ; 9(10)2023 Oct 07.
Article in English | MEDLINE | ID: mdl-37888321

ABSTRACT

Colour correction is the process of converting RAW RGB pixel values of digital cameras to a standard colour space such as CIE XYZ. A range of regression methods including linear, polynomial and root-polynomial least-squares have been deployed. However, in recent years, various neural network (NN) models have also started to appear in the literature as an alternative to classical methods. In the first part of this paper, a leading neural network approach is compared and contrasted with regression methods. We find that, although the neural network model supports improved colour correction compared with simple least-squares regression, it performs less well than the more advanced root-polynomial regression. Moreover, the relative improvement afforded by NNs, compared to linear least-squares, is diminished when the regression methods are adapted to minimise a perceptual colour error. Problematically, unlike linear and root-polynomial regressions, the NN approach is tied to a fixed exposure (and when exposure changes, the afforded colour correction can be quite poor). We explore two solutions that make NNs more exposure-invariant. First, we use data augmentation to train the NN for a range of typical exposures and second, we propose a new NN architecture which, by construction, is exposure-invariant. Finally, we look into how the performance of these algorithms is influenced when models are trained and tested on different datasets. As expected, the performance of all methods drops when tested with completely different datasets. However, we noticed that the regression methods still outperform the NNs in terms of colour correction, even though the relative performance of the regression methods does change based on the train and test datasets.

11.
J Opt Soc Am A Opt Image Sci Vis ; 29(7): 1199-210, 2012 Jul 01.
Article in English | MEDLINE | ID: mdl-22751384

ABSTRACT

There are many works in color that assume illumination change can be modeled by multiplying sensor responses by individual scaling factors. The early research in this area is sometimes grouped under the heading "von Kries adaptation": the scaling factors are applied to the cone responses. In more recent studies, both in psychophysics and in computational analysis, it has been proposed that scaling factors should be applied to linear combinations of the cones that have narrower support: they should be applied to the so-called "sharp sensors." In this paper, we generalize the computational approach to spectral sharpening in three important ways. First, we introduce spherical sampling as a tool that allows us to enumerate in a principled way all linear combinations of the cones. This allows us to, second, find the optimal sharp sensors that minimize a variety of error measures including CIE Delta E (previous work on spectral sharpening minimized RMS) and color ratio stability. Lastly, we extend the spherical sampling paradigm to the multispectral case. Here the objective is to model the interaction of light and surface in terms of color signal spectra. Spherical sampling is shown to improve on the state of the art.
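The diagonal, von Kries-style model of illumination change referred to above can be sketched as a per-channel scaling in the (possibly sharpened) sensor space; the function and variable names are illustrative:

```python
# Von Kries-type adaptation: an illumination change is modeled by scaling each
# (sharpened) channel by the ratio of the new to the old illuminant response.
def von_kries_adapt(resp, illum_src, illum_dst):
    return [r * d / s for r, s, d in zip(resp, illum_src, illum_dst)]

resp = [0.4, 0.6, 0.2]   # surface responses under the source illuminant
src  = [1.0, 0.8, 0.5]   # source illuminant (white-point) responses
dst  = [0.9, 0.9, 0.9]   # destination illuminant responses
print(von_kries_adapt(resp, src, dst))
```

Under this model, the per-channel ratio of any two surfaces is unchanged by an illumination change, which is the "color ratio stability" criterion the abstract optimises; sharpening seeks sensors for which this diagonal model holds as accurately as possible.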

12.
J Vis ; 12(6): 7, 2012 Jun 04.
Article in English | MEDLINE | ID: mdl-22665457

ABSTRACT

When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.


Subjects
Color Perception/physiology ; Color Vision/physiology ; Models, Neurological ; Psychophysics/methods ; Color ; Humans ; Lighting ; Photic Stimulation/methods ; Predictive Value of Tests ; Sensory Thresholds/physiology
13.
J Opt Soc Am A Opt Image Sci Vis ; 28(8): 1677-88, 2011 Aug 01.
Article in English | MEDLINE | ID: mdl-21811330

ABSTRACT

Brownian motion is a random process that finds application in many fields, and its relation to certain color perception phenomena has recently been observed. On this ground, Marini and Rizzi developed a retinex algorithm based on Brownian motion paths. However, while their approach has several advantages and delivers interesting results, it has a high computational complexity. We propose an efficient algorithm that generates pseudo-Brownian paths with a very important constraint: we can guarantee a lower bound to the number of visits to each pixel, as well as its average. Despite these constraints, we show that the paths generated have certain statistical similarities to random walk and Brownian motion. Finally, we present a retinex implementation that exploits the paths generated with our algorithm, and we compare some images it generates with those obtained with the McCann99 and Frankle and McCann's algorithms (two multiscale retinex implementations that have a low computational complexity). We find that our approach causes fewer artifacts and tends to require a smaller number of pixel comparisons to achieve similar results, thus compensating for the slightly higher computational complexity.

14.
IEEE Trans Image Process ; 30: 853-867, 2021.
Article in English | MEDLINE | ID: mdl-33226947

ABSTRACT

When we place a colored filter in front of a camera, the effective camera response functions are equal to the given camera spectral sensitivities multiplied by the filter spectral transmittance. In this article, we solve for the filter that makes the modified sensitivities as close as possible to a linear transformation of the color matching functions of the human visual system. When this linearity condition (sometimes called the Luther condition) is approximately met, the 'camera+filter' system can be used for accurate color measurement. We then reformulate our filter-design optimisation to make the sensor responses as close to the CIE XYZ tristimulus values as possible, given real measured surface and illuminant spectra. This data-driven method is in turn extended to incorporate constraints on the filter (smoothness and bounded transmission). Also, because how the optimisation is initialised is shown to impact the performance of the solved-for filters, a multi-initialisation optimisation is developed. Experiments demonstrate that, by taking pictures through our optimised color filters, we can make cameras significantly more colorimetric.

15.
IEEE Trans Pattern Anal Mach Intell ; 42(5): 1286-1287, 2020 May.
Article in English | MEDLINE | ID: mdl-31265383

ABSTRACT

The ColorChecker dataset is one of the most widely used image sets for evaluating and ranking illuminant estimation algorithms. However, this single set of images has at least 3 different sets of ground-truth (i.e., correct answers) associated with it. In the literature it is often asserted that one algorithm is better than another when the algorithms in question have been tuned and tested with the different ground-truths. In this short correspondence we present some of the background as to why the 3 existing ground-truths are different and go on to make a new single and recommended set of correct answers. Experiments reinforce the importance of this work in that we show that the total ordering of a set of algorithms may be reversed depending on whether we use the new or legacy ground-truth data.

16.
Interface Focus ; 8(4): 20180008, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-29951188

ABSTRACT

In computer vision, illumination is considered to be a problem that needs to be 'solved'. The colour cast due to illumination is removed to support colour-based image recognition and stable tracking (in and out of shadows), among other tasks. In this paper, I review historical and current algorithms for illumination estimation. In the classical approach, the illuminant colour is estimated by an ever more sophisticated analysis of simple image summary statistics often followed by a bias correction step. Bias correction is a function applied to the estimates made by a given illumination estimation algorithm to correct consistent errors in the estimations. Most recently, the full power, and much higher complexity, of deep learning has been deployed (where, effectively, the definition of the image statistics of interest and the type of analysis carried out are found as part of an overall optimization). In this paper, I challenge the orthodoxy of deep learning, i.e. that it is the best approach for illuminant estimation. We instead focus on the final bias correction stage found in many simple illumination estimation algorithms. There are two key insights in our method. First, we argue that the bias must be corrected in an exposure invariant way. Second, we show that this bias correction amounts to 'solving for a homography'. Homography-based illuminant estimation is shown to deliver leading illumination estimation performance (at a very small fraction of the complexity of deep learning methods).

17.
IEEE Trans Pattern Anal Mach Intell ; 39(7): 1482-1488, 2017 07.
Article in English | MEDLINE | ID: mdl-27333601

ABSTRACT

The angle between the RGBs of the measured and estimated illuminant colors (the recovery angular error) has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. Normally, the illuminant estimates are 'divided out' from the image to, hopefully, provide image colors that are not confounded by the color of the light. However, even though the reproduction is the same, the same scene might have a large range of recovery errors. In this work, the scale of the problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, defined as the angle between the RGB of a white surface when the actual and estimated illuminations are 'divided out'. Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best to worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
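The two metrics can be written down directly; a minimal sketch (vector and function names are illustrative):

```python
import math

# recovery error: angle between the estimated and measured illuminant RGBs.
# reproduction error: angle between the RGB of white with the estimate
# 'divided out' and perfect white [1, 1, 1].
def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm(u) * norm(v))))))

def recovery_error(est, actual):
    return angle_deg(est, actual)

def reproduction_error(est, actual):
    corrected = [a / e for a, e in zip(actual, est)]  # white, divided out
    return angle_deg(corrected, [1.0, 1.0, 1.0])
```

The reproduction error is zero exactly when dividing out the estimate maps white to a neutral color, which is the condition the corrected image actually depends on.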

18.
IEEE Trans Pattern Anal Mach Intell ; 28(1): 59-68, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16402619

ABSTRACT

This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
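In this line of work, the 1D illuminant-invariant image is typically formed by projecting per-pixel log-chromaticities along a calibrated, camera-specific direction; a hypothetical sketch under that assumption (the angle theta is an assumed calibration input, not a value from the paper):

```python
import math

# 1D illuminant-invariant gray-scale value for one pixel: project the
# log-chromaticity coordinates along a camera-specific direction theta
# (obtained by a separate calibration, assumed known here).
def invariant(r, g, b, theta):
    x, y = math.log(r / g), math.log(b / g)
    return x * math.cos(theta) + y * math.sin(theta)

# same surface, lit and shadowed, mapped with an illustrative theta
print(round(invariant(0.4, 0.2, 0.1, 0.0), 4))
```

Because shadowed and lit pixels of the same surface project to (approximately) the same value along the calibrated direction, an image built from this quantity is shadow-free.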


Subjects
Algorithms ; Artifacts ; Artificial Intelligence ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Imaging, Three-Dimensional/methods ; Pattern Recognition, Automated/methods ; Information Storage and Retrieval/methods
19.
IEEE Trans Image Process ; 24(5): 1460-70, 2015 May.
Article in English | MEDLINE | ID: mdl-25769139

ABSTRACT

Cameras record three color responses (RGB) which are device dependent. Camera coordinates are mapped to a standard color space, such as XYZ (useful for color measurement), by a mapping function, e.g., a simple 3×3 linear transform (usually derived through regression). This mapping, which we will refer to as linear color correction (LCC), has been demonstrated to work well in a number of studies. However, it can map [Formula: see text] to XYZs with high error. The advantage of LCC is that it is independent of camera exposure. An alternative and potentially more powerful method for color correction is polynomial color correction (PCC). Here, the R, G, and B values at a pixel are extended by polynomial terms. For a given calibration training set, PCC can significantly reduce the colorimetric error. However, the PCC fit depends on exposure, i.e., as exposure changes, the vector of polynomial components is altered in a nonlinear way, which results in hue and saturation shifts. This paper proposes a new polynomial-type regression, loosely related to the idea of fractional polynomials, which we call root-PCC (RPCC). Our idea is to take each k-degree term in a polynomial expansion and take its kth root. It is easy to show that terms defined in this way scale with exposure. RPCC is a simple (low-complexity) extension of LCC. The experiments presented in this paper demonstrate that RPCC enhances color-correction performance on real and synthetic data.
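The exposure-scaling property of root-polynomial terms is easy to verify; a minimal sketch for the degree-2 expansion (the term set shown is the natural degree-2 choice, given as an illustration):

```python
import math

# Degree-2 root-polynomial expansion: each k-degree term is taken to its
# k-th root, so every term scales linearly with exposure. Scaling the RGB
# by k therefore scales the whole feature vector by k.
def root_poly_expand(r, g, b):
    return [r, g, b, math.sqrt(r * g), math.sqrt(r * b), math.sqrt(g * b)]

rgb = (0.2, 0.5, 0.8)
k = 3.0                                           # exposure change factor
scaled = root_poly_expand(*(k * c for c in rgb))  # expand the brighter pixel
base   = root_poly_expand(*rgb)
print(all(abs(s - k * t) < 1e-9 for s, t in zip(scaled, base)))
```

Contrast this with an ordinary degree-2 term such as r*g, which scales by k² and is the source of PCC's exposure-dependent hue and saturation shifts.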

20.
PLoS One ; 9(2): e87989, 2014.
Article in English | MEDLINE | ID: mdl-24586299

ABSTRACT

The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.


Subjects
Color Perception/physiology ; Color ; Discrimination, Psychological/physiology ; Light ; Adult ; Female ; Humans ; Male ; Young Adult