Results 1 - 9 of 9
1.
J Opt Soc Am A Opt Image Sci Vis ; 36(1): 96-104, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30645345

ABSTRACT

In this paper, we propose two methods of calculating theoretically maximal metamer mismatch volumes. Unlike prior-art techniques, our methods make no assumptions about the shape of the spectra on the boundary of the mismatch volumes. Both methods employ a spherical sampling approach but calculate the mismatch volumes in different ways: the first uses a linear programming optimization, while the second is a computational-geometry approach based on half-space intersection. We show that under certain conditions the theoretically maximal metamer mismatch volume is significantly larger than the one approximated by a prior-art method.
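The spherical-sampling-plus-linear-programming construction described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the Gaussian sensitivities, the 0.5 reference reflectance, and the 200 sample directions are all invented, and SciPy's `linprog` is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy spectral sensitivities over n sample wavelengths (hypothetical Gaussians).
n = 31
wl = np.linspace(400, 700, n)

def gauss(mu, sigma=40.0):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

A = np.stack([gauss(450), gauss(540), gauss(610)])   # observer/device 1 (3 x n)
B = np.stack([gauss(460), gauss(530), gauss(600)])   # observer/device 2 (3 x n)

s0 = np.full(n, 0.5)          # a reference reflectance
c = A @ s0                    # its response under observer 1

def extreme_point(d):
    """Maximise d . (B s) over reflectances 0 <= s <= 1 with A s = c,
    i.e. over all metamers of s0 for observer 1, with no shape assumptions."""
    res = linprog(-(B.T @ d), A_eq=A, b_eq=c,
                  bounds=[(0, 1)] * n, method="highs")
    return B @ res.x

# Spherical sampling: probe the mismatch body along many random directions.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
boundary = np.array([extreme_point(d) for d in dirs])
```

Each row of `boundary` is an extreme point of the mismatch volume in the second observer's response space; the half-space-intersection variant would instead intersect the supporting half-spaces found along the same directions.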

2.
Plant Methods ; 13: 117, 2017.
Article in English | MEDLINE | ID: mdl-29299051

ABSTRACT

BACKGROUND: Plants demonstrate dynamic growth phenotypes that are determined by genetic and environmental factors. Phenotypic analysis of growth features over time is a key approach to understanding how plants interact with environmental change as well as respond to different treatments. Although the importance of measuring dynamic growth traits is widely recognised, available open software tools are limited in terms of batch image processing, multiple-trait analysis, software usability and cross-referencing of results between experiments, making automated phenotypic analysis problematic.

RESULTS: Here, we present Leaf-GP (Growth Phenotypes), an easy-to-use and open software application that can be executed on different computing platforms. To serve diverse scientific communities, we provide three software versions: a graphical user interface (GUI) for personal computer (PC) users, a command-line interface for high-performance computing (HPC) users, and a well-commented interactive Jupyter Notebook (also known as the iPython Notebook) for computational biologists and computer scientists. The software can extract multiple growth traits automatically from large image datasets. We have used it in Arabidopsis thaliana and wheat (Triticum aestivum) growth studies at the Norwich Research Park (NRP, UK). By quantifying a number of growth phenotypes over time, we have identified diverse plant growth patterns between different genotypes under several experimental conditions. Because Leaf-GP has been evaluated with noisy image series acquired by different imaging devices (e.g. smartphones and digital cameras) and still produced reliable biological outputs, we believe that our automated analysis workflow and customised computer-vision-based feature extraction software can serve a broader plant research community in growth and development studies. Furthermore, because we implemented Leaf-GP on open Python-based computer vision, image analysis and machine learning libraries, we believe that our software not only contributes to biological research, but also demonstrates how to use existing open numeric and scientific libraries (e.g. Scikit-image, OpenCV, SciPy and Scikit-learn) to build sound plant phenomics analytic solutions in an efficient and effective way.

CONCLUSIONS: Leaf-GP is a sophisticated software application that provides three approaches to quantifying growth phenotypes from large image series. We demonstrate its usefulness and high accuracy in two biological applications: (1) quantifying growth traits for Arabidopsis genotypes under two temperature conditions; and (2) measuring wheat growth in the glasshouse over time. The software is easy to use and cross-platform: it can be executed on Mac OS, Windows and HPC systems with open Python-based scientific libraries preinstalled. Our work shows how to integrate computer vision, image analysis, machine learning and software engineering in plant phenomics software. To serve the plant research community, our modular source code, detailed comments, executables (.exe for Windows; .app for Mac) and experimental results are freely available at https://github.com/Crop-Phenomics-Group/Leaf-GP/releases.
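As an illustration of the kind of time-series trait extraction such a pipeline performs, the sketch below measures projected leaf area on synthetic images using a crude green-dominance segmentation. This is a hypothetical stand-in written for this summary, not Leaf-GP's actual code; the threshold, image size and synthetic "rosette" are all invented.

```python
import numpy as np

def projected_leaf_area(rgb, green_margin=10):
    """Count pixels whose green channel dominates red and blue --
    a crude stand-in for the vegetative-pixel segmentation step."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r + green_margin) & (g > b + green_margin)
    return int(mask.sum())

def synth_image(radius, size=64):
    """A synthetic top-down 'rosette': a green disc on a grey background."""
    img = np.full((size, size, 3), 120, dtype=np.uint8)
    yy, xx = np.mgrid[:size, :size]
    disc = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2
    img[disc] = (40, 180, 40)
    return img

# A growth trait over three time points: area should increase with plant size.
areas = [projected_leaf_area(synth_image(r)) for r in (5, 10, 20)]
```

Real pipelines replace the threshold rule with calibrated colour-space segmentation and add further traits (perimeter, compactness, leaf count) per time point.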

3.
J Opt Soc Am A Opt Image Sci Vis ; 33(11): 2166-2177, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27857433

ABSTRACT

Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space delimited by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, each defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering estimation accuracy comparable to higher-order methods, it maintains the linear colorimetric relations of colors within hue planes. As a significant consequence, it also renders the colorimetric estimates invariant to the exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC that uses constrained least squares in the optimization, where the subregions can be chosen freely in number and position to optimize the results while enforcing transform continuity at the subregion boundaries. The method is compared to a selection of state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
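The sector-wise linear mapping and its exposure invariance can be illustrated as follows. The two sector matrices below are invented examples that merely satisfy the white-point-preserving constraint (each maps RGB white to the same XYZ white); this is not the paper's constrained-least-squares fitting procedure.

```python
import numpy as np

NEUTRAL = np.ones(3) / np.sqrt(3)

def hue_angle(rgb):
    """Angle of rgb projected onto the plane orthogonal to the neutral axis."""
    p = rgb - rgb.dot(NEUTRAL) * NEUTRAL
    u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # fixed basis of that plane
    v = np.cross(NEUTRAL, u)
    return np.arctan2(p.dot(v), p.dot(u)) % (2 * np.pi)

# Two invented sector matrices; each is diag(white_xyz) times a row-stochastic
# matrix, so both map RGB white (1, 1, 1) to the same XYZ white point.
white_xyz = np.array([0.95, 1.00, 1.09])
M1 = np.diag(white_xyz) @ np.array([[0.90, 0.05, 0.05],
                                    [0.10, 0.80, 0.10],
                                    [0.05, 0.05, 0.90]])
M2 = np.diag(white_xyz) @ np.array([[0.85, 0.10, 0.05],
                                    [0.05, 0.90, 0.05],
                                    [0.10, 0.05, 0.85]])

def correct(rgb):
    """Pick the transform for the hue sector containing rgb and apply it."""
    M = M1 if hue_angle(rgb) < np.pi else M2      # two toy sectors split at pi
    return M @ rgb
```

Because the hue angle is unchanged by scaling and each sector transform is linear, `correct(k * rgb)` equals `k * correct(rgb)` for any exposure factor `k`, which is the invariance property the abstract highlights.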

4.
J Opt Soc Am A Opt Image Sci Vis ; 33(4): 589-99, 2016 04 01.
Article in English | MEDLINE | ID: mdl-27140768

ABSTRACT

In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab, a difficult and lengthy procedure, or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the spectral sensitivities recovered by a regression method. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank pair splits the space in which the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data are not linear and, for the first time, allows estimation of the effective sensitivities of devices that may not even have a "raw mode."
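The rank-pair half-space idea can be sketched in a few lines. This is a toy simulation with a hypothetical Gaussian sensor and random stimuli, not the paper's estimator; it only demonstrates how each ranked pair constrains the sensor to one side of a hyperplane.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                        # spectral samples
grid = np.linspace(0, 1, n)
true_sensor = np.exp(-0.5 * ((grid - 0.6) / 0.12) ** 2)   # hidden sensitivity

# Random spectral stimuli and the (noiseless) responses they induce.
stimuli = rng.uniform(size=(60, n))
responses = stimuli @ true_sensor

# Sorting the responses gives ranked pairs; each consecutive pair (i, j)
# with response_i > response_j yields the half-space constraint
# (stimuli_i - stimuli_j) . s > 0 on the unknown sensor s.
order = np.argsort(-responses)
diffs = stimuli[order[:-1]] - stimuli[order[1:]]

def feasible(s):
    """Is sensor s inside the intersection of all rank half-spaces?"""
    return bool(np.all(diffs @ s > 0))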

5.
J Opt Soc Am A Opt Image Sci Vis ; 32(3): 381-91, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-26366649

ABSTRACT

In this article, we describe a spectral sensitivity measurement procedure at the National Physical Laboratory, London, aimed at obtaining ground-truth spectral sensitivity functions for the Nikon D5100 and Sigma SD1 Merrill cameras. The novelty of our data is that the potential measurement errors are estimated at each wavelength. We determine how well the measured spectral sensitivity functions represent the actual camera sensitivity functions (as a function of wavelength). The second contribution of this paper is to test the performance of several leading sensor estimation techniques implemented from the literature using measured and synthetic data, and to evaluate them against the ground-truth data for the two cameras. We conclude that the estimation techniques tested are not sufficiently accurate when compared with our measured ground truth and that there remains significant scope to improve estimation algorithms for spectral estimation. To help in this endeavor, we make all our data available online for the community.

6.
IEEE Trans Image Process ; 24(5): 1460-70, 2015 May.
Article in English | MEDLINE | ID: mdl-25769139

ABSTRACT

Cameras record three color responses (RGB) which are device dependent. Camera coordinates are mapped to a standard color space, such as XYZ (useful for color measurement), by a mapping function, e.g., a simple 3×3 linear transform, usually derived through regression. This mapping, which we refer to as linear color correction (LCC), has been demonstrated to work well in a number of studies. However, it can map RGBs to XYZs with high error. The advantage of LCC is that it is independent of camera exposure. An alternative and potentially more powerful method for color correction is polynomial color correction (PCC), in which the R, G, and B values at a pixel are extended by polynomial terms. For a given calibration training set, PCC can significantly reduce the colorimetric error. However, the PCC fit depends on exposure: as exposure changes, the vector of polynomial components is altered in a nonlinear way, which results in hue and saturation shifts. This paper proposes a new polynomial-type regression, loosely related to the idea of fractional polynomials, which we call root-PCC (RPCC). Our idea is to take each term in a polynomial expansion and take the kth root of each degree-k term. It is easy to show that terms defined in this way scale with exposure. RPCC is a simple (low-complexity) extension of LCC. The experiments presented in this paper demonstrate that RPCC enhances color correction performance on real and synthetic data.
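The core scaling property of the root-polynomial expansion is easy to verify numerically. A minimal sketch follows; the degree-2 term set matches the construction described in the abstract, while the RGB values and exposure factor are arbitrary.

```python
import numpy as np

def poly2(rgb):
    """Plain degree-2 polynomial expansion (exposure dependent):
    the cross terms scale with the square of an exposure factor."""
    r, g, b = rgb
    return np.array([r, g, b, r * g, r * b, g * b])

def root_poly2(rgb):
    """Root-polynomial expansion: take the k-th root of every degree-k
    term, so each term scales linearly with exposure."""
    r, g, b = rgb
    return np.array([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)])

rgb = np.array([0.4, 0.2, 0.1])
k = 3.0                                   # an exposure change
plain_scales = np.allclose(poly2(k * rgb), k * poly2(rgb))           # False
root_scales = np.allclose(root_poly2(k * rgb), k * root_poly2(rgb))  # True
```

Because `root_poly2(k * rgb) == k * root_poly2(rgb)`, a linear regression on these features commutes with exposure changes, avoiding the hue and saturation shifts that plain PCC suffers.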

7.
J Opt Soc Am A Opt Image Sci Vis ; 31(7): 1577-87, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-25121446

ABSTRACT

Solid-state lighting is becoming a popular light source for color vision experiments. One of the advantages of light-emitting diodes (LEDs) is the possibility of shaping the target light spectrum according to the experimenter's needs. In this paper, we present a method for creating metameric lights with an LED-based spectrally tunable illuminator. The equipment we use consists of six Gamma Scientific RS-5B lamps, each containing nine different LEDs, and a 1 m integrating sphere. We provide a method for describing the (almost) entire set of illuminant metamers. It will be shown that the main difficulty in describing this set arises from the intensity-dependent peak-wavelength shift exhibited by the majority of the LEDs used in illuminators of this type. We define the normalized metamer set describing all illuminator spectra that colorimetrically match a given chromaticity. Finally, we describe a method for choosing the smoothest or least smooth metamer from the entire set.
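The linear-algebra core of an illuminant metamer set can be sketched as follows. The Gaussian LED primaries and observer curves below are invented stand-ins (not the RS-5B spectra or CIE colour matching functions), and this toy model deliberately ignores the intensity-dependent peak-wavelength shift that the paper addresses.

```python
import numpy as np

n_led, n_wl = 9, 33
wl = np.linspace(400, 700, n_wl)

# Hypothetical LED primaries: Gaussian peaks spread across the visible range.
peaks = np.linspace(420, 680, n_led)
P = np.exp(-0.5 * ((wl[None, :] - peaks[:, None]) / 15.0) ** 2)  # (n_led, n_wl)

# Toy observer sensitivities standing in for colour matching functions.
S = np.stack([np.exp(-0.5 * ((wl - mu) / 45.0) ** 2)
              for mu in (600.0, 550.0, 450.0)])

T = S @ P.T                         # (3, n_led): LED drive weights -> tristimulus
target = T @ np.full(n_led, 0.5)    # colour produced by equal half-power drive

w0, *_ = np.linalg.lstsq(T, target, rcond=None)   # one matching drive vector
# Null space of T: drive directions that change the spectrum but not the colour.
_, _, Vt = np.linalg.svd(T)
null_basis = Vt[3:]                 # (n_led - 3) spectrum-changing directions

metamer = w0 + 0.1 * null_basis[0]  # a different drive, same tristimulus values
```

Moving `w0` along the null-space directions (subject to the lamps' drive limits) traces out the metamer set; picking the smoothest or least smooth member is then an optimization over those null-space coefficients.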


Subjects
Color , Lighting/instrumentation , Theoretical Models , Refractometry/instrumentation , Semiconductors , Computer Simulation , Computer-Aided Design , Equipment Design , Equipment Failure Analysis , Light , Radiation Scattering
8.
PLoS One ; 9(2): e87989, 2014.
Article in English | MEDLINE | ID: mdl-24586299

ABSTRACT

The phenomenon of colour constancy in human visual perception keeps surface colours constant despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds, and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than along the atypical locus, and is poorest for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and colour constancy therefore diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased toward the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.


Subjects
Color Perception/physiology , Color , Psychological Discrimination/physiology , Light , Adult , Female , Humans , Male , Young Adult
9.
IEEE Trans Med Imaging ; 27(12): 1769-81, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19033093

ABSTRACT

This paper describes the use of color image analysis to automatically discriminate between oesophagus, stomach, small intestine, and colon tissue in wireless capsule endoscopy (WCE). WCE uses "pill-cam" technology to recover color video imagery from the entire gastrointestinal tract. Accurately reviewing and reporting these data is a vital part of the examination, but it is tedious and time consuming. Automatic image analysis tools play an important role in supporting the clinician and speeding up this process. Our approach first divides the WCE image into subimages and rejects all subimages in which tissue is not clearly visible. We then create a feature vector combining color, texture, and motion information from the entire image and the valid subimages. Color features are derived from hue-saturation histograms compressed using a hybrid transform that incorporates the discrete cosine transform and principal component analysis. A second feature combining color and texture information is derived using local binary patterns. The video is segmented into meaningful parts using support vector or multivariate Gaussian classifiers built within the framework of a hidden Markov model. We present experimental results that demonstrate the effectiveness of this method.
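The local binary pattern texture feature mentioned above can be sketched as follows: each pixel is encoded by thresholding its eight neighbours against it, and the histogram of the resulting 8-bit codes serves as a texture descriptor. This is a basic 8-neighbour LBP in plain NumPy, not necessarily the exact variant used in the paper.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern code for each interior pixel."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                      # centre pixels
    # Offsets of the 8 neighbours, clockwise from top-left; each contributes
    # one bit: 1 if the neighbour is >= the centre, else 0.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(int) << bit)
    return code

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes: the per-image texture feature."""
    h = np.bincount(lbp_image(gray).ravel(), minlength=bins)
    return h / h.sum()
```

In a pipeline like the one described, such histograms (per subimage, possibly combined with colour statistics) would feed the classifiers that the hidden Markov model then smooths over time.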


Subjects
Capsule Endoscopy/methods , Image Enhancement/methods , Computer-Assisted Image Interpretation/methods , Lower Gastrointestinal Tract/anatomy & histology , Upper Gastrointestinal Tract/anatomy & histology , Algorithms , Capsule Endoscopes , Color , Colorimetry , Data Compression/methods , Humans , Markov Chains , Normal Distribution , Automated Pattern Recognition/methods