Results 1 - 8 of 8
1.
Heliyon ; 10(11): e32297, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38947432

ABSTRACT

The authentication process involves all the supply chain stakeholders, and it is also adopted to verify food quality and safety. Food authentication tools are an essential part of traceability systems, as they provide information on the credibility of origin, species/variety identity, geographical provenance, and production entity. Moreover, these systems are useful to evaluate the effect of transformation processes, conservation strategies, and the reliability of packaging and distribution flows on food quality and safety. In this manuscript, we identified the innovative characteristics of food authentication systems that respond to market challenges, such as simplification, high sensitivity, and non-destructive operation during authentication procedures. We also discussed the potential of current identification systems based on molecular markers (chemical, biochemical, genetic) and the effectiveness of new technologies, with reference to the miniaturized systems offered by nanotechnologies and to computer vision systems linked to artificial intelligence processes. This overview emphasizes the importance of convergent technologies in food authentication, supporting molecular markers with the innovation offered by emerging technologies derived from biotechnology and informatics. The potential of these strategies was evaluated on real examples of high-value food products. Technological innovation can therefore strengthen the system of molecular markers to meet current market needs; however, food production processes are in profound evolution. Food 3D-printing and the introduction of new raw materials open new challenges for food authentication, which will require both an update of the current regulatory framework and the development and adoption of new analytical systems.

2.
J Opt Soc Am A Opt Image Sci Vis ; 41(2): 185-194, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38437331

ABSTRACT

Multispectral imaging is a technique that captures data across several bands of the light spectrum, and it can be useful in many computer vision fields, including color constancy. We propose a method that exploits multispectral imaging for illuminant estimation, and then applies illuminant correction in the raw RGB domain to achieve computational color constancy. Our proposed method is composed of two steps: first, a selected number of existing camera-independent algorithms for illuminant estimation, originally designed for RGB data, are applied in generalized form to work with multispectral data. We demonstrate that the sole multispectral extension of such algorithms is not sufficient to achieve color constancy, and thus we introduce a second step, in which we re-elaborate the multispectral estimations before conversion into raw RGB with the use of the camera response function. Our results on the NUS dataset show that an improvement of 60% in color constancy performance, measured in terms of reproduction angular error, can be obtained with our method compared to the traditional raw RGB pipeline.
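A minimal sketch of the two-step pipeline described above, assuming a gray-world-style estimator as the camera-independent algorithm and a hypothetical 3×N camera response matrix (the paper generalizes several such algorithms; this illustrates only one):

```python
import numpy as np

def gray_world_multispectral(img):
    """Gray-world illuminant estimate generalized to N spectral bands.

    img: array of shape (H, W, N); returns a unit-norm length-N estimate
    (scale is irrelevant for color constancy)."""
    est = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return est / np.linalg.norm(est)

def spectral_to_rgb(illuminant, camera_response):
    """Project an N-band illuminant into raw RGB via a 3xN camera response matrix."""
    rgb = camera_response @ illuminant
    return rgb / np.linalg.norm(rgb)

# Toy example: 8 spectral bands with a synthetic (hypothetical) camera response.
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 8))      # stand-in for a multispectral image
response = rng.random((3, 8))      # stand-in for a real camera response function
e_spec = gray_world_multispectral(scene)
e_rgb = spectral_to_rgb(e_spec, response)
```

The per-band mean plays the role of the multispectral extension; the projection through the response matrix is the conversion-to-raw-RGB step the abstract refers to.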

3.
J Opt Soc Am A Opt Image Sci Vis ; 41(3): 516-526, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38437443

ABSTRACT

We introduce a method that enhances RGB color constancy accuracy by combining neural network and k-means clustering techniques. Our approach stands out from previous works because we combine multispectral and color information to estimate illuminants. Furthermore, we investigate the combination of illuminant estimation in the RGB color and in the spectral domains as a strategy to provide a refined estimation in the RGB color domain. Our investigation can be divided into three main points: (1) identify the spatial resolution for sampling the input image, in terms of RGB color and spectral information, that brings the highest performance; (2) determine whether it is more effective to predict the illuminant in the spectral or in the RGB color domain; and finally, (3) assuming that the illuminant is in fact predicted in the spectral domain, investigate whether it is better to have a loss function defined in the RGB color or in the spectral domain. Experiments are carried out on NUS, a standard dataset of multispectral radiance images with an annotated spectral global illuminant. Among the several options considered, the best results are obtained with a model trained to predict the illuminant in the spectral domain using an RGB color loss function. In terms of comparison with the state of the art, this solution improves the recovery angular error metric by 66% compared to the best tested spectral method, and by 41% compared to the best tested RGB method.
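As a hedged illustration of point (3), an RGB-domain loss for a spectral illuminant prediction might look as follows; the 3×N camera response matrix and the choice of angular error as the loss are assumptions for the sketch, not the paper's exact training setup:

```python
import numpy as np

def rgb_angular_loss(pred_spectral, gt_rgb, camera_response):
    """RGB-domain loss for a spectral illuminant prediction: project the
    predicted spectrum to raw RGB with a 3xN camera response matrix, then
    take the angle (in degrees) to the ground-truth RGB illuminant."""
    pred_rgb = camera_response @ pred_spectral
    cos = pred_rgb @ gt_rgb / (np.linalg.norm(pred_rgb) * np.linalg.norm(gt_rgb))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Sanity check with an identity "camera response" over 3 bands:
# a prediction that matches the ground truth gives zero loss.
R = np.eye(3)
e = np.array([0.9, 1.0, 0.4])
loss = rgb_angular_loss(e, R @ e, R)
```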

4.
Article in English | MEDLINE | ID: mdl-37027541

ABSTRACT

Full-reference image quality measures are a fundamental tool to approximate the human visual system in various applications for digital data management: from retrieval to compression to detection of unauthorized uses. Inspired by both the effectiveness and the simplicity of the hand-crafted Structural Similarity Index Measure (SSIM), in this work we present a framework for the formulation of SSIM-like image quality measures through genetic programming. We explore different terminal sets, defined from the building blocks of structural similarity at different levels of abstraction, and we propose a two-stage genetic optimization that exploits hoist mutation to constrain the complexity of the solutions. Our optimized measures are selected through a cross-dataset validation procedure, which results in superior performance against different versions of structural similarity, measured as correlation with human mean opinion scores. We also demonstrate how, by tuning on specific datasets, it is possible to obtain solutions that are competitive with (or even outperform) more complex image quality measures.
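For context, the structural-similarity building blocks that such a genetic-programming search could use as terminals can be sketched as follows. This single-window, whole-image form is a simplification of the standard locally windowed SSIM, using the usual constants for 8-bit images:

```python
import numpy as np

def ssim_terms(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM building blocks: luminance, contrast, structure.

    Standard SSIM averages these over local windows; computing them once
    over the whole image just illustrates the terminals a GP search could
    recombine. Constants assume pixel values in [0, 255]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    c3 = c2 / 2
    lum = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    con = (2 * np.sqrt(vx) * np.sqrt(vy) + c2) / (vx + vy + c2)
    struct = (cov + c3) / (np.sqrt(vx) * np.sqrt(vy) + c3)
    return lum, con, struct

def ssim(x, y):
    """Product of the three terms, as in the original SSIM formulation."""
    l, c, s = ssim_terms(x, y)
    return l * c * s
```

A GP individual would be an expression tree over terminals like these (and coarser or finer variants), which is what the hoist-mutation constraint keeps from growing unboundedly.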

5.
J Imaging ; 8(9)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36135398

ABSTRACT

The angle-retaining color space (ARC) and the corresponding chromaticity diagram encode information following a cylindrical color model. Their main property is that angular distances in RGB are mapped into Euclidean distances in the ARC chromatic components, making the color space suitable for data representation in the domain of color constancy. In this paper, we present an in-depth analysis of various properties of ARC: we document the variations in the numerical precisions of two alternative formulations of the ARC-to-RGB transformation and characterize how various perturbations in RGB impact the ARC representation. This was done empirically for the ARC diagram in a direct comparison against other commonly used chromaticity diagrams, and analytically for the ARC space with respect to its three components. We conclude by describing the color space in terms of perceptual uniformity, suggesting the need for new perceptual color metrics.
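To illustrate the idea of an angle-based chromaticity, here is a generic spherical-angle mapping in the spirit of ARC; it is not the actual ARC transformation (whose exact formulation and numerical-precision variants are what the paper analyzes), but it shows the intensity-invariance property such a representation has:

```python
import numpy as np

def spherical_chromaticity(rgb):
    """Map an RGB vector to two angles: azimuth in the RG plane and
    elevation toward B. A generic angle-based chromaticity, used here
    only as a stand-in for the ARC mapping."""
    r, g, b = rgb
    azimuth = np.arctan2(g, r)
    elevation = np.arctan2(b, np.hypot(r, g))
    return np.array([azimuth, elevation])

# Intensity invariance: scaling an RGB vector leaves the angles unchanged,
# so the representation depends on chromatic direction only.
v = np.array([0.6, 0.4, 0.2])
a1 = spherical_chromaticity(v)
a2 = spherical_chromaticity(3 * v)
```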

6.
J Opt Soc Am A Opt Image Sci Vis ; 38(9): 1349-1356, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34613142

ABSTRACT

Computational color constancy algorithms are commonly evaluated only through angular error analysis on annotated datasets of static images. The widespread use of videos in consumer devices motivated us to define a richer methodology for color constancy evaluation. To this end, temporal and spatial stability are defined here to determine the degree of sensitivity of color constancy algorithms to variations in the scene that do not depend on the illuminant source, such as moving subjects or a moving camera. Our evaluation methodology is applied to compare several color constancy algorithms on stable sequences belonging to the Gray Ball and Burst Color Constancy video datasets. The stable sequences, identified using a general-purpose procedure, are made available for public download to encourage future research. Our investigation proves the importance of evaluating color constancy algorithms according to multiple metrics, instead of angular error alone. For example, the popular fully convolutional color constancy with confidence-weighted pooling algorithm is consistently the best performing solution for error evaluation, but it is often surpassed in terms of stability by the traditional gray edge algorithm, and by the more recent sensor-independent illumination estimation algorithm.
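One way to make the notion of temporal stability concrete: on a sequence where the illuminant is known to be constant, measure the angular deviation between consecutive per-frame estimates. This is an illustrative metric under that assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def angular_error(a, b):
    """Angle in degrees between two illuminant vectors."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def temporal_stability(estimates):
    """Mean angular deviation between consecutive per-frame illuminant
    estimates on a stable sequence; lower means more stable."""
    return float(np.mean([angular_error(estimates[i], estimates[i + 1])
                          for i in range(len(estimates) - 1)]))

# A perfectly stable algorithm on a stable sequence scores 0.
ests = [np.array([1.0, 0.9, 0.8]) for _ in range(5)]
score = temporal_stability(ests)
```

An algorithm can score well on mean angular error yet flicker from frame to frame; this metric penalizes exactly that, which is the abstract's point about evaluating beyond angular error alone.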

7.
Sensors (Basel) ; 21(2)2021 Jan 15.
Article in English | MEDLINE | ID: mdl-33467700

ABSTRACT

We address the task of classifying car images at multiple levels of detail, ranging from the top-level car type down to the specific car make, model, and year. We analyze existing datasets for car classification and identify CompCars as an excellent starting point for our task. We show that convolutional neural networks achieve an accuracy above 90% on the finest-level classification task. This high performance, however, is scarcely representative of real-world situations, as it is evaluated on a biased training/test split. In this work, we revisit the CompCars dataset by first defining a new training/test split that better represents real-world scenarios, setting a more realistic baseline of 61% accuracy on the new test set. We also propagate the existing (but limited) type-level annotation to the entire dataset, and we finally provide a car-tight bounding box for each image, automatically defined through an ad hoc car detector. To evaluate this revisited dataset, we design and implement three different approaches to car classification, two of which exploit the hierarchical nature of car annotations. Our experiments show that higher-level classification in terms of car type positively impacts classification at a finer grain, now reaching 70% accuracy. The achieved performance constitutes a baseline benchmark for future research, and our enriched set of annotations is made available for public download.
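A minimal sketch of one way type-level predictions can help fine-grained classification: bias each fine-grained model's score with the logit of its parent type. The hierarchy mapping and the additive combination here are illustrative assumptions, not the paper's three actual approaches:

```python
import numpy as np

# Hypothetical hierarchy: each fine-grained model index maps to a car type.
MODEL_TO_TYPE = np.array([0, 0, 1, 1, 2])  # 5 models grouped into 3 types

def hierarchical_predict(type_logits, model_logits, alpha=1.0):
    """Combine coarse and fine classifiers: each model inherits the logit
    of its parent type, scaled by alpha, before the argmax."""
    combined = model_logits + alpha * type_logits[MODEL_TO_TYPE]
    return int(np.argmax(combined))

# The fine classifier alone would pick model 2, but a confident
# type prediction for type 0 steers the decision to a model of that type.
type_logits = np.array([2.0, 0.0, 0.0])
model_logits = np.array([0.5, 0.4, 1.0, 0.9, 0.2])
pred = hierarchical_predict(type_logits, model_logits)
```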

8.
J Opt Soc Am A Opt Image Sci Vis ; 37(11): 1721-1730, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-33175748

ABSTRACT

Color constancy algorithms are typically evaluated with a statistical analysis of the recovery angular error and the reproduction angular error between the estimated and ground truth illuminants. Such analysis provides information about only the magnitude of the errors, and not about their chromatic properties. We propose an Angle-Retaining Chromaticity diagram (ARC) for the visual analysis of the estimated illuminants and the corresponding errors. We provide both quantitative and qualitative proof of the superiority of ARC in preserving angular distances compared to other chromaticity diagrams, making it possible to quantify the reproduction and recovery errors in terms of Euclidean distances on a plane. We present two case studies for the application of the ARC diagram in the visualization of the ground truth illuminants of color constancy datasets, and the visual analysis of error distributions of color constancy algorithms.
