Results 1 - 5 of 5
1.
Sci Rep; 10(1): 17557, 2020 Oct 16.
Article in English | MEDLINE | ID: mdl-33067502

ABSTRACT

The digestive health of cows is one of the primary factors that determine their well-being and productivity. Under- and over-feeding are both commonplace in the beef and dairy industries, leading to welfare issues, negative environmental impacts, and economic losses. Unfortunately, digestive health is difficult for farmers to monitor routinely on large farms, due in part to the need to transport faecal samples to a laboratory for compositional analysis. This paper describes a novel means of monitoring digestive health via a low-cost, easy-to-use imaging device based on computer vision. The method involves the rapid capture of multiple visible and near-infrared images of faecal samples. A novel three-dimensional analysis algorithm is then applied to score the condition of each sample objectively from its geometrical features. While there is no universal ground truth against which to compare results, the ordering of the scores matched a qualitative human assessment very closely. The algorithm can also detect the presence of undigested fibres and corn kernels using a deep learning approach; detection rates for corn and fibre in image regions were of the order of 90%. These results indicate the potential to develop this system for on-farm, real-time monitoring of the digestive health of individual animals, allowing early intervention to adjust feeding strategy effectively.
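The deep-learning detection step lends itself to a short illustration. The sketch below is a hypothetical example rather than the authors' actual model: it shows how image patches from a faecal sample could be classified as background, corn kernel, or undigested fibre with a small convolutional network. The class set, patch size, and architecture are all assumptions.

    # Hypothetical patch classifier for corn/fibre detection (illustrative only).
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        """Tiny CNN that labels a 64x64 image patch as background, corn, or fibre."""
        def __init__(self, n_classes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input patches

        def forward(self, x):
            x = self.features(x)
            return self.head(x.flatten(1))

    # Usage: slide 64x64 patches over the image, classify each patch, and flag any
    # patch whose softmax score for "corn" or "fibre" exceeds a chosen threshold.
    model = PatchClassifier()
    patch = torch.rand(1, 3, 64, 64)        # one RGB patch
    probs = torch.softmax(model(patch), dim=1)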


Subjects
Animal Husbandry/instrumentation, Animal Husbandry/methods, Feces, Algorithms, Animal Feed/analysis, Animal Welfare, Animals, Animal Behavior, Calibration, Cattle, Dairying, Deep Learning, Farms, Computer-Assisted Image Processing/methods, Livestock, Software, Near-Infrared Spectroscopy
2.
Gigascience; 8(5), 2019 May 1.
Article in English | MEDLINE | ID: mdl-31127811

ABSTRACT

BACKGROUND: Tracking and predicting the growth performance of plants in different environments is critical for predicting the impact of global climate change. Automated approaches for image capture and analysis have allowed for substantial increases in the throughput of quantitative growth trait measurements compared with manual assessments. Recent work has focused on adopting computer vision and machine learning approaches to improve the accuracy of automated plant phenotyping. Here we present PS-Plant, a low-cost and portable 3D plant phenotyping platform based on an imaging technique novel to plant phenotyping called photometric stereo (PS).

RESULTS: We calibrated PS-Plant to track the model plant Arabidopsis thaliana throughout the day-night (diel) cycle and investigated growth architecture under a variety of conditions to illustrate the dramatic effect of the environment on plant phenotype. We developed bespoke computer vision algorithms and assessed available deep neural network architectures to automate the segmentation of rosettes and individual leaves, and to extract basic and more advanced traits from PS-derived data, including the tracking of 3D plant growth and diel leaf hyponastic movement. Furthermore, we have produced the first PS training data set, which includes 221 manually annotated Arabidopsis rosettes that were used for training and data analysis (1,768 images in total). A full protocol is provided, including all software components and an additional test data set.

CONCLUSIONS: PS-Plant is a powerful new phenotyping tool for plant research that provides robust data at high temporal and spatial resolutions. The system is well suited to small- and large-scale research and will help to accelerate the bridging of the phenotype-to-genotype gap.
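The photometric stereo principle behind a platform such as PS-Plant is compact enough to sketch. The snippet below is a minimal illustration of Lambertian photometric stereo, recovering per-pixel surface normals and albedo by least squares from images taken under several known light directions; the function name, array shapes, and variable names are illustrative assumptions, not the PS-Plant code base.

    # Minimal Lambertian photometric stereo sketch (assumed shapes, not PS-Plant itself).
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (k, h, w) intensities; light_dirs: (k, 3) unit light vectors."""
        k, h, w = images.shape
        I = images.reshape(k, -1)                            # (k, h*w)
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # solve L @ G = I, G = albedo * normal
        albedo = np.linalg.norm(G, axis=0)                   # (h*w,)
        normals = G / np.maximum(albedo, 1e-8)               # unit normals, (3, h*w)
        return normals.reshape(3, h, w), albedo.reshape(h, w)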


Subjects
Deep Learning, Three-Dimensional Imaging/methods, Photometry/methods, Plant Development, Arabidopsis, Three-Dimensional Imaging/economics, Three-Dimensional Imaging/standards, Phenotype, Photometry/economics, Photometry/standards
3.
J Opt Soc Am A Opt Image Sci Vis; 33(3): 314-25, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26974900

ABSTRACT

This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to enhance the human-computer interaction (HCI) experience. The modular approach uses isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which defeat most eye center localization methods. A real-world implementation utilizing these algorithms has been built in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all of them in localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data show the algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard demonstrated outstanding usability and effectiveness in our tests and shows great potential for a wide range of real-world HCI applications.
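The isophote feature can be made concrete with a small sketch. Each pixel on a curved isophote (a curve of constant intensity) implies a centre of curvature, and accumulating votes at those centres highlights circular structures such as the pupil and iris. The implementation below is a hedged illustration of that general idea only; the derivative operators, vote weighting, and thresholds are assumptions and do not reproduce the paper's full pipeline (including the selective oriented gradient filter).

    # Illustrative isophote-based centre voting (assumed weighting, not the paper's pipeline).
    import numpy as np
    from scipy import ndimage

    def isophote_center_votes(gray):
        gray = np.asarray(gray, dtype=float)
        Ix  = ndimage.sobel(gray, axis=1, mode="reflect")
        Iy  = ndimage.sobel(gray, axis=0, mode="reflect")
        Ixx = ndimage.sobel(Ix, axis=1, mode="reflect")
        Iyy = ndimage.sobel(Iy, axis=0, mode="reflect")
        Ixy = ndimage.sobel(Ix, axis=0, mode="reflect")

        denom = Iy**2 * Ixx - 2 * Ix * Ixy * Iy + Ix**2 * Iyy
        denom[np.abs(denom) < 1e-6] = 1e-6
        scale = -(Ix**2 + Iy**2) / denom            # signed distance to the isophote centre
        dx, dy = Ix * scale, Iy * scale             # displacement along the gradient

        votes = np.zeros_like(gray)
        h, w = gray.shape
        ys, xs = np.mgrid[0:h, 0:w]
        cx = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
        cy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
        np.add.at(votes, (cy, cx), np.hypot(Ix, Iy))  # weight each vote by gradient magnitude
        return votes                                   # peak approximates the eye centre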


Subjects
Computers, Eye Movements, Automated Pattern Recognition/methods, Humans, Unsupervised Machine Learning
4.
J Opt Soc Am A Opt Image Sci Vis; 33(3): 333-44, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26974902

ABSTRACT

This paper compares encoded features from both two-dimensional (2D) and three-dimensional (3D) face images in order to achieve automatic gender recognition with high accuracy and robustness. The Fisher vector encoding method is employed to produce 2D, 3D, and fused features with increased discriminative power. For 3D face analysis, a two-source photometric stereo (PS) method is introduced that enables 3D surface reconstructions with accurate detail and desirable efficiency. Moreover, a 2D+3D imaging device, built around the two-source PS method, has been developed that can simultaneously gather color images for 2D evaluation and PS images for 3D analysis. This system inherits the reconstruction accuracy of the standard (three or more light) PS method but simplifies both the reconstruction algorithm and the hardware design by requiring only two light sources. It also offers great potential for facilitating human-computer interaction by being accurate, cheap, efficient, and nonintrusive. Ten types of low-level 2D and 3D features have been extracted and encoded for Fisher vector gender recognition. Evaluations of the Fisher vector encoding method on the FERET, Color FERET, LFW, and FRGCv2 databases yield 97.7%, 98.0%, 92.5%, and 96.7% accuracy, respectively. In addition, 2D and 3D features are compared on a self-collected dataset constructed with the aid of the 2D+3D imaging device in a series of data capture experiments. Across these experiments and evaluations, the Fisher vector encoding method outperforms most state-of-the-art gender recognition methods. It is also observed that 3D features reconstructed by the two-source PS method further boost Fisher vector gender recognition performance, i.e., by up to a 6% increase on the self-collected database.
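Fisher vector encoding is a standard technique, so a brief sketch can clarify the encoding step without claiming to reproduce the paper's feature extraction. The example below is a hedged illustration using scikit-learn's GaussianMixture that computes only the first-order Fisher vector statistics for a set of local descriptors; the descriptor dimensionality, number of Gaussian components, and the downstream classifier are assumptions.

    # Simplified Fisher vector encoding (first-order statistics only; illustrative assumptions).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector_mu(descriptors, gmm):
        """First-order FV statistics for an (n, d) descriptor set and a fitted diagonal GMM."""
        gamma = gmm.predict_proba(descriptors)                 # (n, k) soft assignments
        n = descriptors.shape[0]
        parts = []
        for k in range(gmm.n_components):
            diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
            g_mu = (gamma[:, k, None] * diff).sum(axis=0)
            parts.append(g_mu / (n * np.sqrt(gmm.weights_[k])))
        fv = np.concatenate(parts)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))                 # power normalisation
        return fv / max(np.linalg.norm(fv), 1e-12)             # L2 normalisation

    # Usage: fit the GMM on pooled training descriptors, encode each face, then feed
    # the resulting vectors to a linear classifier (e.g. a linear SVM) for gender prediction.
    gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    gmm.fit(np.random.rand(1000, 64))                          # placeholder descriptors
    fv = fisher_vector_mu(np.random.rand(200, 64), gmm)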


Subjects
Face, Three-Dimensional Imaging, Automated Pattern Recognition/methods, Sex Factors, Factual Databases, Female, Humans, Male
5.
J Opt Soc Am A Opt Image Sci Vis; 30(3): 278-86, 2013 Mar 1.
Article in English | MEDLINE | ID: mdl-23456103

ABSTRACT

This paper proposes and describes an implementation of a photometric stereo-based technique for in vivo assessment of three-dimensional (3D) skin topography in the presence of interreflections. The proposed method illuminates the skin with red, green, and blue colored lights and uses the resulting variation in surface gradients to mitigate the effects of interreflections. Experiments were carried out on Caucasian, Asian, and African American subjects to demonstrate the accuracy of our method and to validate the measurements produced by our system. Our method produced a significant improvement in 3D surface reconstruction for all three skin types. The results also illustrate the differences in recovered skin topography due to the nondiffuse bidirectional reflectance distribution function (BRDF) under each color of illumination used; these differences concur with the existing multispectral BRDF data available for skin.
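Single-shot colour photometric stereo reduces to a small linear system per pixel, so a sketch helps make the idea concrete. In the example below, each of the three colour channels is treated as an image lit by one of the red, green, or blue sources, and the per-pixel surface normal is recovered by inverting the 3x3 light matrix. The light directions are placeholders, and channel cross-talk and the paper's interreflection handling are deliberately omitted.

    # Sketch of single-shot RGB photometric stereo (placeholder light directions).
    import numpy as np

    # Assumed unit directions of the red, green, and blue light sources.
    L = np.array([[ 0.50,  0.000, 0.866],
                  [-0.25,  0.433, 0.866],
                  [-0.25, -0.433, 0.866]])

    def colour_ps_normals(rgb):
        """rgb: (h, w, 3) image; returns per-pixel unit surface normals (h, w, 3)."""
        rgb = np.asarray(rgb, dtype=float)
        h, w, _ = rgb.shape
        I = rgb.reshape(-1, 3).T                    # (3, h*w), one row per light/channel
        G = np.linalg.solve(L, I)                   # albedo-scaled normals, (3, h*w)
        G /= np.maximum(np.linalg.norm(G, axis=0), 1e-8)
        return G.T.reshape(h, w, 3)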


Subjects
Three-Dimensional Imaging/methods, Optical Phenomena, Photometry/methods, Skin/cytology, Humans, Skin Aging/ethnology