Results 1 - 3 of 3
1.
Animal ; 18(3): 101079, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38377806

ABSTRACT

Biometric methods, which currently identify humans, can potentially identify dairy cows. Given that animal movements cannot be easily controlled, identification accuracy and system robustness are challenging when deploying an animal biometric recognition system on a real farm. Our proposed method performs multiple-cow face detection and face classification from videos by adapting recent state-of-the-art deep-learning methods. As part of this study, a system was designed and installed four meters above a feeding zone at the Volcani Institute's dairy farm. Two datasets were acquired and annotated, one for facial detection and the second for facial classification of 77 cows. For facial detection, we achieved a mean average precision of 97.8% (at an Intersection-over-Union threshold of 0.5) using the YOLOv5 algorithm, and a facial-classification accuracy of 96.3% using a Vision Transformer model with a loss function borrowed from human facial recognition. Our combined system can process a video frame containing 10 cows' faces, localize the faces, and correctly classify their identities in less than 20 ms per frame. Thus, our system can process video at up to 50 frames per second in real time at a dairy farm. Our method efficiently performs real-time facial detection and recognition of multiple cow faces using deep neural networks, achieving high precision in real-time operation. These qualities can make the proposed system a valuable tool for automatic biometric cow recognition on farms.
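The detection metric reported above, mean average precision at an Intersection-over-Union threshold of 0.5, rests on the standard IoU overlap between a predicted and a ground-truth box, and the 20 ms per-frame budget is what yields the 50 frames-per-second real-time figure. A minimal sketch of both calculations (function and variable names are illustrative, not taken from the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred, gt, threshold=0.5):
    """Under mAP@0.5, a detection counts as correct when IoU >= 0.5."""
    return iou(pred, gt) >= threshold

# A 20 ms per-frame processing time implies a real-time ceiling of 50 fps:
MAX_FPS = 1000 // 20  # = 50 frames per second
```

Averaging precision over recall levels (and over cows' face boxes matched this way) gives the 97.8% figure reported for YOLOv5.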


Subject(s)
Biometric Identification , Facial Recognition , Female , Cattle , Humans , Animals , Farms , Biometric Identification/methods , Neural Networks, Computer , Algorithms , Dairying/methods
2.
Animal ; 14(12): 2628-2634, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32662766

ABSTRACT

Manually counting hens in battery cages on large commercial poultry farms is a challenging task: time-consuming and often inaccurate. Therefore, the aim of this study was to develop a machine-vision system that automatically counts the number of hens in battery cages. Automatically counting hens can help a regulatory agency or inspecting officer to estimate the number of living birds in a cage, and thus the animal density, to ensure conformance with government regulations or quality-certification requirements. The test hen house was 87 m long, containing 37 battery cages stacked in 6-story-high rows on both sides of the structure. Each cage housed 18 to 30 hens, for a total of approximately 11 000 laying hens. A feeder moves along the cages. A camera was installed on an arm connected to the feeder, which was specifically developed for this purpose. A wide-angle lens was used in order to frame an entire cage in the field of view. Detection and tracking algorithms were designed to detect hens in cages: the recorded videos were first processed using a convolutional neural network (CNN) object-detection algorithm, Faster R-CNN, with multi-angular, view-shifted images as input. After the initial detection, each hen's relative location along the feeder was tracked and saved using a tracking algorithm, and information was added with every additional frame as the camera arm moved along the cages. The algorithm's count was compared with that of a human observer (the 'gold standard'). On a validation dataset of about 2 000 images, the system achieved 89.6% accuracy at the cage level, with a mean absolute error of 2.5 hens per cage. These results indicate that the model developed in this study is practicable for obtaining fairly good estimates of the number of laying hens in battery cages.
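The counting step described above hinges on tracking: detections from successive frames must be merged by their relative position along the feeder so that each hen is counted once, and the result is validated with a cage-level mean absolute error. A simplified stand-in for that logic (positions as fractions of the cage length already shifted into a common coordinate frame; the merge rule and names are illustrative, not the paper's actual tracker):

```python
def count_hens(frames, merge_dist=0.05):
    """Count unique hens across video frames.

    `frames` is a list of per-frame detection lists; each detection is a
    position along the feeder, normalized to the cage length. A detection
    within `merge_dist` of an existing track is treated as the same hen;
    otherwise it opens a new track. Returns the number of tracks, i.e.
    the estimated hen count for the cage.
    """
    tracks = []
    for detections in frames:
        for pos in detections:
            for i, track_pos in enumerate(tracks):
                if abs(track_pos - pos) <= merge_dist:
                    # Refine the track with the new observation.
                    tracks[i] = (track_pos + pos) / 2
                    break
            else:
                tracks.append(pos)
    return len(tracks)

def mean_absolute_error(pred_counts, true_counts):
    """Cage-level MAE, the validation metric reported in the study."""
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(pred_counts)
```

With this scheme, a hen seen in several overlapping frames contributes a single track, while a hen that appears only once (e.g. at the cage edge) still opens its own track.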


Subject(s)
Housing, Animal , Oviposition , Animals , Chickens
3.
Appl Opt ; 38(20): 4325-32, 1999 Jul 10.
Article in English | MEDLINE | ID: mdl-18323918

ABSTRACT

Direct methods for the restoration of images blurred by motion are analyzed and compared. The term direct means that the methods considered are performed in a one-step fashion, without any iterative technique. The blurring point-spread function is assumed to be unknown, so the image-restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods, such as the homomorphic and cepstral techniques, are studied and compared for a variety of motion types. Various criteria, such as quality of restoration, sensitivity to noise, and computational requirements, are considered. The recently developed method appears to show some improvement over the older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind-deconvolution method to use. In addition, some improvements on the methods are suggested.
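A key fact exploited by the homomorphic and cepstral techniques mentioned above is that uniform linear motion of extent L has a sinc-shaped transfer function whose magnitude vanishes at frequency bins k = N/L, 2N/L, ... in an N-point spectrum; locating those spectral zeros (directly in the log-magnitude spectrum, or as a peak in the cepstrum) recovers the blur extent without iteration. A minimal pure-Python sketch of that property, using a naive DFT (this illustrates the underlying identity, not the paper's restoration algorithms):

```python
import cmath

def dft_magnitude(signal):
    """Magnitude of the discrete Fourier transform (naive O(N^2) DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def motion_blur_psf(length, n):
    """Uniform linear-motion PSF: a box of `length` equal taps, zero-padded to n samples."""
    return [1.0 / length] * length + [0.0] * (n - length)

# For a blur of extent L in an N-point spectrum, the magnitude is zero
# at bins k = N/L, 2N/L, ...; finding the first zero reveals L directly,
# which is the one-step (non-iterative) estimation these methods rely on.
```

For example, with N = 64 and L = 8 the first spectral zero falls at bin k = 8, so the blur extent can be read off as L = N / k.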
