Results 1 - 7 of 7
5.
Front Plant Sci ; 13: 1003243, 2022.
Article in English | MEDLINE | ID: mdl-36247590

ABSTRACT

Precision spraying of liquid fertilizer and pesticide onto individual plants is an important task for agricultural robots in precision agriculture. By reducing the amount of chemicals sprayed, it offers a more economical and eco-friendly alternative to conventional indiscriminate spraying. The prerequisite for precision spraying is detecting and tracking each plant. Conventional detection or segmentation methods detect all plants in the image captured from the robotic platform without knowing each plant's ID. To spray each plant exactly once, every plant must be tracked in addition to being detected. In this paper, we present LettuceTrack, a novel Multiple Object Tracking (MOT) method that simultaneously detects and tracks lettuces. Once the tracking method assigns each plant an ID, the robot knows whether a plant has been sprayed before and therefore sprays only plants that have not yet been sprayed. The proposed method adopts YOLO-V5 to detect the lettuces, and novel plant feature extraction and data association algorithms are introduced to track all plants effectively. The proposed method can recover a plant's ID even after the plant has moved out of the camera's field of view, a case in which existing MOT methods usually fail and assign a new ID. Experiments demonstrate the effectiveness of the proposed method, and a comparison with four state-of-the-art MOT methods shows its superior performance in the lettuce tracking application as well as its limitations. Though the proposed method is tested with lettuce, it can potentially be applied to other vegetables such as broccoli or sugar beet.
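The spray-each-plant-once logic described above can be sketched as tracking-by-detection with gallery-based re-identification. This is an illustrative sketch, not the paper's actual algorithm: the class name, the cosine-similarity matching, and the threshold are all assumptions standing in for LettuceTrack's feature extraction and data association steps.

```python
import numpy as np

class PlantTracker:
    """Illustrative tracker: matches detections to a feature gallery
    so a plant re-entering the field of view recovers its old ID."""

    def __init__(self, match_threshold=0.8):
        self.next_id = 0
        self.gallery = {}        # plant ID -> last stored feature vector
        self.sprayed = set()     # IDs of plants already sprayed
        self.match_threshold = match_threshold

    def _similarity(self, a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def update(self, features):
        """Assign an ID to each detected plant's feature vector.

        Each detection is matched against the whole gallery, so an ID
        persists even if the plant was absent from intermediate frames.
        """
        ids = []
        for f in features:
            best_id, best_sim = None, self.match_threshold
            for pid, g in self.gallery.items():
                sim = self._similarity(f, g)
                if sim > best_sim:
                    best_id, best_sim = pid, sim
            if best_id is None:          # no match: new plant
                best_id = self.next_id
                self.next_id += 1
            self.gallery[best_id] = f    # refresh stored feature
            ids.append(best_id)
        return ids

    def plants_to_spray(self, ids):
        # Return only IDs not sprayed before, then mark them sprayed.
        todo = [i for i in ids if i not in self.sprayed]
        self.sprayed.update(todo)
        return todo
```

In use, a plant seen in frame 1, sprayed, and seen again in frame 10 matches its stored feature, keeps its ID, and is excluded from the spray list, which is the behaviour the abstract attributes to its re-identification step.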

6.
Animals (Basel) ; 11(11)2021 Oct 22.
Article in English | MEDLINE | ID: mdl-34827766

ABSTRACT

The growing world population has increased the demand for animal-sourced protein. However, animal farming productivity faces challenges from traditional farming practices, socioeconomic status, and climate change. In recent years, smart sensors, big data, and deep learning have been applied to animal welfare measurement and livestock farming applications, including behaviour recognition and health monitoring. To facilitate research in this area, this review summarises and analyses the main techniques used in smart livestock farming, focusing on those related to cattle lameness detection and behaviour recognition. In this study, more than 100 relevant papers on cattle lameness detection and behaviour recognition have been evaluated and discussed. Based on a review and comparison of recent technologies and methods, we anticipate that intelligent perception for cattle behaviour and welfare monitoring will develop towards standardisation, larger scale, and greater intelligence, combined with Internet of Things (IoT) and deep learning technologies. In addition, key challenges and opportunities for future research are highlighted and discussed.

7.
Sensors (Basel) ; 18(6)2018 Jun 08.
Article in English | MEDLINE | ID: mdl-29890686

ABSTRACT

Hyperspectral line-scan cameras are increasingly being deployed on mobile platforms operating in unstructured environments. To generate geometrically accurate hyperspectral composites, the intrinsic parameters of these cameras must be resolved. This article describes a method for determining the intrinsic parameters of a hyperspectral line-scan camera. The proposed method is based on a cross-ratio invariant calibration routine and is able to estimate the focal length, principal point, and radial distortion parameters in a hyperspectral line-scan camera. Compared to previous methods that use similar calibration targets, our approach extends the camera model to include radial distortion. It is able to utilize calibration data recorded from multiple camera view angles by optimizing the re-projection error of all calibration data jointly. The proposed method also includes an additional signal processing step that automatically detects calibration points in hyperspectral imagery of the calibration target. These contributions result in accurate estimates of the intrinsic parameters with minimal supervision. The proposed method is validated through comprehensive simulation and demonstrated on real hyperspectral line-scans.
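The cross-ratio invariant this calibration routine relies on can be illustrated in a few lines: for four collinear points, the cross-ratio is unchanged by any projective (perspective) mapping, which is what lets known target geometry constrain the camera parameters. The point coordinates and the 1-D projective map below are illustrative assumptions, not the paper's calibration target or camera model.

```python
def cross_ratio(a, b, c, d):
    # Cross-ratio of four collinear points given as 1-D coordinates.
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x, f=35.0, tilt=0.4):
    # A toy 1-D projective map x -> f*x / (tilt*x + 1), standing in
    # for the camera's perspective projection of the target line.
    return f * x / (tilt * x + 1.0)

pts = [0.0, 1.0, 2.0, 4.0]          # illustrative target points
before = cross_ratio(*pts)           # 1.5
after = cross_ratio(*[project(x) for x in pts])
# before == after up to floating-point error: the cross-ratio
# survives projection, so it can anchor the calibration.
```

Because radial distortion is *not* projective, deviations of the observed cross-ratio from the known target value carry information about the distortion parameters, which is consistent with the abstract's extension of the camera model.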
