ABSTRACT
In this paper, we present the development of a low-cost distributed computing pipeline for cotton plant phenotyping using Raspberry Pi, Hadoop, and deep learning. Specifically, we use a cluster of several Raspberry Pis in a primary-replica distributed architecture built on the Apache Hadoop ecosystem, together with a pre-trained Tiny-YOLOv4 model from our past work for cotton bloom detection. We feed cotton image data collected from a research field in Tifton, GA, into our cluster's distributed file system for robust file access and distributed, parallel processing. We then submit job requests to the cluster from our client to process the cotton image data in a distributed, parallel fashion, from pre-processing to bloom detection and spatio-temporal map creation. Additionally, we compare the performance of our four-node cluster with centralized, one-, two-, and three-node configurations. This work is the first to develop a distributed computing pipeline for high-throughput cotton phenotyping in field-based agriculture.
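The abstract above outlines a MapReduce-style workflow over images stored in HDFS. As a minimal sketch (not the authors' actual code), a Hadoop Streaming mapper for the bloom-detection step might look like the following; the script name bloom_mapper.py and the count_blooms placeholder are hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming mapper: reads one HDFS image path per line
# from stdin and emits "<image_path>\t<bloom_count>" key-value pairs that a
# reducer can aggregate into per-plot or per-date totals.
import sys


def count_blooms(image_path: str) -> int:
    """Placeholder for the pre-trained Tiny-YOLOv4 detector; in the real
    pipeline this would run inference on the image and return the number
    of detected cotton blooms."""
    return 0  # stub for illustration only


def main() -> None:
    for line in sys.stdin:
        path = line.strip()
        if path:
            print(f"{path}\t{count_blooms(path)}")


if __name__ == "__main__":
    main()
```

Such a mapper would be submitted to the cluster with the standard Hadoop Streaming jar, e.g. `hadoop jar hadoop-streaming.jar -files bloom_mapper.py -mapper bloom_mapper.py -input /cotton/paths.txt -output /cotton/bloom_counts` (paths illustrative).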
Subjects
Gossypium, Phenotype, Humans, Electronic Data Processing
ABSTRACT
This paper develops an approach to perform binary semantic segmentation on Arabidopsis thaliana root images for plant root phenotyping, using a conditional generative adversarial network (cGAN) to address pixel-wise class imbalance. Specifically, we use Pix2PixHD, an image-to-image translation cGAN, to generate realistic, high-resolution images of plant roots and annotations similar to the original dataset. We then use the trained cGAN to triple the size of our original root dataset and thereby reduce pixel-wise class imbalance. Both the original and generated datasets are fed into SegNet to semantically segment the root pixels from the background. We further postprocess the segmentation results to close small apparent gaps along the main and lateral roots. Lastly, we compare our binary semantic segmentation approach with the state of the art in root segmentation. Our efforts demonstrate that the cGAN can produce realistic, high-resolution root images and reduce pixel-wise class imbalance, and that our segmentation model yields high testing accuracy (over 99%), low cross-entropy error (under 2%), a high Dice score (near 0.80), and low inference time for near real-time processing.
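Since the abstract describes closing small gaps along the segmented roots and reports a Dice score, here is a minimal sketch of such a postprocessing and evaluation step, assuming OpenCV and NumPy; the kernel size is an illustrative choice, not a value from the paper:

```python
# Morphological closing to bridge small gaps in a binary root mask, plus a
# Dice score for comparing a predicted mask against ground truth.
import cv2
import numpy as np


def close_root_gaps(mask: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Apply morphological closing to a binary mask (0 or 255) so that
    small breaks along main and lateral roots are filled in."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)


def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|P∩T| / (|P| + |T|) over foreground (root) pixels."""
    p, t = pred > 0, truth > 0
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
```

A larger kernel closes bigger gaps but risks merging nearby lateral roots, which is presumably why such closing is applied only as a postprocessing step after segmentation.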
Subjects
Arabidopsis, Biological Phenomena, Image Processing, Computer-Assisted/methods, Semantics, Plant Roots
ABSTRACT
The use of deep neural networks (DNNs) in plant phenotyping has recently received considerable attention. By using DNNs, valuable insights into plant traits can be readily achieved. While these networks have made considerable advances in plant phenotyping, the results are processed too slowly to allow for real-time decision-making. Therefore, being able to perform plant phenotyping computations in real time has become a critical part of precision agriculture and agricultural informatics. In this work, we utilize state-of-the-art object detection networks to accurately detect, count, and localize plant leaves in real time. Our work includes the creation of an annotated dataset of Arabidopsis plants captured with a Canon Rebel XS camera. These images and annotations have been compiled and made publicly available. This dataset is then fed into a Tiny-YOLOv3 network for training. The Tiny-YOLOv3 network converges and accurately performs real-time localization and counting of the leaves. We also create a simple robotics platform based on an Android phone and an iRobot Create 2 to demonstrate the real-time capabilities of the network in the greenhouse. Additionally, a performance comparison is conducted between Tiny-YOLOv3 and Faster R-CNN. Unlike Tiny-YOLOv3, which performs localization and identification in a single pass of one network, Faster R-CNN requires two steps to do localization and identification. While Tiny-YOLOv3 improves inference time, F1 score, and false positive rate (FPR) compared to Faster R-CNN, other measures such as difference in count (DiC) and average precision (AP) are worse. Specifically, for our implementation of Tiny-YOLOv3, the inference time is under 0.01 s, the F1 score is over 0.94, and the FPR is around 24%. Lastly, we implement transfer learning with Tiny-YOLOv3, using a model trained only on smaller leaves to detect larger leaves. The main contributions of the paper are the creation of the dataset (shared with the research community) and the trained Tiny-YOLOv3 network for leaf localization and counting.
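To make the quoted evaluation measures concrete, the following self-contained sketch computes F1 score, FPR, and difference in count (DiC) from detection counts; the counts in the example are made up, and the FPR convention shown (false positives over all predicted boxes) is one common choice for detectors that may differ from the paper's exact definition:

```python
# Detection-counting metrics of the kind used to compare Tiny-YOLOv3
# and Faster R-CNN on leaf detection.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall over detected leaves."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)


def false_positive_rate(fp: int, tp: int) -> float:
    """FP as a fraction of all predicted detections (one common convention,
    since true negatives are undefined for box detectors)."""
    return fp / (tp + fp) if tp + fp else 0.0


def difference_in_count(predicted: int, actual: int) -> int:
    """DiC: signed difference between predicted and true leaf counts."""
    return predicted - actual


# Illustrative counts only, not the paper's data:
print(round(f1_score(tp=47, fp=15, fn=3), 3))          # 0.839
print(round(false_positive_rate(fp=15, tp=47), 3))     # 0.242
print(difference_in_count(predicted=50, actual=52))    # -2
```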