1.
Nat Commun ; 13(1): 3559, 2022 06 21.
Article in English | MEDLINE | ID: mdl-35729171

ABSTRACT

Robotics and autonomous systems are reshaping the world, changing healthcare, food production and biodiversity management. While they will play a fundamental role in delivering the UN Sustainable Development Goals, the associated opportunities and threats are yet to be considered systematically. We report on a horizon scan evaluating the impact of robotics and autonomous systems on all Sustainable Development Goals, involving 102 experts from around the world. Robotics and autonomous systems are likely to transform how the Sustainable Development Goals are achieved, through replacing and supporting human activities, fostering innovation, enhancing remote access and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions and reducing freedom and privacy through inadequate governance. Although predicting the future impacts of robotics and autonomous systems on the Sustainable Development Goals is difficult, thoroughly examining technological developments early is essential to prevent unintended detrimental consequences. Additionally, robotics and autonomous systems should be considered explicitly when developing future iterations of the Sustainable Development Goals to avoid reversing progress or exacerbating inequalities.


Subject(s)
Robotics , Sustainable Development , Biodiversity , Conservation of Natural Resources , Goals , Humans
2.
Sci Rep ; 11(1): 23279, 2021 12 02.
Article in English | MEDLINE | ID: mdl-34857791

ABSTRACT

Recently, several convolutional neural networks have been proposed, not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large data size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than in 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In our experiments we test our models qualitatively, quantitatively and behaviourally, using prespecified segmentations. We demonstrate our approach on time-series tomograms, which are typically undersampled to allow more frequent capture and therefore pose a particularly challenging segmentation problem. Finally, our dataset and code are publicly available.
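
To make the refinement idea concrete, here is a minimal per-voxel Viterbi sketch in Python (a sketch of the general technique, not the authors' released code): the 3D CNN's per-time-point class confidences for one voxel are smoothed with an assumed "sticky" transition matrix. The class count, transition probabilities and prior are all illustrative.

```python
import numpy as np

def viterbi_refine(confidences, transition, prior):
    """Refine one voxel's label sequence over time.

    confidences: (T, K) per-time-point class probabilities from a 3D CNN.
    transition:  (K, K) assumed label transition probabilities between volumes.
    prior:       (K,)  initial class prior.
    Returns the most likely label sequence of length T.
    """
    T, K = confidences.shape
    log_conf = np.log(confidences + 1e-12)
    log_trans = np.log(transition + 1e-12)
    score = np.log(prior + 1e-12) + log_conf[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # (K_prev, K_next)
        back[t] = cand.argmax(axis=0)          # best predecessor per class
        score = cand.max(axis=0) + log_conf[t]
    labels = np.zeros(T, dtype=int)
    labels[-1] = score.argmax()
    for t in range(T - 1, 0, -1):              # backtrace
        labels[t - 1] = back[t, labels[t]]
    return labels

# Example: 5 time points, 2 classes; sticky transitions favour temporal coherence.
conf = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7], [0.8, 0.2], [0.7, 0.3]])
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
print(viterbi_refine(conf, trans, prior=np.array([0.5, 0.5])))
```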

4.
F1000Res ; 10: 324, 2021.
Article in English | MEDLINE | ID: mdl-36873457

ABSTRACT

Artificial Intelligence (AI) is increasingly used within plant science, yet it is far from being routinely and effectively implemented in this domain. Particularly relevant to the development of novel food and agricultural technologies is the development of validated, meaningful and usable ways to integrate, compare and visualise large, multi-dimensional datasets from different sources and scientific approaches. After a brief summary of the reasons for the interest in data science and AI within plant science, the paper identifies and discusses eight key challenges in data management that must be addressed to further unlock the potential of AI in crop and agronomic research, particularly the application of Machine Learning (ML), which holds much promise for this domain.

6.
Sensors (Basel) ; 20(11)2020 Jun 11.
Article in English | MEDLINE | ID: mdl-32545168

ABSTRACT

High-throughput plant phenotyping in controlled environments (growth chambers and glasshouses) is often delivered via large, expensive installations, leading to limited access and the increased relevance of "affordable phenotyping" solutions. We present two robot vectors for automated plant phenotyping under controlled conditions. Using 3D-printed components and readily available hardware and electronic components, these designs are inexpensive, flexible and easily modified to suit multiple tasks. We present a design for a thermal imaging robot for high-precision time-lapse imaging of canopies and a Plate Imager for high-throughput phenotyping of roots and shoots of plants grown on media plates. Phenotyping in controlled conditions requires multi-position spatial and temporal monitoring of environmental conditions. We also present a low-cost sensor platform for environmental monitoring based on inexpensive sensors, microcontrollers and internet-of-things (IoT) protocols.
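
As a rough sketch of the kind of IoT reporting loop such a sensor platform might run (not the authors' implementation), the snippet below publishes readings over MQTT using paho-mqtt. The broker hostname, topic and read_sensors() stub are placeholders, and the client construction follows the paho-mqtt 1.x API.

```python
import json
import time
import random

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client API

BROKER = "broker.example.org"    # placeholder broker hostname
TOPIC = "glasshouse/bench3/env"  # placeholder topic for one sensor node

def read_sensors():
    """Placeholder for real driver calls (e.g. an I2C temperature/RH chip)."""
    return {"temp_c": 22.0 + random.uniform(-0.5, 0.5),
            "rh_pct": 55.0 + random.uniform(-2.0, 2.0)}

client = mqtt.Client()           # paho-mqtt 2.x instead requires a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()              # handle network traffic in a background thread

while True:
    payload = json.dumps({"ts": time.time(), **read_sensors()})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(60)               # one reading per minute
```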


Subject(s)
Environmental Monitoring , Plants , Phenotype
7.
Article in English | MEDLINE | ID: mdl-32406835

ABSTRACT

We address the complex problem of reliably segmenting root structure from soil in X-ray Computed Tomography (CT) images. We utilise a deep learning approach, and propose a state-of-the-art multi-resolution architecture based on encoder-decoders. While previous work on encoder-decoders implies the use of multiple resolutions simply by downsampling and upsampling images, we make this process explicit, with branches of the network tasked separately with obtaining local high-resolution segmentation and wider low-resolution contextual information. The complete network is a memory-efficient implementation that is still able to resolve small root detail in large volumetric images. We compare against a number of different encoder-decoder based architectures from the literature, as well as a popular existing image analysis tool designed for root CT segmentation. We show qualitatively and quantitatively that a multi-resolution approach offers substantial accuracy improvements over both a small receptive field in a deep network and a larger receptive field in a shallower network. We then further improve performance using an incremental learning approach, in which failures in the original network are used to generate harder negative training examples. Our proposed method requires no user interaction, is fully automatic, and identifies large and fine root material throughout the whole volume.
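
A toy PyTorch sketch of the explicit multi-resolution idea (not the paper's architecture): one branch works at full resolution for local detail, a second branch sees a downsampled view for wider context, and the two are fused before classification. All channel widths and the 4x downsampling factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSegNet(nn.Module):
    """Toy two-branch segmenter: a high-resolution local branch plus an
    explicitly downsampled context branch, fused before the classifier."""
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1),
                                 nn.BatchNorm2d(co), nn.ReLU(inplace=True))
        self.local_branch = nn.Sequential(block(in_ch, width), block(width, width))
        self.context_branch = nn.Sequential(block(in_ch, width), block(width, width))
        self.head = nn.Conv2d(2 * width, n_classes, 1)

    def forward(self, x):
        local = self.local_branch(x)
        # Context branch sees a 4x-downsampled view, then is upsampled to match.
        ctx = self.context_branch(F.avg_pool2d(x, 4))
        ctx = F.interpolate(ctx, size=local.shape[2:], mode="bilinear",
                            align_corners=False)
        return self.head(torch.cat([local, ctx], dim=1))

logits = TwoBranchSegNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```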

8.
Plant Methods ; 16: 29, 2020.
Article in English | MEDLINE | ID: mdl-32165909

ABSTRACT

BACKGROUND: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in plant appearance, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling, and these may fail to generalize over different fields and environments. RESULTS: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model improved the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU. CONCLUSION: The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management due to its high-speed inference. We recommend using synthetic images together with empirical field images during the training stage to improve model performance.
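
For readers unfamiliar with the anchor-box step, here is a self-contained sketch of k-means anchor clustering using the customary 1 - IoU distance. The box data are synthetic and the cluster count of six follows tiny-YOLOv3 convention; this is an illustration of the technique, not the authors' exact code.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, as if all boxes shared a top-left corner."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(boxes, anchors).argmax(axis=1)  # nearest = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]    # sort by area

# boxes: (N, 2) array of ground-truth (width, height) pairs in pixels.
boxes = np.abs(np.random.default_rng(1).normal([60, 90], [20, 30], (500, 2)))
print(kmeans_anchors(boxes, k=6).round(1))
```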

9.
Mach Vis Appl ; 31(1): 2, 2020.
Article in English | MEDLINE | ID: mdl-31894176

ABSTRACT

There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of in-field deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, thus making them difficult to deploy in-field on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable in-field and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the number of model parameters and the weight matrices of these very deep CNN-based models. Using our combined method (separable convolution and SVD) reduced the weight matrix by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
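
The SVD half of the compression can be illustrated in a few lines: a dense weight matrix is factored into two thin matrices, trading a small reconstruction error for a large parameter reduction. The matrix below is random, so it compresses poorly; trained weight matrices typically have much more favourable low-rank structure, and the rank here is arbitrary.

```python
import numpy as np

def svd_compress(W, rank):
    """Factor a dense weight matrix W (m x n) into two thin matrices whose
    product approximates W, cutting parameters from m*n to rank*(m+n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # (m, rank), singular values folded in
    B = Vt[:rank, :]                    # (rank, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))
A, B = svd_compress(W, rank=32)
orig, comp = W.size, A.size + B.size
print(f"params: {orig} -> {comp} ({100 * (1 - comp / orig):.0f}% reduction)")
print("relative reconstruction error:",
      np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```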

10.
IEEE/ACM Trans Comput Biol Bioinform ; 17(6): 1907-1917, 2020.
Article in English | MEDLINE | ID: mdl-31027044

ABSTRACT

Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status, which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to reduce the bottleneck associated with phenotyping of architectural traits. The pipeline provides a fully automated response to photometric data acquisition and the recovery of three-dimensional (3D) models of plants without dependence on botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC) consisting of a camera-mounted robot arm plus a combined software interface and a novel surface reconstruction algorithm is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high-throughput phenotyping, improving on current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.


Subject(s)
Imaging, Three-Dimensional/methods , Models, Biological , Plant Shoots , Algorithms , Computational Biology , Phenotype , Plant Shoots/anatomy & histology , Plant Shoots/classification , Plant Shoots/physiology , Plants/anatomy & histology , Plants/classification , Software , Surface Properties
11.
Front Plant Sci ; 10: 1516, 2019.
Article in English | MEDLINE | ID: mdl-31850020

ABSTRACT

Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. Our system first predicts age group ('young' and 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy on predicting ages of storage roots, and 86% and 71% overall percentage agreement on counting 'old' and 'young' storage roots respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
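
A minimal sketch of a direct image-to-count regressor of the general kind described (architecture sizes assumed, not the authors' model, and the conditional-GAN data-synthesis half omitted): convolutional features are pooled globally and mapped to a single non-negative count, trained here with a simple MSE loss.

```python
import torch
import torch.nn as nn

class RootCounter(nn.Module):
    """Toy direct image-to-count regressor: convolutional features,
    global average pooling, then one non-negative count output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        # softplus keeps predicted counts >= 0
        return nn.functional.softplus(self.head(z)).squeeze(1)

model = RootCounter()
images = torch.randn(4, 3, 256, 256)   # stand-in for real + GAN-synthesised images
counts = torch.tensor([3., 5., 2., 7.])
loss = nn.functional.mse_loss(model(images), counts)
loss.backward()
print(float(loss))
```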

12.
Gigascience ; 8(11)2019 11 01.
Article in English | MEDLINE | ID: mdl-31702012

ABSTRACT

BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress, such as high temperature and drought, on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge: root images contain complicated structures and exhibit variations in size, background, occlusion, clutter and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Plant Roots/anatomy & histology , Plant Roots/growth & development
13.
Plant Methods ; 15: 131, 2019.
Article in English | MEDLINE | ID: mdl-31728153

ABSTRACT

BACKGROUND: Root and tuber crops are becoming more important as a source of carbohydrates, second only to cereals. Despite their commercial impact, there are significant knowledge gaps about the environmental and inherent regulation of storage root (SR) differentiation, due in part to the innate problems of studying storage roots and the lack of a suitable model system for monitoring storage root growth. The research presented here aimed to develop a reliable, low-cost, effective system that enables the study of the factors influencing cassava storage root initiation and development. RESULTS: We explored simple, low-cost systems for the study of storage root biology. The aeroponics system described here is ideal for real-time monitoring of storage root development (SRD), and this was further validated using hormone studies. Our aeroponics-based auxin studies revealed that storage root initiation and development are adaptive responses, which are significantly enhanced by exogenous auxin supply. Field and histological experiments were also conducted to confirm the auxin effect found in the aeroponics system. We also developed a simple digital imaging platform to quantify storage root growth and development traits. Correlation analysis confirmed that image-based estimation can be a surrogate for manual root phenotyping for several key traits. CONCLUSIONS: The aeroponic system developed in this study is an effective tool for examining the root architecture of cassava during early SRD. The aeroponic system also provided novel insights into storage root formation by activating the auxin-dependent proliferation of secondary xylem parenchyma cells to induce initial root thickening and bulking. The developed system can be of direct benefit to molecular biologists, breeders, and physiologists, allowing them to screen germplasm for root traits that correlate with improved economic traits.
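
The correlation step can be reproduced with a few lines of SciPy; the paired measurements below are hypothetical stand-ins for manual versus image-based estimates of a single root trait.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements of one trait (e.g. storage root diameter, mm):
# manual calliper readings vs. estimates from a digital imaging platform.
manual = np.array([12.1, 15.4, 9.8, 20.2, 17.6, 11.3, 14.9, 18.8])
image_based = np.array([11.7, 15.9, 10.4, 19.5, 18.1, 10.9, 14.2, 19.3])

r, p = pearsonr(manual, image_based)
print(f"Pearson r = {r:.3f} (p = {p:.4f})")
# A high r supports using image-based estimates as a surrogate for manual scoring.
```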

14.
Plant Physiol ; 181(1): 28-42, 2019 09.
Article in English | MEDLINE | ID: mdl-31331997

ABSTRACT

Understanding the relationships between local environmental conditions and plant structure and function is critical both for fundamental science and for improving the performance of crops in field settings. Wind-induced plant motion is important in most agricultural systems, yet the complexity of the field environment means it has remained understudied. Despite the ready availability of image sequences showing plant motion, the cultivation of crop plants in dense field stands makes it difficult to detect features and characterize their general movement traits. Here, we present a robust method for characterizing motion in field-grown wheat plants (Triticum aestivum) from time-ordered sequences of red, green, and blue images. A series of crops and augmentations was applied to a dataset of 290 collected and annotated images of ear tips to increase variation and resolution when training a convolutional neural network. This approach enables wheat ears to be detected in the field without the need for camera calibration or a fixed imaging position. Videos of wheat plants moving in the wind were also collected and split into their component frames. Ear tips were detected using the trained network, then tracked between frames using a probabilistic tracking algorithm to approximate movement. These data can be used to characterize key movement traits, such as periodicity, and to obtain more detailed static plant properties to assess plant structure and function in the field. Automated data extraction may be possible for informing lodging models, breeding programs, and linking movement properties to canopy light distributions and dynamic light fluctuation.
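
Once ear-tip tracks are available, a movement trait such as periodicity can be estimated from the displacement signal's spectrum. A small sketch follows, assuming a fixed frame rate and a single tracked coordinate; the dominant_frequency helper and the synthetic 2 Hz sway are illustrative only.

```python
import numpy as np

def dominant_frequency(track, fps):
    """Estimate the dominant sway frequency (Hz) of one tracked ear tip
    from its horizontal displacement over time."""
    x = np.asarray(track, dtype=float)
    x = x - x.mean()                         # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[spectrum[1:].argmax() + 1]  # skip the DC bin

# Synthetic example: a 2 Hz sway sampled at 30 fps with added noise.
fps = 30
t = np.arange(0, 10, 1 / fps)
track = 5 * np.sin(2 * np.pi * 2.0 * t) \
        + np.random.default_rng(0).normal(0, 0.5, t.size)
print(f"dominant frequency: {dominant_frequency(track, fps):.2f} Hz")
```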


Subject(s)
Deep Learning , Triticum/physiology , Agriculture , Algorithms , Breeding , Crops, Agricultural , Environment , Motion , Phenotype , Wind
15.
J Synchrotron Radiat ; 26(Pt 3): 839-853, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074449

ABSTRACT

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture, at sufficient time-resolution, the events of interest in a time-series, compelling researchers to perform data collection with a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect a representative tomogram with many projections before or after the time-critical portion of the experiment, without detrimentally affecting the time-series, to aid the analysis process. In this paper, these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample, using learned behaviour from a dataset containing a high number of projections, taken of the same sample and occurring at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as downstream it enables, for example, easier segmentation of the tomograms in areas of interest. The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corruption from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
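
A toy stand-in for the upscaling network follows (layer sizes and the 4x angular factor are assumptions, not the UDNN's actual design): bilinear interpolation along the θ-axis provides a baseline, and a small CNN learns only the residual correction on top of it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinogramUpscaler(nn.Module):
    """Toy sinogram theta-axis upscaler: bilinear interpolation plus a
    small CNN that corrects the interpolation residual."""
    def __init__(self, factor=4):
        super().__init__()
        self.factor = factor
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, sino):                 # (B, 1, n_angles, n_detectors)
        up = F.interpolate(sino, scale_factor=(self.factor, 1),
                           mode="bilinear", align_corners=False)
        return up + self.refine(up)          # learn only the residual

coarse = torch.randn(1, 1, 90, 256)          # 90 projections captured
fine = SinogramUpscaler()(coarse)
print(fine.shape)                            # torch.Size([1, 1, 360, 256])
```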

16.
Plant Physiol ; 178(2): 524-534, 2018 10.
Article in English | MEDLINE | ID: mdl-30097468

ABSTRACT

Three-dimensional (3D) computer-generated models of plants are urgently needed to support both phenotyping and simulation-based studies such as photosynthesis modeling. However, the construction of accurate 3D plant models is challenging, as plants are complex objects with an intricate leaf structure, often consisting of thin and highly reflective surfaces that vary in shape and size, forming dense, complex, crowded scenes. We address these issues within an image-based method by taking an active vision approach to image acquisition, one that investigates the scene in order to intelligently capture images. Rather than use the same camera positions for all plants, our technique is to acquire the images needed to reconstruct the target plant, tuning camera placement to match the plant's individual structure. Our method also combines volumetric- and surface-based reconstruction methods and determines the necessary images based on the analysis of voxel clusters. We describe a fully automatic plant modeling/phenotyping cell (or module) comprising a six-axis robot and a high-precision turntable. By using a standard color camera, we overcome the difficulties associated with laser-based plant reconstruction methods. The 3D models produced are compared with those obtained from fixed cameras and evaluated by comparison with data obtained by X-ray micro-computed tomography across different plant structures. Our results show that our method is successful in improving the accuracy and quality of data obtained from a variety of plant types.


Subject(s)
Imaging, Three-Dimensional/methods , Models, Anatomic , Plant Shoots/anatomy & histology , Plants/anatomy & histology , X-Ray Microtomography/methods , Algorithms , Calibration , Phenotype , Plant Leaves/anatomy & histology
19.
Nat Commun ; 9(1): 1408, 2018 04 12.
Article in English | MEDLINE | ID: mdl-29650967

ABSTRACT

Root traits such as root angle and hair length influence resource acquisition particularly for immobile nutrients like phosphorus (P). Here, we attempted to modify root angle in rice by disrupting the OsAUX1 auxin influx transporter gene in an effort to improve rice P acquisition efficiency. We show by X-ray microCT imaging that root angle is altered in the osaux1 mutant, causing preferential foraging in the top soil where P normally accumulates, yet surprisingly, P acquisition efficiency does not improve. Through closer investigation, we reveal that OsAUX1 also promotes root hair elongation in response to P limitation. Reporter studies reveal that auxin response increases in the root hair zone in low P environments. We demonstrate that OsAUX1 functions to mobilize auxin from the root apex to the differentiation zone where this signal promotes hair elongation when roots encounter low external P. We conclude that auxin and OsAUX1 play key roles in promoting root foraging for P in rice.


Subject(s)
Gene Expression Regulation, Plant , Plant Organogenesis/drug effects , Oryza/drug effects , Phosphates/pharmacology , Plant Roots/drug effects , Gravitropism/physiology , Indoleacetic Acids/metabolism , Membrane Transport Proteins/genetics , Membrane Transport Proteins/metabolism , Plant Organogenesis/genetics , Oryza/genetics , Oryza/growth & development , Oryza/metabolism , Phosphates/deficiency , Plant Growth Regulators/metabolism , Plant Roots/genetics , Plant Roots/growth & development , Plant Roots/metabolism , Plants, Genetically Modified , Stress, Physiological
20.
Plant Cell Environ ; 41(1): 121-133, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28503782

ABSTRACT

Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system.
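
For reference, the form of Richards' equation that such image-based models solve numerically can be written as below (standard notation; the paper's exact parameterization of the retention and conductivity functions may differ):

```latex
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \big[ K(\psi)\, \nabla(\psi + z) \big]
```

where θ is the volumetric water content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, and z the vertical coordinate accounting for gravity.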


Subject(s)
Imaging, Three-Dimensional , Models, Biological , Plant Roots/metabolism , Soil/chemistry , Tomography, X-Ray Computed , Water/metabolism , Plant Roots/anatomy & histology , Porosity