Results 1 - 20 of 38

1.
J Synchrotron Radiat ; 26(Pt 3): 839-853, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074449

ABSTRACT

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture events of interest in a time-series at sufficient time-resolution, compelling researchers to collect a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect a representative tomogram with many projections before or after the time-critical portion of the experiment, which aids the analysis process without detrimentally affecting the time-series. Here, these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. In this paper, a super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using learned behaviour from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as it enables, for example, easier downstream segmentation of the tomograms in areas of interest.
The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corrosion from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
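The paper benchmarks the UDNN against existing interpolation techniques for upscaling the θ-axis of a sinogram. A minimal sketch of such a baseline in NumPy follows; the array shapes and the doubling factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def upscale_sinogram_linear(sinogram: np.ndarray, factor: int = 2) -> np.ndarray:
    """Upsample a sinogram along its angular (theta) axis by linear
    interpolation. `sinogram` has shape (n_angles, n_detectors); the
    result has shape (factor * (n_angles - 1) + 1, n_detectors)."""
    n_angles, n_det = sinogram.shape
    theta = np.arange(n_angles)
    theta_fine = np.linspace(0, n_angles - 1, factor * (n_angles - 1) + 1)
    # Interpolate each detector column independently over the angle axis.
    out = np.empty((theta_fine.size, n_det))
    for d in range(n_det):
        out[:, d] = np.interp(theta_fine, theta, sinogram[:, d])
    return out

# A 3-projection sinogram with 2 detector bins, upscaled 2x -> 5 projections.
s = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
print(upscale_sinogram_linear(s).shape)  # (5, 2)
```

A learned upscaler such as the UDNN replaces this fixed interpolation kernel with a mapping trained on the highly sampled tomogram of the same sample.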

2.
Plant Physiol ; 178(2): 524-534, 2018 10.
Article in English | MEDLINE | ID: mdl-30097468

ABSTRACT

Three-dimensional (3D) computer-generated models of plants are urgently needed to support both phenotyping and simulation-based studies such as photosynthesis modeling. However, the construction of accurate 3D plant models is challenging, as plants are complex objects with an intricate leaf structure, often consisting of thin and highly reflective surfaces that vary in shape and size, forming dense, complex, crowded scenes. We address these issues within an image-based method by taking an active vision approach to image acquisition, one that investigates the scene to intelligently capture images. Rather than using the same camera positions for all plants, our technique acquires the images needed to reconstruct the target plant, tuning camera placement to match the plant's individual structure. Our method also combines volumetric- and surface-based reconstruction methods and determines the necessary images based on the analysis of voxel clusters. We describe a fully automatic plant modeling/phenotyping cell (or module) comprising a six-axis robot and a high-precision turntable. By using a standard color camera, we overcome the difficulties associated with laser-based plant reconstruction methods. The 3D models produced are compared with those obtained from fixed cameras and evaluated by comparison with data obtained by x-ray microcomputed tomography across different plant structures. Our results show that our method is successful in improving the accuracy and quality of data obtained from a variety of plant types.


Subject(s)
Imaging, Three-Dimensional/methods , Models, Anatomic , Plant Shoots/anatomy & histology , Plants/anatomy & histology , X-Ray Microtomography/methods , Algorithms , Calibration , Phenotype , Plant Leaves/anatomy & histology
3.
J Struct Biol ; 198(1): 43-53, 2017 04.
Article in English | MEDLINE | ID: mdl-28246039

ABSTRACT

Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, make the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user's expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.


Asunto(s)
Conjuntos de Datos como Asunto , Programas Informáticos , Algoritmos , Curaduría de Datos/métodos , Aprendizaje Automático
4.
Plant Cell ; 26(3): 862-75, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24632533

ABSTRACT

Auxin is a key regulator of plant growth and development. Within the root tip, auxin distribution plays a crucial role specifying developmental zones and coordinating tropic responses. Determining how the organ-scale auxin pattern is regulated at the cellular scale is essential to understanding how these processes are controlled. In this study, we developed an auxin transport model based on actual root cell geometries and carrier subcellular localizations. We tested model predictions using the DII-VENUS auxin sensor in conjunction with state-of-the-art segmentation tools. Our study revealed that auxin efflux carriers alone cannot create the pattern of auxin distribution at the root tip and that AUX1/LAX influx carriers are also required. We observed that AUX1 in lateral root cap (LRC) and elongating epidermal cells greatly enhances auxin's shootward flux, with this flux being predominantly through the LRC, entering the epidermal cells only as they enter the elongation zone. We conclude that the nonpolar AUX1/LAX influx carriers control which tissues have high auxin levels, whereas the polar PIN carriers control the direction of auxin transport within these tissues.


Subject(s)
Arabidopsis/metabolism , Indoleacetic Acids/metabolism , Plant Roots/metabolism , Biological Transport , Subcellular Fractions/metabolism
5.
Plant Physiol ; 166(4): 1688-98, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25332504

ABSTRACT

Increased adoption of the systems approach to biological research has focused attention on the use of quantitative models of biological objects. This includes a need for realistic three-dimensional (3D) representations of plant shoots for quantification and modeling. Previous limitations in single-view or multiple-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level-set method, optimizing the model based on image information, curvature constraints, and the position of neighboring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on data sets of wheat (Triticum aestivum) and rice (Oryza sativa) plants as well as a unique virtual data set that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modeling applications in a format that can be imported into the majority of 3D graphics and software packages.


Subject(s)
Imaging, Three-Dimensional/methods , Oryza/cytology , Triticum/cytology , Algorithms , Models, Theoretical , Oryza/growth & development , Plant Leaves/cytology , Plant Leaves/growth & development , Plant Shoots/cytology , Plant Shoots/growth & development , Software , Triticum/growth & development
6.
Plant Cell ; 24(4): 1353-61, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22474181

ABSTRACT

It is increasingly important in life sciences that many cell-scale and tissue-scale measurements are quantified from confocal microscope images. However, extracting and analyzing large-scale confocal image data sets represents a major bottleneck for researchers. To aid this process, CellSeT software has been developed, which utilizes tissue-scale structure to help segment individual cells. We provide examples of how the CellSeT software can be used to quantify fluorescence of hormone-responsive nuclear reporters, determine membrane protein polarity, extract cell and tissue geometry for use in later modeling, and take many additional biologically relevant measures using an extensible plug-in toolset. Application of CellSeT promises to remove subjectivity from the resulting data sets and facilitate higher-throughput, quantitative approaches to plant cell research.


Subject(s)
Arabidopsis/cytology , Image Processing, Computer-Assisted/methods , Microscopy, Confocal/methods , Plant Cells/metabolism , Software , Statistics as Topic , Arabidopsis/metabolism , Biomarkers/metabolism , Cell Membrane/metabolism , Cell Nucleus/metabolism , Fluorescence , Genes, Reporter
7.
Proc Natl Acad Sci U S A ; 109(12): 4668-73, 2012 Mar 20.
Article in English | MEDLINE | ID: mdl-22393022

ABSTRACT

Gravity profoundly influences plant growth and development. Plants respond to changes in orientation by using gravitropic responses to modify their growth. Cholodny and Went hypothesized over 80 years ago that plants bend in response to a gravity stimulus by generating a lateral gradient of a growth regulator at an organ's apex, later found to be auxin. Auxin regulates root growth by targeting Aux/IAA repressor proteins for degradation. We used an Aux/IAA-based reporter, domain II (DII)-VENUS, in conjunction with a mathematical model to quantify auxin redistribution following a gravity stimulus. Our multidisciplinary approach revealed that auxin is rapidly redistributed to the lower side of the root within minutes of a 90° gravity stimulus. Unexpectedly, auxin asymmetry was rapidly lost as bending root tips reached an angle of 40° to the horizontal. We hypothesize roots use a "tipping point" mechanism that operates to reverse the asymmetric auxin flow at the midpoint of root bending. These mechanistic insights illustrate the scientific value of developing quantitative reporters such as DII-VENUS in conjunction with parameterized mathematical models to provide high-resolution kinetics of hormone redistribution.


Subject(s)
Arabidopsis/metabolism , Indoleacetic Acids/metabolism , Plant Roots/metabolism , Arabidopsis/growth & development , Dose-Response Relationship, Drug , Environment , Gravitropism/physiology , Kinetics , Models, Biological , Models, Theoretical , Plant Physiological Phenomena , Plant Roots/growth & development , Plant Roots/physiology , Signal Transduction , Systems Biology/methods , Time Factors
8.
Mol Syst Biol ; 9: 699, 2013 Oct 22.
Article in English | MEDLINE | ID: mdl-24150423

ABSTRACT

In Arabidopsis, lateral roots originate from pericycle cells deep within the primary root. New lateral root primordia (LRP) have to emerge through several overlaying tissues. Here, we report that auxin produced in new LRP is transported towards the outer tissues, where it triggers cell separation by inducing both the auxin influx carrier LAX3 and cell-wall enzymes. LAX3 is expressed in just two cell files overlaying new LRP. To understand how this striking pattern of LAX3 expression is regulated, we developed a mathematical model that captures the network regulating its expression and auxin transport within realistic three-dimensional cell and tissue geometries. Our model revealed that, for the LAX3 spatial expression to be robust to natural variations in root tissue geometry, an efflux carrier is required; this was later identified to be PIN3. To prevent LAX3 from being transiently expressed in multiple cell files, PIN3 and LAX3 must be induced consecutively, which we later demonstrated to be the case. Our study exemplifies how mathematical models can be used to direct experiments to elucidate complex developmental processes.


Subject(s)
Arabidopsis Proteins/metabolism , Arabidopsis/metabolism , Gene Expression Regulation, Plant , Indoleacetic Acids/metabolism , Membrane Transport Proteins/metabolism , Plant Roots/metabolism , Arabidopsis/genetics , Arabidopsis/growth & development , Arabidopsis Proteins/genetics , Biological Transport , Cell Wall/genetics , Cell Wall/metabolism , Gene Expression Profiling , Gene Expression Regulation, Developmental , Membrane Transport Proteins/genetics , Models, Genetic , Organ Specificity , Plant Roots/genetics , Plant Roots/growth & development , Signal Transduction
9.
New Phytol ; 202(4): 1212-1222, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24641449

ABSTRACT

Root elongation and bending require the coordinated expansion of multiple cells of different types. These processes are regulated by the action of hormones that can target distinct cell layers. We use a mathematical model to characterise the influence of the biomechanical properties of individual cell walls on the properties of the whole tissue. Taking a simple constitutive model at the cell scale which characterises cell walls via yield and extensibility parameters, we derive the analogous tissue-level model to describe elongation and bending. To accurately parameterise the model, we take detailed measurements of cell turgor, cell geometries and wall thicknesses. The model demonstrates how cell properties and shapes contribute to tissue-level extensibility and yield. Exploiting the highly organised structure of the elongation zone (EZ) of the Arabidopsis root, we quantify the contributions of different cell layers, using the measured parameters. We show how distributions of material and geometric properties across the root cross-section contribute to the generation of curvature, and relate the angle of a gravitropic bend to the magnitude and duration of asymmetric wall softening. We quantify the geometric factors which lead to the predominant contribution of the outer cell files in driving root elongation and bending.
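The cell-scale constitutive model described, with yield and extensibility parameters, is in the spirit of the classical Lockhart growth equation; a sketch of that standard form (the paper's exact formulation may differ) relates the relative elongation rate to the turgor pressure P, a yield threshold Y, and an extensibility φ:

```latex
\frac{1}{L}\frac{\mathrm{d}L}{\mathrm{d}t} =
  \begin{cases}
    \phi\,(P - Y), & P > Y,\\[2pt]
    0, & P \le Y,
  \end{cases}
```

with the analogous tissue-level extensibility and yield then obtained by combining such terms across cell layers, weighted by measured wall thicknesses and cross-sectional geometry.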


Subject(s)
Arabidopsis/physiology , Gravitropism , Plant Roots/physiology , Arabidopsis/cytology , Arabidopsis/growth & development , Cell Wall/metabolism , Mechanical Phenomena , Microscopy, Electron, Transmission , Models, Theoretical , Organ Specificity , Plant Roots/cytology , Plant Roots/growth & development
10.
Plant Physiol ; 162(4): 1802-14, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23766367

ABSTRACT

We present a novel image analysis tool that allows the semiautomated quantification of complex root system architectures in a range of plant species grown and imaged in a variety of ways. The automatic component of RootNav takes a top-down approach, utilizing the powerful expectation maximization classification algorithm to examine regions of the input image, calculating the likelihood that given pixels correspond to roots. This information is used as the basis for an optimization approach to root detection and quantification, which effectively fits a root model to the image data. The resulting user experience is akin to defining routes on a motorist's satellite navigation system: RootNav makes an initial optimized estimate of paths from the seed point to root apices, and the user is able to easily and intuitively refine the results using a visual approach. The proposed method is evaluated on winter wheat (Triticum aestivum) images (and demonstrated on Arabidopsis [Arabidopsis thaliana], Brassica napus, and rice [Oryza sativa]), and results are compared with manual analysis. Four exemplar traits are calculated and show clear illustrative differences between some of the wheat accessions. RootNav, however, provides the structural information needed to support extraction of a wider variety of biologically relevant measures. A separate viewer tool is provided to recover a rich set of architectural traits from RootNav's core representation.
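RootNav's automatic component classifies pixels with expectation maximization. The core of EM can be sketched for a simple case, a two-component 1D Gaussian mixture over pixel intensities; this is a simplification for illustration, not RootNav's actual feature model.

```python
import numpy as np

def em_1d_two_class(x: np.ndarray, iters: int = 100) -> np.ndarray:
    """Fit a two-component 1D Gaussian mixture by expectation
    maximisation; return per-sample posterior probabilities of the
    brighter component (e.g. the likelihood that a pixel is root)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current Gaussians.
        lik = pi / np.sqrt(2 * np.pi * var) * \
              np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return r[:, 1]

# Dark background pixels around 0.2, bright root pixels around 0.8.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 50)])
post = em_1d_two_class(x)
print(post[:200].mean() < 0.1, post[200:].mean() > 0.9)  # True True
```

These per-pixel likelihoods then feed the optimization that fits a root model to the image data.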


Subject(s)
Image Processing, Computer-Assisted/methods , Plant Roots/anatomy & histology , Software , Algorithms , Arabidopsis/anatomy & histology , Brassica napus/anatomy & histology , Meristem/anatomy & histology , Oryza/anatomy & histology , Plant Roots/physiology , Seeds/growth & development , Triticum/anatomy & histology , User-Computer Interface
11.
Biomed Phys Eng Express ; 10(3)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38350128

ABSTRACT

This paper aims to explore the current state of understanding surrounding in silico oral modelling. This involves exploring methodologies, technologies and approaches pertaining to the modelling of the whole oral cavity; both internally and externally visible structures that may be relevant or appropriate to oral actions. Such a model could be referred to as a 'complete model', which includes consideration of a full set of facial features (i.e. not only the mouth) as well as synergistic stimuli such as audio and facial thermal data. 3D modelling technologies capable of accurately and efficiently capturing a complete representation of the mouth for an individual have broad applications in the study of oral actions, due to their cost-effectiveness and time efficiency. This review delves into the field of clinical phonetics to classify oral actions pertaining to both speech and non-speech movements, identifying how the various vocal organs play a role in the articulatory and masticatory process. Vitally, it provides a summary of 12 articulatory recording methods, forming a tool to be used by researchers in identifying which method of recording is appropriate for their work. After addressing the cost and resource-intensive limitations of existing methods, a new system of modelling is proposed that leverages external-to-internal correlation modelling techniques to create more efficient models of the oral cavity. The vision is that the outcomes will be applicable to a broad spectrum of oral functions related to physiology, health and wellbeing, including speech, oral processing of foods and dental health. The applications may span from speech correction to designing foods for the ageing population, whilst in the dental field information about a patient's oral actions could become part of creating a personalised dental treatment plan.


Asunto(s)
Boca , Habla , Humanos , Anciano , Boca/fisiología , Habla/fisiología , Fonética
12.
Bioinformatics ; 27(9): 1337-8, 2011 May 01.
Article in English | MEDLINE | ID: mdl-21398671

ABSTRACT

SUMMARY: The original RootTrace tool has proved successful in measuring primary root lengths across time-series image data. Biologists have shown interest in using the tool to address further problems, namely counting lateral roots to use as parameters in screening studies, and measuring highly curved roots. To address this, the software has been extended to count emerged lateral roots, and the tracking model extended so that strongly curved and agravitropic roots can now be recovered. Here, we describe the novel image analysis algorithms and user interface implemented within the RootTrace framework to handle such situations and evaluate the results. AVAILABILITY: The software is open source and available from http://sourceforge.net/projects/roottrace.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Plant Roots/growth & development , Software , Models, Biological , User-Computer Interface
13.
Radiol Artif Intell ; 4(6): e220096, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36523645

ABSTRACT

This study evaluated deep learning algorithms for semantic segmentation and quantification of intracerebral hemorrhage (ICH), perihematomal edema (PHE), and intraventricular hemorrhage (IVH) on noncontrast CT scans of patients with spontaneous ICH. Models were assessed on 1732 annotated baseline noncontrast CT scans obtained from the Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (ie, TICH-2) international multicenter trial (ISRCTN93732214), and different loss functions using a three-dimensional no-new-U-Net (nnU-Net) were examined to address class imbalance (30% of participants with IVH in the dataset). On the test cohort (n = 174, 10% of the dataset), the top-performing models achieved median Dice similarity coefficients of 0.92 (IQR, 0.89-0.94), 0.66 (0.58-0.71), and 1.00 (0.87-1.00), respectively, for ICH, PHE, and IVH segmentation. U-Net-based networks showed comparable, satisfactory performance on ICH and PHE segmentations (P > .05), but all nnU-Net variants achieved higher accuracy than the Brain Lesion Analysis and Segmentation Tool for CT (BLAST-CT) and DeepLabv3+ for all labels (P < .05). The Focal model showed improved performance in IVH segmentation compared with the Tversky, two-dimensional nnU-Net, U-Net, BLAST-CT, and DeepLabv3+ models (P < .05). Focal achieved concordance values of 0.98, 0.88, and 0.99 for ICH, PHE, and IVH volumes, respectively. The mean volumetric differences between the ground truth and prediction were 0.32 mL (95% CI: -8.35, 9.00), 1.14 mL (-9.53, 11.8), and 0.06 mL (-1.71, 1.84), respectively. In conclusion, U-Net-based networks provide accurate segmentation on CT images of spontaneous ICH, and Focal loss can address class imbalance. International Clinical Trials Registry Platform (ICTRP) no. ISRCTN93732214. Supplemental material is available for this article.
© RSNA, 2022. Keywords: Head/Neck, Brain/Brain Stem, Hemorrhage, Segmentation, Quantification, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms.
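The Dice similarity coefficient used to score the segmentations above is straightforward to compute; a minimal sketch for binary masks (the epsilon guard is an implementation convenience, not from the paper):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|).
    `eps` avoids division by zero when both masks are empty."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2/(3+3) -> 0.667
```

For multi-label tasks such as ICH/PHE/IVH, the coefficient is computed per label and summarized as a median across cases, as in the study.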

14.
Plants (Basel) ; 10(12)2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34961104

ABSTRACT

Wheat head detection is a core computer vision problem related to plant phenotyping that in recent years has seen increased interest as large-scale datasets have been made available for use in research. In deep learning problems with limited training data, synthetic data have been shown to improve performance by increasing the number of training examples available but have had limited effectiveness due to domain shift. To overcome this, many adversarial approaches such as Generative Adversarial Networks (GANs) have been proposed as a solution by better aligning the distribution of synthetic data to that of real images through domain augmentation. In this paper, we examine the impacts of performing wheat head detection on the global wheat head challenge dataset using synthetic data to supplement the original dataset. Through our experimentation, we demonstrate the challenges of performing domain augmentation where the target domain is large and diverse. We then present a novel approach to improving scores through using heatmap regression as a support network, and clustering to combat high variation of the target domain.

15.
Sci Rep ; 11(1): 23279, 2021 12 02.
Article in English | MEDLINE | ID: mdl-34857791

ABSTRACT

Recently, several convolutional neural networks have been proposed not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large data size of the latter, acquiring a sufficient amount of training annotations is much more strenuous than in 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. During experiments we test our models qualitatively, quantitatively and behaviourally, using prespecified segmentations. We demonstrate our models in the domain of time-series tomograms, which are typically undersampled to allow more frequent capture; a particularly challenging problem. Finally, our dataset and code are publicly available.
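The refinement idea, smoothing each voxel's per-time-point CNN confidences with a hidden Markov model, can be illustrated with a plain Viterbi decoder over one voxel's label sequence. This is a simplification of the paper's variants; the "sticky" transition prior used here is an assumption for illustration.

```python
import numpy as np

def viterbi_smooth(emission_probs: np.ndarray, stay: float = 0.9) -> np.ndarray:
    """Temporally smooth per-time-point class probabilities (T x K),
    e.g. CNN softmax confidences for one voxel, with a sticky HMM:
    transition probability `stay` for keeping a label, uniform otherwise."""
    T, K = emission_probs.shape
    trans = np.full((K, K), (1.0 - stay) / (K - 1))
    np.fill_diagonal(trans, stay)
    log_e = np.log(emission_probs + 1e-12)
    log_t = np.log(trans)
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = log_e[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_t      # (K, K): prev -> current
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_e[t]
    path = np.zeros(T, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):                # backtrack best path
        path[t] = back[t + 1, path[t + 1]]
    return path

# A one-frame low-confidence "flicker" at t=2 is smoothed away by the prior.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.45, 0.55], [0.9, 0.1]])
print(viterbi_smooth(probs))  # [0 0 0 0]
```

A genuine, sustained label change survives the smoothing, since its cumulative emission evidence outweighs the single transition penalty.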

16.
Plant Phenomics ; 2021: 9874597, 2021.
Article in English | MEDLINE | ID: mdl-34708214

ABSTRACT

3D reconstruction of fruit is important as a key component of fruit grading and an important part of many size estimation pipelines. Like many computer vision challenges, the 3D reconstruction task suffers from a lack of readily available training data in most domains, with methods typically depending on large datasets of high-quality image-model pairs. In this paper, we propose an unsupervised domain-adaptation approach to 3D reconstruction where labelled images only exist in our source synthetic domain, and training is supplemented with different unlabelled datasets from the target real domain. We approach the problem of 3D reconstruction using volumetric regression and produce a training set of 25,000 pairs of images and volumes using hand-crafted 3D models of bananas rendered in a 3D modelling environment (Blender). Each image is then enhanced by a GAN to more closely match the domain of real photographs, and a volumetric consistency loss is introduced to improve the performance of 3D reconstruction on real images. Our solution harnesses the cost benefits of synthetic data while still maintaining good performance on real-world images. We focus this work on the task of 3D banana reconstruction from a single image, representing a common task in plant phenotyping, but this approach is general and may be adapted to any 3D reconstruction task, including other plant species and organs.

17.
Mach Vis Appl ; 31(1): 2, 2020.
Article in English | MEDLINE | ID: mdl-31894176

ABSTRACT

There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, thus making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the model parameter numbers and weight matrices of these very deep CNN-based models. Our combined method reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
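The SVD half of the compression can be sketched as a truncated factorization of a layer's dense weight matrix; the shapes and rank below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def svd_compress(W: np.ndarray, rank: int):
    """Factor a dense weight matrix W (m x n) into two thin matrices
    A (m x r) and B (r x n) via truncated SVD, so a layer y = W x
    becomes y = A (B x) with r*(m+n) parameters instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
A, B = svd_compress(W, rank=16)
saved = 1 - (A.size + B.size) / W.size
print(round(saved, 3))  # 0.875
```

In a network, the single layer is replaced by two consecutive smaller layers (B then A), trading a small reconstruction error for a large reduction in weights.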

18.
Front Plant Sci ; 11: 1275, 2020.
Article in English | MEDLINE | ID: mdl-32983190

ABSTRACT

Understanding plant growth processes is important for many aspects of biology and food security. Automating the observations of plant development-a process referred to as plant phenotyping-is increasingly important in the plant sciences, and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to address the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight challenges with these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. Evaluation of all approaches is performed on an unseen time-series sequence comprising several individual 3D volumes, capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.

19.
Plant Methods ; 16: 29, 2020.
Article in English | MEDLINE | ID: mdl-32165909

ABSTRACT

BACKGROUND: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. RESULTS: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) in an NVIDIA Titan X GPU environment. CONCLUSION: The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management due to its high-speed inference. We recommend using synthetic and empirical field images together in the training stage to improve the performance of models.
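The anchor box computation described can be sketched as k-means over the (width, height) pairs of the training boxes. This sketch uses Euclidean distance and deterministic seeding for brevity; YOLO implementations commonly cluster on 1 - IoU instead, and the box values below are invented for illustration.

```python
import numpy as np

def kmeans_anchors(boxes: np.ndarray, k: int, iters: int = 50) -> np.ndarray:
    """Cluster (width, height) pairs of training boxes into k anchor
    sizes with plain k-means, seeding centres from boxes spread evenly
    by area so the result is deterministic."""
    area_order = np.argsort(boxes[:, 0] * boxes[:, 1])
    seeds = area_order[np.linspace(0, len(boxes) - 1, k).astype(int)]
    centers = boxes[seeds].astype(float)
    for _ in range(iters):
        # Assign each box to its nearest centre, then recompute centres.
        d = np.linalg.norm(boxes[:, None].astype(float) - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([boxes[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return np.round(centers).astype(int)

# Two obvious size clusters -> two anchors near their means.
boxes = np.array([[10, 12], [12, 10], [11, 11], [50, 60], [52, 58], [51, 59]])
print(sorted(kmeans_anchors(boxes, k=2).tolist()))  # [[11, 11], [51, 59]]
```

The resulting anchor sizes are then written into the YOLO configuration so predicted box offsets start from priors that match the dataset.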

20.
IEEE/ACM Trans Comput Biol Bioinform ; 17(6): 1907-1917, 2020.
Article in English | MEDLINE | ID: mdl-31027044

ABSTRACT

Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to contribute to reducing the bottleneck associated with phenotyping of architectural traits. The pipeline provides a fully automated response to photometric data acquisition and the recovery of three-dimensional (3D) models of plants without the dependency of botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC) consisting of a camera-mounted robot arm plus combined software interface and a novel surface reconstruction algorithm is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high throughput phenotyping, improving current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.


Subject(s)
Imaging, Three-Dimensional/methods , Models, Biological , Plant Shoots , Algorithms , Computational Biology , Phenotype , Plant Shoots/anatomy & histology , Plant Shoots/classification , Plant Shoots/physiology , Plants/anatomy & histology , Plants/classification , Software , Surface Properties