1.
Biomed Phys Eng Express ; 10(3)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38350128

ABSTRACT

The paper aims to explore the current state of understanding surrounding in silico oral modelling. This involves exploring methodologies, technologies and approaches pertaining to the modelling of the whole oral cavity; both internally and externally visible structures that may be relevant or appropriate to oral actions. Such a model could be referred to as a 'complete model', which includes consideration of a full set of facial features (i.e. not only the mouth) as well as synergistic stimuli such as audio and facial thermal data. 3D modelling technologies capable of accurately and efficiently capturing a complete representation of the mouth for an individual have broad applications in the study of oral actions, due to their cost-effectiveness and time efficiency. This review delves into the field of clinical phonetics to classify oral actions pertaining to both speech and non-speech movements, identifying how the various vocal organs play a role in the articulatory and masticatory processes. Vitally, it provides a summation of 12 articulatory recording methods, forming a tool researchers can use to identify which method of recording is appropriate for their work. After addressing the cost and resource-intensive limitations of existing methods, a new system of modelling is proposed that leverages external-to-internal correlation modelling techniques to create more efficient models of the oral cavity. The vision is that the outcomes will be applicable to a broad spectrum of oral functions related to physiology, health and wellbeing, including speech, oral processing of foods, and dental health. Applications may span from speech correction to designing foods for the aging population, while in the dental field information about a patient's oral actions could become part of creating a personalised dental treatment plan.


Subject(s)
Mouth; Speech; Humans; Aged; Mouth/physiology; Speech/physiology; Phonetics
2.
Radiol Artif Intell ; 4(6): e220096, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36523645

ABSTRACT

This study evaluated deep learning algorithms for semantic segmentation and quantification of intracerebral hemorrhage (ICH), perihematomal edema (PHE), and intraventricular hemorrhage (IVH) on noncontrast CT scans of patients with spontaneous ICH. Models were assessed on 1732 annotated baseline noncontrast CT scans obtained from the Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (TICH-2) international multicenter trial (ISRCTN93732214), and different loss functions using a three-dimensional no-new-U-Net (nnU-Net) were examined to address class imbalance (30% of participants in the dataset had IVH). On the test cohort (n = 174, 10% of the dataset), the top-performing models achieved median Dice similarity coefficients of 0.92 (IQR, 0.89-0.94), 0.66 (0.58-0.71), and 1.00 (0.87-1.00) for ICH, PHE, and IVH segmentation, respectively. U-Net-based networks showed comparable, satisfactory performance on ICH and PHE segmentation (P > .05), but all nnU-Net variants achieved higher accuracy than the Brain Lesion Analysis and Segmentation Tool for CT (BLAST-CT) and DeepLabv3+ for all labels (P < .05). The Focal model showed improved performance in IVH segmentation compared with the Tversky, two-dimensional nnU-Net, U-Net, BLAST-CT, and DeepLabv3+ models (P < .05). Focal achieved concordance values of 0.98, 0.88, and 0.99 for ICH, PHE, and IVH volumes, respectively. The mean volumetric differences between the ground truth and prediction were 0.32 mL (95% CI: -8.35, 9.00), 1.14 mL (-9.53, 11.8), and 0.06 mL (-1.71, 1.84), respectively. In conclusion, U-Net-based networks provide accurate segmentation on CT images of spontaneous ICH, and Focal loss can address class imbalance. International Clinical Trials Registry Platform (ICTRP) no. ISRCTN93732214. Supplemental material is available for this article.
© RSNA, 2022. Keywords: Head/Neck, Brain/Brain Stem, Hemorrhage, Segmentation, Quantification, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms.
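The focal loss used here to address class imbalance can be illustrated with a minimal NumPy sketch of its binary form (Lin et al.'s formulation); the γ and α values below are the common defaults, not necessarily those used in the trial:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights well-classified examples.

    p : predicted foreground probabilities, shape (N,)
    y : binary ground-truth labels, shape (N,)
    """
    p = np.clip(p, eps, 1 - eps)
    # p_t is the probability the model assigned to the true class
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    # the (1 - p_t)^gamma factor shrinks the loss of easy examples
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# A confident correct prediction contributes far less than a confident mistake,
# which is what lets rare classes (such as IVH voxels) dominate the gradient.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.05]), np.array([1]))
```

With γ = 0 and α = 0.5 this reduces to (half of) standard cross-entropy; increasing γ focuses training on hard, typically minority-class, voxels.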

3.
Sci Rep ; 11(1): 23279, 2021 12 02.
Article in English | MEDLINE | ID: mdl-34857791

ABSTRACT

Recently, several convolutional neural networks have been proposed not only for 2D images, but also for 3D and 4D volume segmentation. Nevertheless, due to the large size of the latter data, acquiring a sufficient amount of training annotations is much more strenuous than for 2D images. For 4D time-series tomograms, this is usually handled by segmenting the constituent tomograms independently through time with 3D convolutional neural networks. Inter-volume information is therefore not utilized, potentially leading to temporal incoherence. In this paper, we attempt to resolve this by proposing two hidden Markov model variants that refine 4D segmentation labels made by 3D convolutional neural networks working on each time point. Our models utilize not only inter-volume information, but also the prediction confidence generated by the 3D segmentation convolutional neural networks themselves. To the best of our knowledge, this is the first attempt to refine 4D segmentations made by 3D convolutional neural networks using hidden Markov models. In experiments, we test our models qualitatively, quantitatively, and behaviourally, using prespecified segmentations. We demonstrate our approach in the domain of time-series tomograms, which are typically undersampled to allow more frequent capture, making this a particularly challenging problem. Finally, our dataset and code are publicly available.
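The core idea of refining per-frame CNN labels with a hidden Markov model can be illustrated, in highly simplified form, by Viterbi decoding over a single voxel's class probabilities with a "sticky" transition prior. This is a hypothetical sketch, not the authors' exact model (which also uses inter-volume information):

```python
import numpy as np

def viterbi_smooth(probs, stay=0.9):
    """Temporally smooth per-frame class probabilities for one voxel.

    probs : (T, K) per-frame class probabilities from a 3D CNN
    stay  : prior probability that a voxel keeps its label between frames
    Returns the most likely label sequence under a sticky HMM.
    """
    T, K = probs.shape
    trans = np.full((K, K), (1 - stay) / (K - 1))
    np.fill_diagonal(trans, stay)
    log_t = np.log(trans)
    log_e = np.log(np.clip(probs, 1e-12, None))  # CNN confidences as emissions
    score = log_e[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t            # (prev state, current state)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_e[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                # backtrack best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# A single-frame flicker (frame 2 weakly favours class 1) is smoothed away,
# restoring temporal coherence that independent 3D segmentation lacks.
p = np.array([[0.9, 0.1], [0.9, 0.1], [0.45, 0.55], [0.9, 0.1]])
labels = viterbi_smooth(p)
```

With `stay=0.5` the transition prior is uniform and the decoder reduces to per-frame argmax, recovering the incoherent baseline behaviour described above.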

4.
Plants (Basel) ; 10(12)2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34961104

ABSTRACT

Wheat head detection is a core computer vision problem related to plant phenotyping that in recent years has seen increased interest as large-scale datasets have been made available for use in research. In deep learning problems with limited training data, synthetic data have been shown to improve performance by increasing the number of training examples available but have had limited effectiveness due to domain shift. To overcome this, many adversarial approaches such as Generative Adversarial Networks (GANs) have been proposed as a solution by better aligning the distribution of synthetic data to that of real images through domain augmentation. In this paper, we examine the impacts of performing wheat head detection on the global wheat head challenge dataset using synthetic data to supplement the original dataset. Through our experimentation, we demonstrate the challenges of performing domain augmentation where the target domain is large and diverse. We then present a novel approach to improving scores through using heatmap regression as a support network, and clustering to combat high variation of the target domain.

5.
Plant Phenomics ; 2021: 9874597, 2021.
Article in English | MEDLINE | ID: mdl-34708214

ABSTRACT

3D reconstruction of fruit is important as a key component of fruit grading and an important part of many size estimation pipelines. Like many computer vision challenges, the 3D reconstruction task suffers from a lack of readily available training data in most domains, with methods typically depending on large datasets of high-quality image-model pairs. In this paper, we propose an unsupervised domain-adaptation approach to 3D reconstruction where labelled images only exist in our source synthetic domain, and training is supplemented with different unlabelled datasets from the target real domain. We approach the problem of 3D reconstruction using volumetric regression and produce a training set of 25,000 pairs of images and volumes using hand-crafted 3D models of bananas rendered in a 3D modelling environment (Blender). Each image is then enhanced by a GAN to more closely match the domain of real photographs; introducing a volumetric consistency loss improves the performance of 3D reconstruction on real images. Our solution harnesses the cost benefits of synthetic data while still maintaining good performance on real world images. We focus this work on the task of 3D banana reconstruction from a single image, representing a common task in plant phenotyping, but this approach is general and may be adapted to any 3D reconstruction task including other plant species and organs.
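A volumetric consistency term of the kind described (penalising disagreement between the volumes regressed from an image before and after GAN enhancement) might, as a simplified sketch under our own assumptions rather than the paper's definition, take the form of a soft-Dice disagreement:

```python
import numpy as np

def volumetric_consistency(v1, v2, eps=1e-7):
    """Soft-Dice disagreement between two predicted occupancy volumes.

    Returns 0 when the volumes agree exactly and approaches 1 as they
    diverge; minimising it encourages the GAN-enhanced image to yield
    the same reconstruction as the original synthetic render.
    """
    inter = np.sum(v1 * v2)
    return 1.0 - (2 * inter + eps) / (np.sum(v1) + np.sum(v2) + eps)

# Identical volumes give (near) zero loss; a perturbed volume is penalised.
a = np.zeros((4, 4, 4))
a[1:3, 1:3, 1:3] = 1.0          # a 2x2x2 occupied block
b = a.copy()
loss_same = volumetric_consistency(a, b)
b[0, 0, 0] = 1.0                # spurious extra voxel
loss_diff = volumetric_consistency(a, b)
```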

6.
Front Plant Sci ; 11: 1275, 2020.
Article in English | MEDLINE | ID: mdl-32983190

ABSTRACT

Understanding plant growth processes is important for many aspects of biology and food security. Automating the observations of plant development-a process referred to as plant phenotyping-is increasingly important in the plant sciences, and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to address the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight challenges with these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. Evaluation of all approaches is performed on an unseen time-series sequence comprising several individual 3D volumes, capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.

7.
Plant Methods ; 16: 29, 2020.
Article in English | MEDLINE | ID: mdl-32165909

ABSTRACT

BACKGROUND: Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in appearance of plants, illumination changes, foliage occlusions, and different growth stages under field conditions. Current approaches for weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling. These might fail to generalize over different fields and environments. RESULTS: Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images, before combining these images with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that the combination of synthetic and original field images to train the developed model could improve the mean average precision (mAP) metric from 0.751 to 0.829 compared to using collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed. Specifically, the average precisions (APs@IoU0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with 6.48 ms inference time per image (800 × 1200) on an NVIDIA Titan X GPU. CONCLUSION: The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management due to its high-speed inference. We recommend using synthetic and empirical field images together during the training stage to improve model performance.
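Anchor-box estimation by k-means, as mentioned above, conventionally clusters box width/height pairs under a 1 − IoU distance rather than Euclidean distance. A minimal sketch (not the authors' exact code):

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors compared by width/height only,
    as if both were centred at the same point (the YOLO anchor convention)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0] * boxes[:, 1]
    union = union[:, None] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def anchor_kmeans(boxes, k, iters=100, seed=0):
    """Cluster (w, h) pairs with distance = 1 - IoU; returns k anchors."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to its highest-IoU anchor, then recompute means
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors

# Two obvious size groups collapse to two anchors near (10, 10) and (50, 60)
boxes = np.array([[10, 10], [12, 9], [9, 11],
                  [50, 60], [52, 58], [48, 62]], dtype=float)
anchors = anchor_kmeans(boxes, k=2)
```

Using 1 − IoU makes the clustering scale-aware: a 10-pixel error matters far more for small weed boxes than for large sugar beet boxes.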

8.
Mach Vis Appl ; 31(1): 2, 2020.
Article in English | MEDLINE | ID: mdl-31894176

ABSTRACT

There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of in-field deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, thus making them difficult to deploy in the field on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable in the field and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce model parameter numbers and weight matrices of these very deep CNN-based models. Using our combined method (separable convolution and SVD) reduced the weight matrix by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
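The SVD half of such a compression scheme can be sketched directly: a dense weight matrix W of shape (m, n) is replaced by two rank-k factors, cutting parameters from m·n to k·(m + n). For an (approximately) low-rank W this loses little accuracy; the numbers here are illustrative, not the paper's models:

```python
import numpy as np

def svd_compress(W, k):
    """Replace a dense layer weight W (m, n) with two rank-k factors.

    Parameters drop from m*n to k*(m + n); the layer becomes two
    smaller matrix multiplies, x @ B.T @ A.T instead of x @ W.T.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]      # (m, k), singular values folded in
    B = Vt[:k]                # (k, n)
    return A, B

rng = np.random.default_rng(1)
# A weight matrix that is exactly rank 8 compresses with negligible error
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 512))
A, B = svd_compress(W, k=8)
orig_params = W.size                 # 256 * 512 = 131072
new_params = A.size + B.size         # 8 * (256 + 512) = 6144
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

Here the factorisation removes about 95% of the weights, matching the scale of reduction the abstract reports when SVD is combined with separable convolutions.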

9.
IEEE/ACM Trans Comput Biol Bioinform ; 17(6): 1907-1917, 2020.
Article in English | MEDLINE | ID: mdl-31027044

ABSTRACT

Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented which aims to contribute to reducing the bottleneck associated with phenotyping of architectural traits. The pipeline provides a fully automated response to photometric data acquisition and the recovery of three-dimensional (3D) models of plants without dependency on botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC) consisting of a camera-mounted robot arm plus combined software interface and a novel surface reconstruction algorithm is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high throughput phenotyping, improving current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.


Subject(s)
Imaging, Three-Dimensional/methods; Models, Biological; Plant Shoots; Algorithms; Computational Biology; Phenotype; Plant Shoots/anatomy & histology; Plant Shoots/classification; Plant Shoots/physiology; Plants/anatomy & histology; Plants/classification; Software; Surface Properties
10.
Front Plant Sci ; 10: 1516, 2019.
Article in English | MEDLINE | ID: mdl-31850020

ABSTRACT

Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. Our system first predicts age group ('young' and 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. 
We achieve 91% accuracy on predicting ages of storage roots, and 86% and 71% overall percentage agreement on counting 'old' and 'young' storage roots respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.

11.
Gigascience ; 8(11)2019 11 01.
Article in English | MEDLINE | ID: mdl-31702012

ABSTRACT

BACKGROUND: In recent years quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge. Root images contain complicated structures, variations in size, background, occlusion, clutter and variation in lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds, first order and second order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images. CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. 
The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Plant Roots/anatomy & histology; Plant Roots/growth & development
12.
Plant Methods ; 15: 131, 2019.
Article in English | MEDLINE | ID: mdl-31728153

ABSTRACT

BACKGROUND: Root and tuber crops are, after cereals, an increasingly important source of carbohydrates. Despite their commercial impact, there are significant knowledge gaps about the environmental and inherent regulation of storage root (SR) differentiation, due in part to the innate problems of studying storage roots and the lack of a suitable model system for monitoring storage root growth. The research presented here aimed to develop a reliable, low-cost, effective system that enables the study of the factors influencing cassava storage root initiation and development. RESULTS: We explored simple, low-cost systems for the study of storage root biology. An aeroponics system described here is ideal for real-time monitoring of storage root development (SRD), and this was further validated using hormone studies. Our aeroponics-based auxin studies revealed that storage root initiation and development are adaptive responses, which are significantly enhanced by exogenous auxin supply. Field and histological experiments were also conducted to confirm the auxin effect found in the aeroponics system. We also developed a simple digital imaging platform to quantify storage root growth and development traits. Correlation analysis confirmed that image-based estimation can be a surrogate for manual root phenotyping for several key traits. CONCLUSIONS: The aeroponic system developed from this study is an effective tool for examining the root architecture of cassava during early SRD. The aeroponic system also provided novel insights into storage root formation by activating the auxin-dependent proliferation of secondary xylem parenchyma cells to induce the initial root thickening and bulking. The developed system can be of direct benefit to molecular biologists, breeders, and physiologists, allowing them to screen germplasm for root traits that correlate with improved economic traits.

13.
J Synchrotron Radiat ; 26(Pt 3): 839-853, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31074449

ABSTRACT

X-ray computed tomography and, specifically, time-resolved volumetric tomography data collections (4D datasets) routinely produce terabytes of data, which need to be effectively processed after capture. This is often complicated by the high rate of data collection required to capture events of interest in a time-series at sufficient time-resolution, compelling researchers to collect a low number of projections for each tomogram in order to achieve the desired 'frame rate'. It is common practice to collect, before or after the time-critical portion of the experiment, a representative tomogram with many projections; this aids the analysis process without detrimentally affecting the time-series. In this paper, these highly sampled data are used to aid feature detection in the rapidly collected tomograms by assisting with the upsampling of their projections, which is equivalent to upscaling the θ-axis of the sinograms. A super-resolution approach is proposed based on deep learning (termed an upscaling Deep Neural Network, or UDNN) that aims to upscale the sinogram space of individual tomograms in a 4D dataset of a sample. This is done using behaviour learned from a dataset containing a high number of projections, taken of the same sample at the beginning or the end of the data collection. The prior provided by the highly sampled tomogram allows the application of an upscaling process with better accuracy than existing interpolation techniques. This upscaling process subsequently permits an increase in the quality of the tomogram's reconstruction, especially in situations that require capture of only a limited number of projections, as is the case in high-frequency time-series capture. The increase in quality can prove very helpful for researchers, as it enables, for example, easier downstream segmentation of the tomograms in areas of interest.
The method itself comprises a convolutional neural network which, through training, learns an end-to-end mapping between sinograms with a low and a high number of projections. Since datasets can differ greatly between experiments, this approach specifically develops a lightweight network that can easily and quickly be retrained for different types of samples. As part of the evaluation of our technique, results with different hyperparameter settings are presented, and the method has been tested on both synthetic and real-world data. In addition, accompanying real-world experimental datasets have been released in the form of two 80 GB tomograms depicting a metallic pin that undergoes corruption from a droplet of salt water. A new engineering-based phantom dataset, inspired by the experimental datasets, has also been produced and released.
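The interpolation techniques that such a learned upscaler is compared against amount to filling in missing projection angles along the sinogram's θ-axis; a minimal linear-interpolation sketch (an illustrative baseline, not the UDNN itself):

```python
import numpy as np

def upsample_theta(sino, factor):
    """Linearly interpolate a sinogram (n_proj, n_det) along the angle axis.

    Missing projection angles are filled in between measured ones; this is
    the kind of baseline a learned approach aims to beat.
    """
    n_proj, n_det = sino.shape
    theta = np.arange(n_proj)
    theta_up = np.linspace(0, n_proj - 1, (n_proj - 1) * factor + 1)
    # interpolate each detector bin's profile across angles independently
    return np.stack(
        [np.interp(theta_up, theta, sino[:, d]) for d in range(n_det)],
        axis=1,
    )

coarse = np.array([[0.0, 1.0],
                   [2.0, 3.0],
                   [4.0, 5.0]])          # 3 projections, 2 detector bins
fine = upsample_theta(coarse, factor=2)  # 5 projections
```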

14.
Science ; 362(6421): 1407-1410, 2018 12 21.
Article in English | MEDLINE | ID: mdl-30573626

ABSTRACT

Plants adapt to heterogeneous soil conditions by altering their root architecture. For example, roots branch when in contact with water by using the hydropatterning response. We report that hydropatterning is dependent on auxin response factor ARF7. This transcription factor induces asymmetric expression of its target gene LBD16 in lateral root founder cells. This differential expression pattern is regulated by posttranslational modification of ARF7 with the small ubiquitin-like modifier (SUMO) protein. SUMOylation negatively regulates ARF7 DNA binding activity. ARF7 SUMOylation is required to recruit the Aux/IAA (indole-3-acetic acid) repressor protein IAA3. Blocking ARF7 SUMOylation disrupts IAA3 recruitment and hydropatterning. We conclude that SUMO-dependent regulation of auxin response controls root branching pattern in response to water availability.


Subject(s)
Arabidopsis Proteins/metabolism; Arabidopsis/growth & development; Plant Roots/growth & development; Sumoylation; Transcription Factors/metabolism; Water/metabolism; Arabidopsis/genetics; Arabidopsis/metabolism; Arabidopsis Proteins/genetics; DNA, Plant/metabolism; Gene Expression Regulation, Plant; Indoleacetic Acids/metabolism; Nuclear Proteins/metabolism; Plant Roots/genetics; Plant Roots/metabolism; Protein Binding; SUMO-1 Protein/metabolism
15.
Plant Physiol ; 178(2): 524-534, 2018 10.
Article in English | MEDLINE | ID: mdl-30097468

ABSTRACT

Three-dimensional (3D) computer-generated models of plants are urgently needed to support both phenotyping and simulation-based studies such as photosynthesis modeling. However, the construction of accurate 3D plant models is challenging, as plants are complex objects with an intricate leaf structure, often consisting of thin and highly reflective surfaces that vary in shape and size, forming dense, complex, crowded scenes. We address these issues within an image-based method by taking an active vision approach to image acquisition, one that investigates the scene to intelligently capture images. Rather than use the same camera positions for all plants, our technique is to acquire the images needed to reconstruct the target plant, tuning camera placement to match the plant's individual structure. Our method also combines volumetric- and surface-based reconstruction methods and determines the necessary images based on the analysis of voxel clusters. We describe a fully automatic plant modeling/phenotyping cell (or module) comprising a six-axis robot and a high-precision turntable. By using a standard color camera, we overcome the difficulties associated with laser-based plant reconstruction methods. The 3D models produced are compared with those obtained from fixed cameras and evaluated by comparison with data obtained by x-ray microcomputed tomography across different plant structures. Our results show that our method is successful in improving the accuracy and quality of data obtained from a variety of plant types.


Subject(s)
Imaging, Three-Dimensional/methods; Models, Anatomic; Plant Shoots/anatomy & histology; Plants/anatomy & histology; X-Ray Microtomography/methods; Algorithms; Calibration; Phenotype; Plant Leaves/anatomy & histology
17.
Front Plant Sci ; 9: 735, 2018.
Article in English | MEDLINE | ID: mdl-29922313

ABSTRACT

Phosphorus is a crucial macronutrient for plants, playing a critical role in many cellular signaling and energy cycling processes. In light of this, phosphorus acquisition efficiency is an important target trait for crop improvement, but it also provides an ecological adaptation for growth of plants in low nutrient environments. Increased root hair density has been shown to improve phosphorus uptake and plant health in a number of species. In several plant families, including Brassicaceae, root hair bearing cells are positioned on the epidermis according to their position in relation to cortex cells, with hair cells positioned in the cleft between two underlying cortex cells. Thus, the number of cortex cells determines the number of epidermal cells in the root hair position. Previous research has associated phosphorus-limiting conditions with an increase in the number of cortex cell files in Arabidopsis thaliana roots, but did not investigate the spatial or temporal domains in which these extra divisions occur, or explore the consequences for root hair formation. In this study, we use 3D reconstructions of root meristems to demonstrate that the radial anticlinal cell divisions seen under low phosphate are exclusive to the cortex. When grown on media containing replete levels of phosphorus, A. thaliana plants almost invariably show eight cortex cells; however, when grown in phosphate-limited conditions, seedlings develop up to 16 cortex cells (with 10-14 being the most typical). This results in a significant increase in the number of epidermal cells at hair forming positions. These radial anticlinal divisions occur within the initial cells and can be seen within 24 h of transfer of plants to low-phosphorus conditions. We show that these changes in the underlying cortical cells feed into epidermal patterning by altering the regular spacing of root hairs.

18.
Plant Methods ; 13: 80, 2017.
Article in English | MEDLINE | ID: mdl-29051772

ABSTRACT

This review explores how imaging techniques are being developed with a focus on deployment for crop monitoring methods. Imaging applications are discussed in relation to both field and glasshouse-based plants, and techniques are sectioned into 'healthy and diseased plant classification' with an emphasis on classification accuracy, early detection of stress, and disease severity. A central focus of the review is the use of hyperspectral imaging and how this is being utilised to find additional information about plant health, and the ability to predict onset of disease. A summary of techniques used to detect biotic and abiotic stress in plants is presented, including the level of accuracy associated with each method.

19.
Gigascience ; 6(10): 1-10, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29020747

ABSTRACT

In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.


Subject(s)
Machine Learning, Plant Roots/classification, Plant Shoots/classification, Phenotype, Plant Roots/genetics, Plant Shoots/genetics, Plants, Quantitative Trait Loci, Triticum/classification, Triticum/genetics
20.
J Vis Exp ; (126)2017 08 23.
Article in English | MEDLINE | ID: mdl-28872144

ABSTRACT

Segmentation is the process of isolating specific regions or objects within an imaged volume, so that further study can be undertaken on these areas of interest. When considering the analysis of complex biological systems, the segmentation of three-dimensional image data is a time-consuming and labor-intensive step. With the increased availability of many imaging modalities and automated data collection schemes, this poses an increased challenge for the modern experimental biologist in moving from data to knowledge. This publication describes the use of SuRVoS Workbench, a program designed to address these issues by providing methods to semi-automatically segment complex biological volumetric data. Three datasets of differing magnification and imaging modalities are presented here, each highlighting a different segmentation strategy in SuRVoS. Phase contrast X-ray tomography (microCT) of the fruiting body of a plant is used to demonstrate segmentation using model training, cryo-electron tomography (cryoET) of human platelets is used to demonstrate segmentation using super- and megavoxels, and cryo soft X-ray tomography (cryoSXT) of a mammalian cell line is used to demonstrate the label-splitting tools. Strategies and parameters for each data type are also presented. By blending a selection of semi-automatic processes into a single interactive tool, SuRVoS provides several benefits. Overall time to segment volumetric data is reduced by a factor of five when compared to manual segmentation, a mainstay in many image processing fields. This is a significant saving when full manual segmentation can take weeks of effort. Additionally, subjectivity is addressed through the use of computationally identified boundaries, and by splitting complex collections of objects by their calculated properties rather than on a case-by-case basis.
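The supervoxel idea behind that speed-up can be sketched simply: group voxels into blocks and assign labels per block rather than per voxel, so each interactive decision covers many voxels at once. This toy version uses fixed cubic blocks and a mean-intensity threshold, where SuRVoS itself computes boundary-respecting supervoxels; block size and threshold here are illustrative:

```python
def mean(vals):
    return sum(vals) / len(vals)

def segment_supervoxels(volume, block=2, threshold=0.5):
    """volume: nested lists [z][y][x] of intensities in [0, 1].
    Partition the volume into cubic blocks and label each block as
    foreground (1) if its mean intensity exceeds the threshold,
    else background (0). Returns the block-level label grid."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    labels = []
    for z0 in range(0, nz, block):
        plane = []
        for y0 in range(0, ny, block):
            row = []
            for x0 in range(0, nx, block):
                vals = [volume[z][y][x]
                        for z in range(z0, min(z0 + block, nz))
                        for y in range(y0, min(y0 + block, ny))
                        for x in range(x0, min(x0 + block, nx))]
                row.append(1 if mean(vals) > threshold else 0)
            plane.append(row)
        labels.append(plane)
    return labels

# 4x4x4 toy volume with one bright octant (z, y, x all < 2).
volume = [[[1.0 if (z < 2 and y < 2 and x < 2) else 0.0
            for x in range(4)] for y in range(4)] for z in range(4)]
labels = segment_supervoxels(volume, block=2)
```

With `block=2`, labelling this 64-voxel volume takes 8 block decisions instead of 64 voxel decisions, which is the kind of reduction in interactive effort that semi-automatic tools exploit.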


Subject(s)
Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Humans