RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures.
Yasrab, Robail; Atkinson, Jonathan A; Wells, Darren M; French, Andrew P; Pridmore, Tony P; Pound, Michael P.
Affiliation
  • Yasrab R; School of Computer Science, University of Nottingham, Jubilee Campus, Wollaton Road, Nottingham NG8 1BB, UK.
  • Atkinson JA; School of Biosciences, Sutton Bonington Campus, University of Nottingham, Nottingham LE12 5RD, UK.
  • Wells DM; School of Biosciences, Sutton Bonington Campus, University of Nottingham, Nottingham LE12 5RD, UK.
  • French AP; School of Computer Science, University of Nottingham, Jubilee Campus, Wollaton Road, Nottingham NG8 1BB, UK.
  • Pridmore TP; School of Biosciences, Sutton Bonington Campus, University of Nottingham, Nottingham LE12 5RD, UK.
  • Pound MP; School of Computer Science, University of Nottingham, Jubilee Campus, Wollaton Road, Nottingham NG8 1BB, UK.
GigaScience; 8(11), 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31702012
ABSTRACT

BACKGROUND:

In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses, such as high temperature and drought, on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge: root images contain complicated structures and vary in size, background, occlusion, clutter, and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips, which drive a search algorithm that seeks optimal paths through the image, extracting accurate architectures without user interaction.
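As a rough illustration of how detected tips and seed locations could drive such a search (a hedged sketch only, not the authors' implementation; the probability-map cost function, 4-connectivity, and function names are assumptions), a Dijkstra-style search can trace each root from its tip back to the seed over the network's segmentation output:

    # Hedged sketch: trace a root as the cheapest tip-to-seed path over a
    # segmentation probability map, where pixels the network believes are
    # root have low traversal cost. Not the authors' code.
    import heapq
    import numpy as np

    def trace_root(prob_map, tip, seed, eps=1e-6):
        """Dijkstra search from a detected root tip to the seed location."""
        h, w = prob_map.shape
        cost = -np.log(np.clip(prob_map, eps, 1.0))   # low cost on likely-root pixels
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[tip] = 0.0
        queue = [(0.0, tip)]
        while queue:
            d, (y, x) = heapq.heappop(queue)
            if (y, x) == seed:
                break
            if d > dist[y, x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connected grid
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(queue, (nd, (ny, nx)))
        # Walk back from the seed to recover the polyline for this root.
        path, node = [seed], seed
        while node != tip:
            node = prev[node]
            path.append(node)
        return path[::-1]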

RESULTS:

We develop and train a novel deep network architecture that explicitly combines local pixel information with global scene information to accurately segment small root features in high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis using the original RootNav tool, the proposed method achieved comparable accuracy with a 10-fold increase in speed. The network adapted to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still achieved good accuracy despite far fewer training images.
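To illustrate the transfer-learning step in the simplest possible terms (a hedged sketch only: a torchvision FCN-ResNet50 stands in for the RootNav 2.0 network, and the class count and hyperparameters are assumptions), one can freeze a pretrained encoder and fine-tune only the decoder head on a small set of images from the new species:

    # Hedged sketch of transfer learning for root segmentation. The real
    # RootNav 2.0 network is a custom multi-task architecture; a torchvision
    # FCN-ResNet50 is used here purely as a stand-in.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import fcn_resnet50

    NUM_CLASSES = 3   # assumed labelling: background / first-order / second-order root

    model = fcn_resnet50(weights="DEFAULT")                 # pretrained stand-in
    model.classifier[4] = nn.Conv2d(512, NUM_CLASSES, 1)    # new segmentation head

    for p in model.backbone.parameters():                   # freeze the encoder
        p.requires_grad = False

    optimiser = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    criterion = nn.CrossEntropyLoss()

    def fine_tune_step(images, masks):
        """One optimisation step on a batch from the new image domain."""
        model.train()
        optimiser.zero_grad()
        logits = model(images)["out"]                       # torchvision returns a dict
        loss = criterion(logits, masks)
        loss.backward()
        optimiser.step()
        return loss.item()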

CONCLUSIONS:

We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of training images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
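For concreteness, a minimal RSML-style document can be written with the Python standard library alone; the sketch below (a hypothetical helper covering only a small subset of the fields that the RSML specification at http://rootsystemml.github.io/ defines) records each extracted root as a polyline of image coordinates:

    # Hedged sketch: write extracted root polylines as a bare-bones RSML-style
    # XML file. Real RSML output carries considerably more metadata.
    import xml.etree.ElementTree as ET

    def write_rsml(roots, filename):
        """roots: list of polylines, each a list of (x, y) pixel coordinates."""
        rsml = ET.Element("rsml")
        ET.SubElement(rsml, "metadata")                  # left empty in this sketch
        scene = ET.SubElement(rsml, "scene")
        plant = ET.SubElement(scene, "plant")
        for i, polyline in enumerate(roots):
            root = ET.SubElement(plant, "root", id=str(i))
            geometry = ET.SubElement(root, "geometry")
            line = ET.SubElement(geometry, "polyline")
            for x, y in polyline:
                ET.SubElement(line, "point", x=str(x), y=str(y))
        ET.ElementTree(rsml).write(filename, xml_declaration=True, encoding="utf-8")

    # Example: a single primary root traced as three points.
    write_rsml([[(10.0, 5.0), (11.5, 40.0), (12.0, 80.0)]], "example_plant.rsml")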

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Image Processing, Computer-Assisted / Plant Roots / Deep Learning Language: English Journal: GigaScience Year: 2019 Document type: Article Country of affiliation: United Kingdom
