A Mobile-Based Deep Learning Model for Cassava Disease Diagnosis.
Ramcharan, Amanda; McCloskey, Peter; Baranowski, Kelsee; Mbilinyi, Neema; Mrisho, Latifa; Ndalahwa, Mathias; Legg, James; Hughes, David P.
Affiliation
  • Ramcharan A; Department of Entomology, College of Agricultural Sciences, Penn State University, State College, PA, United States.
  • McCloskey P; Department of Entomology, College of Agricultural Sciences, Penn State University, State College, PA, United States.
  • Baranowski K; Department of Entomology, College of Agricultural Sciences, Penn State University, State College, PA, United States.
  • Mbilinyi N; International Institute for Tropical Agriculture, Dar es Salaam, Tanzania.
  • Mrisho L; International Institute of Tropical Agriculture, Dar es Salaam, Tanzania.
  • Ndalahwa M; International Institute of Tropical Agriculture, Dar es Salaam, Tanzania.
  • Legg J; International Institute of Tropical Agriculture, Dar es Salaam, Tanzania.
  • Hughes DP; Department of Entomology, College of Agricultural Sciences, Penn State University, State College, PA, United States.
Front Plant Sci; 10: 272, 2019.
Article in En | MEDLINE | ID: mdl-30949185
Convolutional neural network (CNN) models have the potential to improve plant disease phenotyping, where the standard approach is visual diagnostics requiring specialized training. When a CNN is deployed on mobile devices, the model faces new challenges due to variation in lighting and orientation. If such models are to be reliably integrated into computer vision products for plant disease phenotyping, model assessment must be conducted under real-world conditions. We train a CNN object detection model to identify foliar symptoms of diseases in cassava (Manihot esculenta Crantz). We then deploy the model in a mobile app and test its performance on mobile images and video of 720 diseased leaflets in an agricultural field in Tanzania. Within each disease category we test two levels of symptom severity (mild and pronounced) to assess model performance for early detection of symptoms. For both severities we observe a decrease in performance on real-world images and video, as measured by the F-1 score. The F-1 score dropped by 32% for pronounced symptoms in real-world images (the data closest to the training data) due to a decrease in model recall. If the potential of mobile CNN models is to be realized, our data suggest it is crucial to consider tuning recall in order to achieve the desired performance in real-world settings. In addition, the varied performance across input types (image or video) is an important design consideration for real-world applications.
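The abstract attributes the 32% F-1 drop to reduced recall. As a minimal illustration (the numeric values below are hypothetical and not taken from the paper), the harmonic-mean definition of the F-1 score shows how a loss in recall alone, with precision held fixed, produces a drop of this magnitude:

```python
# Illustrative sketch (values hypothetical, not from the paper): how a drop
# in recall alone lowers the F-1 score of an object detection model.

def f1_score(precision: float, recall: float) -> float:
    """F-1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical performance on images similar to the training data.
lab_f1 = f1_score(precision=0.95, recall=0.95)

# Hypothetical field performance: precision holds, recall falls.
field_f1 = f1_score(precision=0.95, recall=0.50)

print(f"reference F-1: {lab_f1:.2f}")
print(f"field F-1:     {field_f1:.2f}")
print(f"relative drop: {(lab_f1 - field_f1) / lab_f1:.0%}")
```

Because the F-1 score is a harmonic mean, it is dominated by the weaker of the two components, which is why tuning the detector to recover recall (for example, by lowering the detection confidence threshold) can restore much of the lost performance in field conditions.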
Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Diagnostic_studies / Prognostic_studies / Screening_studies Language: En Journal: Front Plant Sci Year: 2019 Document type: Article Affiliation country: United States Country of publication: Switzerland