1.
Front Plant Sci ; 12: 787407, 2021.
Article in English | MEDLINE | ID: mdl-35111176

ABSTRACT

Community science image libraries offer a massive, but largely untapped, source of observational data for phenological research. The iNaturalist platform offers a particularly rich archive, containing more than 49 million verifiable, georeferenced, open access images, encompassing seven continents and over 278,000 species. A critical limitation preventing scientists from taking full advantage of this rich data source is labor. Each image must be manually inspected and categorized by phenophase, which is both time-intensive and costly. Consequently, researchers may only be able to use a subset of the total number of images available in the database. While iNaturalist has the potential to yield enough data for high-resolution and spatially extensive studies, it requires more efficient tools for phenological data extraction. A promising solution is automation of the image annotation process using deep learning. Recent innovations in deep learning have made these open-source tools accessible to a general research audience. However, it is unknown whether deep learning tools can accurately and efficiently annotate phenophases in community science images. Here, we train a convolutional neural network (CNN) to annotate iNaturalist images of Alliaria petiolata into distinct phenophases and compare the performance of the model with non-expert human annotators. We demonstrate that researchers can successfully employ deep learning techniques to extract phenological information from community science images. A CNN classified two-stage phenology (flowering and non-flowering) with 95.9% accuracy and classified four-stage phenology (vegetative, budding, flowering, and fruiting) with 86.4% accuracy. The overall accuracy of the CNN did not differ from that of the human annotators (p = 0.383), although performance varied across phenophases.
We found that a primary challenge of using deep learning for image annotation lay not in the model itself, but in the quality of the community science images. Up to 4% of A. petiolata images in iNaturalist were taken from an improper distance, were physically manipulated, or were digitally altered, which limited both human and machine annotators in accurately classifying phenology. Thus, we provide a list of photography guidelines that could be included in community science platforms to inform community scientists on best practices for creating images that facilitate phenological analysis.

2.
Appl Plant Sci ; 8(4): e11340, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32351801

ABSTRACT

PREMISE: We developed a novel low-cost method to visually phenotype belowground structures in the plant rhizosphere. We devised the method introduced here to address the difficulties encountered growing plants in seed germination pouches for long-term experiments and the high cost of other mini-rhizotron alternatives.

METHODS AND RESULTS: The method described here took inspiration from homemade ant farms commonly used as an educational tool in elementary schools. Using compact disc (CD) cases, we developed mini-rhizotrons for field and laboratory use with the burclover Medicago lupulina.

CONCLUSIONS: Our method combines the benefits of pots and germination pouches. In CD mini-rhizotrons, plants grew significantly larger than in germination pouches, and unlike in pots, roots can be measured without destructive sampling. Our protocol is a cheaper, widely available alternative to more destructive methods, which could facilitate the study of belowground phenotypes and processes by scientists with fewer resources.
