Results 1 - 8 of 8

1.
Ecol Evol ; 13(11): e10698, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37953985

ABSTRACT

Human-mediated hybridization between native and non-native species is causing biodiversity loss worldwide. Hybridization has contributed to the extinction of many species through direct and indirect processes such as loss of reproductive opportunity and genetic introgression. Managing hybrids is therefore essential for conserving biodiversity. However, identifying hybrids from visual characteristics requires specialized knowledge when the two species have similar features. Although image recognition technology can be a powerful tool for identifying hybrids, studies have yet to utilize deep learning approaches. Hence, this study aimed to identify hybrids between the native Japanese giant salamander (Andrias japonicus) and the non-native Chinese giant salamander (Andrias cf. davidianus) using EfficientNetV2 and smartphone images. We used smartphone images of 11 individuals of native A. japonicus (five training and six test images) and 20 individuals of hybrids between A. japonicus and A. cf. davidianus (five training and 15 test images). In our experimental environment, an AI model constructed with EfficientNetV2 achieved 100% accuracy in identifying hybrids. In addition, gradient-weighted class activation mapping (Grad-CAM) revealed that the model classified A. japonicus and the hybrids on the basis of the dorsal head spot patterning. Our approach thus enables non-experts to distinguish hybrids from A. japonicus, a task previously considered difficult. Furthermore, because reliable identification was achieved from smartphone images, the approach could be applied to a wide range of citizen science projects.
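The abstract names EfficientNetV2 but not the variant, input size, or training configuration, so the sketch below is one plausible Keras transfer-learning setup for the native-vs-hybrid binary task; the EfficientNetV2S variant, image size, and directory layout data/train/{japonicus,hybrid}/ are all assumptions, not details from the paper.

```python
# A minimal transfer-learning sketch, assuming a Keras EfficientNetV2S
# backbone; variant, resolution, and directory layout are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (384, 384)  # assumed input resolution

# Hypothetical layout: data/train/{japonicus,hybrid}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=8)

base = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # start from the frozen ImageNet backbone

model = tf.keras.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    base,
    layers.Dense(1, activation="sigmoid"),  # native A. japonicus vs. hybrid
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```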

2.
Sci Rep ; 13(1): 16212, 2023 09 27.
Article in English | MEDLINE | ID: mdl-37758778

ABSTRACT

Information obtained through individual identification is invaluable for ecology and conservation. Physical tags such as PIT tags and GPS units have been used for individual identification; however, these methods can affect animal behavior and survival, and the tags may be lost. Non-invasive methods that do not affect the target species, such as manual photo-identification, rely on stripes and spots unique to each individual; they require training, and applying them to large datasets is challenging. Many studies have applied deep learning to species-level identification, but few have addressed individual-level identification. In this study, we developed an image-based identification method based on deep learning that uses the head spot pattern of the Japanese giant salamander (Andrias japonicus), an endemic and endangered species in Japan. We trained and evaluated the method on a dataset of 7,075 smartphone images collected over two days from 11 captive individuals. Individuals were photographed three times a day, at approximately 11:00 (morning), 15:00 (afternoon), and 18:00 (evening). Our method, which used EfficientNetV2, achieved 99.86% accuracy, a kappa coefficient of 0.99, and an F1 score of 0.99. Performance was lower for the evening model than for the morning and afternoon models, each of which was trained and evaluated on photographs taken at the corresponding time of day. The proposed method requires no direct contact with the target species, so its effect on the animals is minimal; moreover, individual-level information can be obtained under natural conditions. In the future, smartphone images could support citizen science surveys and individual-level big-data collection, which are difficult with current methods.
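The three reported metrics (accuracy, Cohen's kappa, F1) can be computed from any classifier's predictions with scikit-learn; this is a generic sketch with placeholder label arrays, not the paper's data.

```python
# Computing accuracy, Cohen's kappa, and macro F1 with scikit-learn;
# y_true and y_pred are illustrative placeholders for an 11-class task.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 1])  # 11 individuals
y_pred = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 2])

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```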


Subject(s)
Animal Identification Systems , Deep Learning , Smartphone , Urodela , Animals
3.
Breed Sci ; 72(1): 96-106, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36045894

ABSTRACT

Monitoring and detection of invasive alien plant species are necessary for effective management and control measures. Although efforts have been made to detect alien trees using satellite images, detecting alien herbaceous species has been difficult. In this study, we examined the possibility of detecting non-native plants by applying deep learning to images captured by two action cameras. We created a model for each camera using the chopped picture method. The models detected the alien plant Solidago altissima (tall goldenrod) with an average accuracy of 89%. This study demonstrated that exotic plants can be detected automatically from inexpensive action-camera footage using deep learning, suggesting that citizen science could in the future enable low-cost, wide-area distribution surveys of alien plants.
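The chopped picture method cuts each frame into small patches that a CNN classifies one by one. The abstract does not give the protocol's exact parameters, so the tile size, stride, and file name in this sketch are illustrative assumptions.

```python
# A hedged sketch of the tiling step of the chopped picture method;
# tile size and stride are assumptions, not the paper's settings.
from PIL import Image

def chop_picture(path, tile=56, stride=28):
    """Yield (x, y, patch) tiles cut from the source image."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            yield x, y, img.crop((x, y, x + tile, y + tile))

# Each patch would then be classified (S. altissima vs. background) and
# the patch-level predictions aggregated into a per-frame detection.
patches = list(chop_picture("frame_0001.jpg"))  # hypothetical frame
```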

4.
Breed Sci ; 72(1): 107-114, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36045898

ABSTRACT

The importance of greenery in urban areas has traditionally been discussed from ecological and esthetic perspectives, as well as in public health and the social sciences. Recent advances in empirical studies have been enabled by combining streetscape 'big data' with automated image recognition. However, existing automated image recognition methods for urban greenery suffer from problems such as confusing green artificial objects with vegetation and the excessive cost of model training. To ameliorate these drawbacks, this study proposes a patch-based semantic segmentation method for determining the green view index of urban areas using Google Street View imagery and the 'chopped picture method'. We expect our method to expand the scope of urban greenery studies in various fields.
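One way to read the patch-based idea: the green view index (GVI) of a streetscape image becomes the fraction of its patches classified as vegetation. The sketch below follows that reading under stated assumptions; `classify_patch` is a hypothetical stand-in for the trained patch classifier, and the Street View download step is omitted.

```python
# A hedged sketch of a patch-based green view index; classify_patch is a
# hypothetical callable returning 1 for vegetation and 0 otherwise.
import numpy as np
from PIL import Image

def green_view_index(path, classify_patch, tile=56):
    """Estimate GVI as the share of tiles classified as vegetation."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    labels = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = np.asarray(img.crop((x, y, x + tile, y + tile)))
            labels.append(classify_patch(patch))  # 1 = vegetation, 0 = other
    return float(np.mean(labels))
```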

5.
Sci Rep ; 11(1): 903, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33441689

ABSTRACT

The identification and mapping of trees from remotely sensed data for application in forest management is an active area of research. Previously proposed methods using airborne and hyperspectral sensors can identify tree species with high accuracy but are costly and thus unsuitable for small-scale forest managers. In this work, we constructed a machine vision system for tree identification and mapping using Red-Green-Blue (RGB) images taken by an unmanned aerial vehicle (UAV) and a convolutional neural network (CNN). In this system, we first calculated a slope model from the three-dimensional model obtained by the UAV, then automatically segmented the UAV RGB photograph of the forest into individual tree-crown objects using colour, three-dimensional information, and the slope model, and finally applied object-based CNN classification to each crown image. This system succeeded in classifying seven tree classes, including several tree species, with more than 90% accuracy. Guided gradient-weighted class activation mapping (Guided Grad-CAM) showed that the CNN classified trees according to their shapes and leaf contrasts, which enhances the potential of the system for classifying individual trees with similar colours in a cost-effective manner, a useful feature for forest management.
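A minimal sketch of the final, object-based step: each crown object produced by the colour/3-D/slope segmentation is cropped from the orthophoto, resized, and classified by a CNN. The model file, class names, and bounding-box representation below are all assumptions; the segmentation itself is not shown.

```python
# Object-based classification of pre-segmented crowns; the model file and
# class names are hypothetical placeholders, not the paper's artifacts.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("crown_classifier.keras")  # hypothetical
CLASSES = [f"tree_class_{i}" for i in range(7)]  # placeholder class names

def classify_crowns(ortho_path, boxes, size=(224, 224)):
    """Classify each crown box (x0, y0, x1, y1) cropped from the orthophoto."""
    ortho = Image.open(ortho_path).convert("RGB")
    results = []
    for box in boxes:
        crop = np.asarray(ortho.crop(box).resize(size), dtype=np.float32)
        probs = model.predict(crop[None, ...], verbose=0)[0]
        results.append(CLASSES[int(np.argmax(probs))])
    return results
```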


Subject(s)
Image Processing, Computer-Assisted/methods , Remote Sensing Technology/methods , Trees/classification , Agriculture/methods , Conservation of Natural Resources/methods , Deep Learning , Forests , Neural Networks, Computer
6.
BMC Ecol ; 20(1): 65, 2020 11 27.
Article in English | MEDLINE | ID: mdl-33246473

ABSTRACT

BACKGROUND: Classifying and mapping vegetation are crucial tasks in environmental science and natural resource management. However, these tasks are difficult because conventional methods such as field surveys are highly labor-intensive. Identifying target objects in visual data with computer techniques is one of the most promising ways to reduce the costs and labor of vegetation mapping. Although deep learning and convolutional neural networks (CNNs) have recently become a new solution for image recognition and classification, detecting ambiguous objects such as vegetation remains difficult. In this study, we investigated the effectiveness of the chopped picture method, a recently described protocol for CNNs, and evaluated the efficiency of CNNs for plant community detection from Google Earth images. RESULTS: We selected bamboo forests as the target and obtained Google Earth images from three regions in Japan. The best trained CNN model correctly detected over 90% of the targets. Our results showed that the identification accuracy of the CNN is higher than that of conventional machine learning methods. CONCLUSIONS: Our results demonstrate that CNNs and the chopped picture method are potentially powerful tools for high-accuracy automated detection and mapping of vegetation.
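Mapping requires turning tile-level predictions back into a spatial layer. A hedged sketch of that aggregation step: each tile's bamboo probability is written into a grid aligned with the source Google Earth image. `predict_tile` is a hypothetical stand-in for the trained classifier, and the tile size is illustrative.

```python
# Aggregating tile-level CNN outputs into a detection map; predict_tile
# is a hypothetical callable returning a bamboo probability per tile.
import numpy as np
from PIL import Image

def detection_map(path, predict_tile, tile=56):
    """Return a grid of bamboo probabilities, one cell per tile."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    grid = np.zeros((h // tile, w // tile), dtype=np.float32)
    for gy in range(h // tile):
        for gx in range(w // tile):
            patch = img.crop((gx * tile, gy * tile,
                              (gx + 1) * tile, (gy + 1) * tile))
            grid[gy, gx] = predict_tile(np.asarray(patch))
    return grid  # threshold (e.g. > 0.5) for the final bamboo mask
```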


Subject(s)
Machine Learning , Neural Networks, Computer , Forests , Japan
7.
Plant Cell Physiol ; 61(11): 1967-1973, 2020 Dec 23.
Article in English | MEDLINE | ID: mdl-32845307

ABSTRACT

Recent rapid progress in deep neural network techniques has allowed recognition and classification of various objects, often exceeding the performance of the human eye. In plant biology and crop science, deep neural network frameworks have been applied mainly for effective and rapid phenotyping. In this study, going beyond simple optimization of phenotyping, we propose applying deep neural networks to image-based diagnosis of internal disorders that is difficult even for experts, and visualizing the reasons behind each diagnosis to provide biological interpretations. As an example, we classified calyx-end cracking in persimmon fruit using five convolutional neural network models with different layer structures and examined analytical options that affect diagnostic quality. With 3,173 visible RGB images taken from the fruit apex side, the networks performed binary classification of each degree of disorder with up to 90% accuracy. Furthermore, feature visualization methods such as Grad-CAM and layer-wise relevance propagation (LRP) highlighted the image regions that contributed to each diagnosis, suggesting that specific patterns of color unevenness, such as in the fruit's peripheral area, can serve as indexes of calyx-end cracking. These results not only provide novel insights into indexes of fruit internal disorders but also demonstrate the potential applicability of deep neural networks in plant biology.
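For reference, a minimal Grad-CAM sketch of the kind of feature visualization described (LRP is not shown). It assumes a Keras model whose last convolutional layer is known by name; the model and layer name are placeholders, not the paper's networks.

```python
# A generic Grad-CAM implementation for a Keras classifier; the gradient
# of the top class score w.r.t. the last conv feature maps weights each
# channel, and the weighted sum gives a coarse localization heatmap.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Return a normalized heatmap of regions driving the top prediction."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        top = tf.gather(preds, tf.argmax(preds[0]), axis=1)
    grads = tape.gradient(top, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pooled grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                      # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```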


Subject(s)
Deep Learning , Diospyros , Fruit , Plant Diseases , Diospyros/anatomy & histology , Flowers/anatomy & histology , Fruit/anatomy & histology , Image Interpretation, Computer-Assisted , Neural Networks, Computer
8.
Front Robot AI ; 6: 32, 2019.
Article in English | MEDLINE | ID: mdl-33501048

ABSTRACT

Climate change is undoubtedly one of the biggest problems of the 21st century. Currently, however, most research efforts on climate forecasting are based on mechanistic, bottom-up approaches such as physics-based general circulation models and earth system models. In this study, we explore the performance of a phenomenological, top-down model constructed using a neural network and big data of global mean monthly temperature. By generating graphical images from 30 years of monthly temperature data, the neural network system successfully predicts the rise or fall of temperature over the following 10 years. Using LeNet as the convolutional neural network, the best global model achieved an accuracy of 97.0%, and accuracy increased when more training images were used. We also found that the color scheme of the graphical images affects the performance of the model, and that prediction accuracy differs among climatic zones and temporal ranges. This study illustrates that the top-down approach performs notably well compared with the conventional bottom-up approach for decadal-scale forecasting. We suggest using artificial intelligence-based forecasting methods alongside conventional physics-based models, because the two approaches can work together in a complementary manner.
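One way to realize the pipeline the abstract describes: render a 30-year temperature window as a small plot image and feed it to a LeNet-style CNN that classifies whether the following decade's mean rises or falls. Figure styling, image size, and network details below are assumptions, not the paper's configuration.

```python
# A hedged sketch: time series -> plot image -> LeNet-style binary CNN.
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
from tensorflow.keras import layers

def series_to_image(series, size=(32, 32)):
    """Render a temperature series as a small grayscale plot image."""
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.plot(series, color="black")
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    img = Image.open(buf).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

# Classic LeNet-style layout for the binary rise (1) vs. fall (0) label.
model = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(6, 5, activation="tanh"),
    layers.AveragePooling2D(),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```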
