1.
J Dermatolog Treat ; 35(1): 2337908, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38616301

ABSTRACT

Background: Scalp-related symptoms such as dandruff and itching are common and have diverse underlying etiologies. We previously proposed a novel classification and scoring system for scalp conditions, called the scalp photographic index (SPI); it grades five scalp features using trichoscopic images with good reliability, but it requires trained evaluators. Aim: To develop artificial intelligence (AI) algorithms for assessment of scalp conditions and to assess the feasibility of AI-based recommendations for personalized scalp cosmetics. Methods: Using EfficientNet, convolutional neural network (CNN) models (SPI-AI) were established for each scalp feature. A total of 101,027 magnified scalp images graded according to the SPI scoring were used for training, validating, and testing the models. Adults with scalp discomfort were prescribed shampoos and scalp serums personalized according to their SPI-AI-defined scalp types. Using the SPI, scalp conditions were evaluated at baseline and at weeks 4, 8, and 12 of treatment. Results: The accuracies of SPI-AI for dryness, oiliness, erythema, folliculitis, and dandruff were 91.3%, 90.5%, 89.6%, 87.3%, and 95.2%, respectively. Overall, 100 individuals completed the 4-week study; 43 of these participated in an extension study until week 12. The total SPI score decreased from 32.70 ± 7.40 at baseline to 15.97 ± 4.68 at week 4 (p < 0.001). The efficacy was maintained throughout the 12 weeks. Conclusions: SPI-AI accurately assessed scalp condition. AI-based prescription of tailored scalp cosmetics could significantly improve scalp health.
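
A minimal sketch (not the authors' released code) of the per-feature grading setup the abstract describes: one EfficientNet classifier per scalp feature, each predicting a severity grade. The number of grade levels (4) is an assumption for illustration.

```python
import torch
from torchvision import models

FEATURES = ["dryness", "oiliness", "erythema", "folliculitis", "dandruff"]  # from the abstract
NUM_GRADES = 4  # assumed number of severity grades per feature


def build_spi_models(num_grades: int = NUM_GRADES) -> dict[str, torch.nn.Module]:
    """One EfficientNet-B0 grader per scalp feature."""
    return {f: models.efficientnet_b0(weights=None, num_classes=num_grades) for f in FEATURES}


@torch.no_grad()
def grade_scalp_image(models_by_feature: dict[str, torch.nn.Module],
                      image: torch.Tensor) -> dict[str, int]:
    """image: (1, 3, H, W) normalized tensor; returns a predicted grade per feature."""
    grades = {}
    for name, model in models_by_feature.items():
        model.eval()
        logits = model(image)
        grades[name] = int(logits.argmax(dim=1).item())
    return grades
```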


Subject(s)
Cosmetics, Dandruff, Adult, Humans, Artificial Intelligence, Scalp, Reproducibility of Results, Cosmetics/therapeutic use, Prescriptions
2.
Front Plant Sci ; 14: 1238722, 2023.
Article in English | MEDLINE | ID: mdl-37941667

ABSTRACT

Previous work on plant disease detection has demonstrated that object detectors generally suffer from degraded training data, and noisy annotations may cause the training task to fail. Well-annotated datasets are therefore crucial for building a robust detector. However, a good label set generally requires considerable expert knowledge and meticulous work, which is expensive and time-consuming. This paper aims to learn robust feature representations from inaccurate bounding boxes, thereby reducing the model's requirements for annotation quality. Specifically, we analyze the distribution of noisy annotations in the real world. A teacher-student learning paradigm is proposed to correct inaccurate bounding boxes: the teacher model rectifies the degraded bounding boxes, and the student model extracts more robust feature representations from the corrected boxes. Furthermore, the method can be easily generalized to semi-supervised learning paradigms and auto-labeling techniques. Experimental results show that applying our method to the Faster-RCNN detector achieves a 26% performance improvement on the noisy dataset. Moreover, our method achieves approximately 75% of the performance of a fully supervised object detector when only 1% of the labels are available. Overall, this work provides a robust solution to real-world location noise. It alleviates the challenges posed by noisy data to precision agriculture, optimizes data labeling technology, and encourages practitioners to further investigate plant disease detection and intelligent agriculture at a lower cost. The code will be released at https://github.com/JiuqingDong/TS_OAMIL-for-Plant-disease-detection.
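
A hedged sketch of the box-rectification idea described above: a teacher detector's confident predictions replace the noisy ground-truth boxes they overlap with, and the teacher tracks the student via an EMA update. The IoU threshold and EMA momentum are illustrative assumptions, not the paper's exact rule.

```python
import torch
from torchvision.ops import box_iou


def rectify_boxes(noisy_gt: torch.Tensor, teacher_pred: torch.Tensor,
                  iou_thresh: float = 0.4) -> torch.Tensor:
    """noisy_gt: (N, 4), teacher_pred: (M, 4), both in xyxy format.
    Each noisy box is replaced by the best-overlapping teacher box, if any."""
    if teacher_pred.numel() == 0:
        return noisy_gt
    ious = box_iou(noisy_gt, teacher_pred)          # (N, M)
    best_iou, best_idx = ious.max(dim=1)
    corrected = noisy_gt.clone()
    keep = best_iou >= iou_thresh
    corrected[keep] = teacher_pred[best_idx[keep]]
    return corrected


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Common teacher-student coupling: teacher weights track the student via EMA."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```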

3.
Animals (Basel) ; 13(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38003205

ABSTRACT

Accurate identification of individual cattle is of paramount importance in precision livestock farming, enabling the monitoring of cattle behavior, disease prevention, and enhanced animal welfare. Unlike human faces, the faces of most Hanwoo cattle, a native breed of Korea, exhibit significant similarities and share the same body color, posing a substantial challenge in accurately distinguishing between individuals. In this study, we sought to extend the closed-set scope (identifying only known individuals) to a more adaptable open-set recognition scenario (identifying both known and unknown individuals), termed Cattle's Face Open-Set Recognition (CFOSR). The proposed method integrates open-set techniques to enhance closed-set accuracy while simultaneously addressing the open-set scenario. In CFOSR, the objective is to develop a trained model capable of accurately identifying known individuals while effectively handling unknown or novel individuals, even when the model has been trained solely on known individuals. To address this challenge, we propose a novel approach that integrates Adversarial Reciprocal Points Learning (ARPL), a state-of-the-art open-set recognition method, with the effectiveness of the Additive Margin Softmax loss (AM-Softmax). ARPL is leveraged to mitigate the overlap between the feature spaces of known and unknown (unregistered) cattle, while AM-Softmax is chosen over the conventional cross-entropy loss (CE) to classify known individuals. Empirical results on a real-world dataset demonstrate the effectiveness of the ARPL and AM-Softmax techniques in achieving both intra-class compactness and inter-class separability. Notably, the open-set and closed-set recognition results validate the superior performance of our proposed method compared to existing algorithms: it achieves an AUROC of 91.84 and an OSCR of 87.85 for open-set recognition on a complex dataset, while reaching an accuracy of 94.46 for closed-set recognition. We believe that our study provides a novel perspective for improving closed-set classification accuracy and, at the same time, holds the potential to significantly contribute to herd monitoring and inventory management, especially in scenarios involving unknown or novel cattle.
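
A minimal sketch of Additive Margin Softmax (AM-Softmax), the classification loss the abstract pairs with ARPL. The scale s and margin m are typical defaults, not values reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AMSoftmaxLoss(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_normal_(self.weight)

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and class weights.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        margin = torch.zeros_like(cosine)
        margin.scatter_(1, labels.unsqueeze(1), self.m)  # subtract m only at the true class
        return F.cross_entropy(self.s * (cosine - margin), labels)
```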

4.
Front Plant Sci ; 14: 1243822, 2023.
Article in English | MEDLINE | ID: mdl-37849839

ABSTRACT

Plant disease detection has made significant strides thanks to the emergence of deep learning. However, existing methods have been limited to closed-set and static learning settings, where models are trained on a specific dataset. This confinement restricts a model's adaptability when it encounters samples from unseen disease categories. In addition, static learning settings suffer from knowledge degradation, as acquiring new knowledge tends to overwrite the old when learning new categories. To overcome these limitations, this study introduces a novel paradigm for plant disease detection called the open-world setting. Our approach can infer disease categories that were never seen during the model training phase and gradually learn these unseen diseases through dynamic knowledge updates in the next training phase. Specifically, we utilize a well-trained unknown-aware region proposal network to generate pseudo-labels for unknown diseases during training and employ a class-agnostic classifier to enhance the recall rate for unknown diseases. In addition, we employ a sample replay strategy to maintain recognition ability for previously learned classes. Extensive experimental evaluation and ablation studies investigate the efficacy of our method in detecting old and unknown classes. Remarkably, our method demonstrates robust generalization ability even in cross-species disease detection experiments. Overall, this open-world, dynamically updated detection method shows promising potential to become the future paradigm for plant disease detection. We discuss open issues, including classification and localization, and propose promising approaches to address them. We encourage further research in the community to tackle the crucial challenges of open-world plant disease detection. The code will be released at https://github.com/JiuqingDong/OWPDD.
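
A hedged sketch of two ingredients the abstract names: tagging unmatched, high-objectness proposals as "unknown" pseudo-labels, and a small replay buffer of earlier-task samples to mitigate forgetting. Thresholds, buffer sizing, and the unknown-class id are assumptions for illustration.

```python
import random

import torch
from torchvision.ops import box_iou

UNKNOWN_CLASS = -1  # placeholder id for the unknown category


def pseudo_label_unknowns(proposals: torch.Tensor, objectness: torch.Tensor,
                          gt_boxes: torch.Tensor, obj_thresh: float = 0.8,
                          iou_thresh: float = 0.3) -> torch.Tensor:
    """Proposals with high objectness that overlap no ground-truth box are treated as
    unknown-class pseudo-labels. proposals: (P, 4), objectness: (P,), gt_boxes: (G, 4)."""
    if gt_boxes.numel():
        max_iou = box_iou(proposals, gt_boxes).max(dim=1).values
    else:
        max_iou = torch.zeros(proposals.size(0))
    mask = (objectness >= obj_thresh) & (max_iou < iou_thresh)
    return proposals[mask]


class ReplayBuffer:
    """Keeps a bounded, roughly uniform sample of earlier-task training examples."""

    def __init__(self, capacity: int = 200):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, sample):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(sample)
        else:  # reservoir sampling keeps a uniform subset of everything seen so far
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = sample

    def sample(self, k: int):
        return random.sample(self.items, min(k, len(self.items)))
```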

5.
Front Plant Sci ; 14: 1225409, 2023.
Article in English | MEDLINE | ID: mdl-37810377

ABSTRACT

Recent advancements in deep learning have brought significant improvements to plant disease recognition. However, achieving satisfactory performance often requires high-quality training datasets, which are challenging and expensive to collect. Consequently, the practical application of current deep learning-based methods in real-world scenarios is hindered by the scarcity of high-quality datasets. In this paper, we argue that embracing poor datasets is viable and aim to explicitly define the challenges associated with using them. To delve into this topic, we analyze the characteristics of high-quality datasets, namely large-scale images and desired annotations, and contrast them with the limited and imperfect nature of poor datasets; challenges arise when the training datasets deviate from these characteristics. To provide a comprehensive understanding, we propose a novel and informative taxonomy that categorizes these challenges, and we offer a brief overview of existing studies and approaches that address them. Our paper sheds light on the importance of embracing poor datasets, enhances the understanding of the associated challenges, and contributes to the ambitious objective of deploying deep learning in real-world applications. To facilitate progress, we finally describe several outstanding questions and point out potential future directions. Although our primary focus is on plant disease recognition, we emphasize that the principles of embracing and analyzing poor datasets apply to a wider range of domains, including agriculture. Our project is publicly available at https://github.com/xml94/EmbracingLimitedImperfectTrainingDatasets.

6.
Front Plant Sci ; 14: 1211075, 2023.
Article in English | MEDLINE | ID: mdl-37711291

ABSTRACT

Plant phenotyping is a critical field in agriculture that aims to understand crop growth under specific conditions. Recent research uses images to describe plant characteristics by detecting visual information within organs such as leaves, flowers, stems, and fruits. However, processing data from real field conditions, with challenges such as image blurring and occlusion, still requires improvement. This paper proposes a deep learning-based approach for leaf instance segmentation with a local refinement mechanism to enhance performance in cluttered backgrounds. The refinement mechanism employs Gaussian low-pass and high-boost filters to enhance target instances and can be applied to the training or testing dataset. An instance segmentation architecture generates segmented masks and detected areas, facilitating the derivation of phenotypic information such as leaf count and size. Experimental results on a tomato leaf dataset demonstrate the system's accuracy in segmenting target leaves despite complex backgrounds. An investigation of the refinement mechanism with different kernel sizes reveals that larger kernels help the system generate more leaf instances when using a high-boost filter, while prediction performance decays with larger Gaussian low-pass kernel sizes. This research addresses challenges in real greenhouse scenarios and enables automatic recognition of phenotypic data for smart agriculture. The proposed approach has the potential to enhance agricultural practices, ultimately leading to improved crop yields and productivity.
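
A small sketch of the two refinement filters the abstract mentions, applied as image preprocessing. The kernel size and boost factor are illustrative; the paper studies how kernel size affects the results.

```python
import cv2
import numpy as np


def gaussian_lowpass(img: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Suppress high-frequency background clutter with a Gaussian blur."""
    return cv2.GaussianBlur(img, (ksize, ksize), 0)


def high_boost(img: np.ndarray, ksize: int = 5, boost: float = 1.5) -> np.ndarray:
    """High-boost filtering: original + boost * (original - low-pass),
    which sharpens leaf edges against a cluttered background."""
    img_f = img.astype(np.float32)
    blurred = cv2.GaussianBlur(img_f, (ksize, ksize), 0)
    sharpened = img_f + boost * (img_f - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```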

7.
Animals (Basel) ; 13(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37370530

ABSTRACT

Cattle behavior recognition is essential for monitoring cattle health and welfare. Existing techniques for behavior recognition in closed barns typically rely on direct observation to detect changes using wearable devices or surveillance cameras. While promising progress has been made in this field, monitoring individual cattle, especially those with similar visual characteristics, remains challenging due to factors such as occlusion, scale variations, and pose changes. Accurate and consistent individual identification over time is therefore essential to overcome these challenges. To address this issue, this paper introduces an approach for multiview monitoring of individual cattle behavior based on action recognition from video data. The proposed system takes an image sequence as input and utilizes a detector to identify hierarchical actions, categorized as part and individual actions. These regions of interest are then fed into a tracking and identification mechanism, enabling the system to continuously track each individual in the scene and assign it a unique identification number. With this approach, cattle behavior is continuously monitored, and statistical analysis is conducted to assess behavioral changes over time. The effectiveness of the proposed framework is demonstrated through quantitative and qualitative experimental results obtained from our Hanwoo cattle video database. Overall, this study tackles the challenges encountered in real indoor farm scenarios, capturing spatiotemporal information and enabling automatic recognition of cattle behavior for precision livestock farming.
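
A hedged sketch of the tracking-and-identification step only: detections in each frame are greedily matched to existing tracks by IoU, and unmatched detections open new tracks with fresh identification numbers. The real system is multiview and far more elaborate; this merely illustrates persistent ID assignment.

```python
import torch
from torchvision.ops import box_iou


class GreedyIoUTracker:
    def __init__(self, iou_thresh: float = 0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}      # track id -> last seen box, shape (4,)
        self.next_id = 0

    def update(self, boxes: torch.Tensor) -> list[int]:
        """boxes: (N, 4) xyxy detections of the current frame; returns one ID per box."""
        ids = [-1] * len(boxes)
        if self.tracks and len(boxes):
            track_ids = list(self.tracks)
            prev = torch.stack([self.tracks[t] for t in track_ids])
            ious = box_iou(boxes, prev)                              # (N, T)
            # Match detections in order of their best overlap, one track per detection.
            for i in ious.max(dim=1).values.argsort(descending=True).tolist():
                j = int(ious[i].argmax())
                if ious[i, j] >= self.iou_thresh:
                    ids[i] = track_ids[j]
                    ious[:, j] = -1                                  # a track is used at most once
        for i, box in enumerate(boxes):
            if ids[i] == -1:                                         # unmatched -> new identity
                ids[i] = self.next_id
                self.next_id += 1
            self.tracks[ids[i]] = box
        return ids
```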

9.
Front Plant Sci ; 13: 1010981, 2022.
Article in English | MEDLINE | ID: mdl-36507376

ABSTRACT

Deep learning has improved significantly in recent years at recognizing plant diseases from their images. To achieve decent performance, current deep learning models tend to require a large-scale dataset, yet collecting such a dataset is expensive and time-consuming. Hence, limited data is one of the main obstacles to reaching the desired recognition accuracy. Although transfer learning is heavily discussed and verified as an effective and efficient way to mitigate this challenge, most proposed methods focus on only one or two specific datasets. In this paper, we propose a novel transfer learning strategy that achieves high performance for versatile plant disease recognition across multiple plant disease datasets. Our transfer learning strategy differs from the current popular one in the following respects. First, PlantCLEF2022, a large-scale plant-related dataset with 2,885,052 images and 80,000 classes, is utilized to pre-train a model. Second, we adopt a vision transformer (ViT) model instead of a convolutional neural network. Third, the ViT model undergoes transfer learning twice to save computation. Fourth, the model is first pre-trained on ImageNet with a self-supervised loss function and then on PlantCLEF2022 with a supervised loss function. We apply our method to 12 plant disease datasets, and the experimental results suggest that it surpasses the popular strategy by a clear margin across different dataset settings. Specifically, our proposed method achieves a mean testing accuracy of 86.29% over the 12 datasets in the 20-shot case, 12.76 points higher than the current state-of-the-art method's accuracy of 73.53%. Furthermore, our method outperforms other methods on one plant growth stage prediction dataset and one weed recognition dataset. To encourage the community and related applications, we have made our code and pre-trained model public.
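
A minimal sketch of the final fine-tuning stage of such a strategy: start from a ViT already pre-trained on ImageNet (self-supervised) and then on PlantCLEF2022 (supervised), and fine-tune it on a small plant disease dataset. The model name and checkpoint path are placeholders, not the authors' released files.

```python
import timm
import torch


def build_finetune_model(num_disease_classes: int,
                         plantclef_ckpt: str = "plantclef2022_vit.pth"):  # hypothetical path
    model = timm.create_model("vit_base_patch16_224", pretrained=False,
                              num_classes=num_disease_classes)
    state = torch.load(plantclef_ckpt, map_location="cpu")
    # Drop the PlantCLEF classification head (80,000 classes) and keep the backbone weights.
    state = {k: v for k, v in state.items() if not k.startswith("head.")}
    model.load_state_dict(state, strict=False)
    return model

# A few-shot fine-tuning loop would then train the new head (and optionally the backbone)
# with a small learning rate, e.g. torch.optim.AdamW(model.parameters(), lr=1e-4),
# over the 20-shot training split.
```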

10.
Front Plant Sci ; 13: 989304, 2022.
Article in English | MEDLINE | ID: mdl-36172552

ABSTRACT

Predicting plant growth is a fundamental challenge whose solution can be used to analyze plants and to inform decisions aimed at healthy plants with high yields. Deep learning has recently shown its potential to address this challenge; however, two issues remain. First, image-based plant growth prediction is currently approached either from a time-series or an image-generation viewpoint, which yields a flexible learning framework and clear predictions, respectively. Second, deep learning-based algorithms notoriously require a large-scale dataset to obtain competitive performance, but collecting enough data is time-consuming and expensive. To address these issues, we consider plant growth prediction from both viewpoints and introduce two new time-series data augmentation algorithms. More specifically, we propose a new framework with a length-changeable time-series processing unit to generate images flexibly, and a generative adversarial loss is used to optimize the model for high-quality images. Furthermore, we identify three key points for time-series data augmentation and then put forward T-Mixup and T-Copy-Paste. T-Mixup fuses images from different time points pixel-wise, while T-Copy-Paste creates new time-series images with a different background by reusing individual leaves extracted from the existing dataset. We evaluate our method on a public dataset and achieve superior results: the generated RGB images and instance masks secure average PSNRs of 27.53 and 27.62, respectively, compared with the previous best of 26.55 and 26.92.
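
A hedged sketch of the two augmentations as described: T-Mixup fuses two frames of the same plant taken at different times pixel-wise, and T-Copy-Paste pastes a segmented leaf onto a new background so its instance annotation can be reused. The Beta-distributed mixing ratio is a common mixup convention assumed here, and the alignment/scaling details of the paper are omitted.

```python
import numpy as np


def t_mixup(img_t1: np.ndarray, img_t2: np.ndarray, alpha: float = 0.4):
    """img_t1, img_t2: float arrays in [0, 1] of identical shape, from times t1 < t2.
    Returns the mixed image and the mixing weight (interpretable as a pseudo-time)."""
    lam = np.random.beta(alpha, alpha)
    mixed = lam * img_t1 + (1.0 - lam) * img_t2
    return mixed, lam


def t_copy_paste(background: np.ndarray, leaf: np.ndarray, leaf_mask: np.ndarray) -> np.ndarray:
    """Paste a segmented leaf (where leaf_mask > 0) onto a different background image,
    producing a new sample whose instance mask is simply leaf_mask."""
    out = background.copy()
    out[leaf_mask > 0] = leaf[leaf_mask > 0]
    return out
```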

12.
Front Plant Sci ; 13: 1037655, 2022.
Article in English | MEDLINE | ID: mdl-37082512

ABSTRACT

Object detection models have become the tool of choice for plant disease detection in precision agriculture. Most existing research has improved performance by refining networks and optimizing the loss function. However, because annotation quality strongly influences results and annotation is costly, the data-centric part of a project also needs more investigation; the relationship between data annotation strategies, annotation quality, and model performance deserves further consideration. In this paper, a systematic study of four annotation strategies for plant disease detection is presented: local, semi-global, global, and symptom-adaptive annotation. Labels produced with different annotation strategies lead to markedly different model performance. An interpretability study of the annotation strategies is conducted using class activation maps. In addition, we define five types of inconsistency in the annotation process and investigate how severely inconsistent labels affect model performance. Finally, we discuss the problem of label inconsistency during data augmentation. Overall, this data-centric quantitative analysis helps us understand the significance of annotation strategies, giving practitioners a way to obtain higher performance and reduce annotation costs in plant disease detection. Our work encourages researchers to pay more attention to annotation consistency and to the essential issues of annotation strategy. The code will be released at: https://github.com/JiuqingDong/PlantDiseaseDetection_Yolov5.
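
An illustrative check, not taken from the paper, for one kind of label inconsistency: the fraction of boxes in two annotation passes of the same image that have no counterpart above a given IoU, which can flag images worth re-annotating. The threshold is an assumption.

```python
import torch
from torchvision.ops import box_iou


def annotation_disagreement(boxes_a: torch.Tensor, boxes_b: torch.Tensor,
                            iou_thresh: float = 0.5) -> float:
    """boxes_a: (N, 4), boxes_b: (M, 4) xyxy boxes from two annotators of one image.
    Returns the average fraction of boxes without a counterpart above the IoU threshold."""
    if boxes_a.numel() == 0 and boxes_b.numel() == 0:
        return 0.0
    if boxes_a.numel() == 0 or boxes_b.numel() == 0:
        return 1.0
    ious = box_iou(boxes_a, boxes_b)
    unmatched_a = (ious.max(dim=1).values < iou_thresh).float().mean()
    unmatched_b = (ious.max(dim=0).values < iou_thresh).float().mean()
    return float((unmatched_a + unmatched_b) / 2)
```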

13.
Front Plant Sci ; 12: 758027, 2021.
Article in English | MEDLINE | ID: mdl-34956261

ABSTRACT

Recent advances in automatic recognition systems based on deep learning have shown the potential to provide environmentally friendly plant disease monitoring. These systems can reliably distinguish plant anomalies under varying environmental conditions as the basis for plant intervention, using methods such as classification or detection. However, they often show performance decay when applied to new field conditions and unseen data. Therefore, in this article, we apply the concept of open-set domain adaptation to the task of plant disease recognition, allowing existing systems to operate in new environments with unseen conditions and farms. Our system treats diagnosis as an open-set learning problem and mainly operates in the target domain by exploiting a precise estimation of unknown data while maintaining performance on the known classes. The main framework consists of two deep learning modules that perform bounding-box detection and open-set self- and cross-domain adaptation. The detector is built on our previous filter bank architecture for plant disease recognition and enforces domain adaptation from the source to the target domain by constraining data to be classified as one of the target classes or labeled as unknown otherwise. We perform an extensive evaluation on our tomato plant diseases dataset with three different domain farms, which indicates that our approach copes efficiently with the changing conditions of new field environments during field testing and shows consistent gains from explicit modeling of unseen data.
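
A minimal sketch of the "classify as a target class or label as unknown" constraint: detections whose best class probability falls below a threshold are relabeled as unknown. The actual system learns this boundary during adaptation; the fixed threshold here is only for illustration.

```python
import torch
import torch.nn.functional as F

UNKNOWN = "unknown"


def assign_open_set_labels(class_logits: torch.Tensor, class_names: list[str],
                           conf_thresh: float = 0.7) -> list[str]:
    """class_logits: (N, C) per-detection logits over the known (source) classes.
    Low-confidence detections are labeled as unknown instead of being forced into a class."""
    probs = F.softmax(class_logits, dim=1)
    conf, idx = probs.max(dim=1)
    return [class_names[i] if c >= conf_thresh else UNKNOWN
            for c, i in zip(conf.tolist(), idx.tolist())]
```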

14.
Front Plant Sci ; 12: 773142, 2021.
Article in English | MEDLINE | ID: mdl-35197989

ABSTRACT

Deep learning has shown its advantages and potential in plant disease recognition and has undergone profound development in recent years. Obtaining competitive performance with a deep learning algorithm requires a sufficient amount of annotated data, but in the natural world scarce or imbalanced data are common, and annotated data are expensive or hard to collect. Data augmentation, which aims to create variations of the training data, has shown its power for this issue, but two challenges remain: creating more desirable variations for scarce and imbalanced data, and designing data augmentation that also serves object detection and instance segmentation. First, current algorithms create variations only within one specific class, although variations transferred across classes can further promote performance. To address this issue, we propose a novel data augmentation paradigm that can adapt variations from one class to another. In this paradigm, an image in the source domain is translated into the target domain while the variations unrelated to the domain are maintained. For example, an image of a healthy tomato leaf is translated into a powdery mildew image, but variations of the healthy leaf, such as leaf type, size, and viewpoint, are maintained and transferred to the powdery mildew class. Second, current data augmentation is suitable for improving image classification models but may not help object detection and instance segmentation models, mainly because the necessary annotations cannot be obtained. In this study, we leverage a prior mask as input to indicate the area of interest and reuse the original annotations. In this way, our proposed algorithm can serve all three tasks simultaneously. Furthermore, we collected 1,258 images of tomato leaves with 1,429 instance segmentation annotations, as a single image may contain more than one instance, covering five diseases and healthy leaves. Extensive experimental results on the collected images validate that our new data augmentation algorithm creates useful variations and improves performance for diverse deep learning-based methods.
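
A hedged sketch of the "prior mask as input" idea: the binary mask of the region of interest (e.g., a leaf instance) is stacked with the RGB image so a translation model knows which area to modify, while the original detection and segmentation annotations can be reused unchanged. The 4-channel layout is an assumption for illustration.

```python
import numpy as np


def with_prior_mask(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) float image; mask: (H, W) binary prior mask of the region to translate.
    Returns an (H, W, 4) array used as the generator input in this sketch; the mask itself
    also serves as the reused instance annotation for the translated image."""
    return np.concatenate([rgb, mask[..., None].astype(rgb.dtype)], axis=-1)
```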

15.
Front Plant Sci ; 12: 682230, 2021.
Article in English | MEDLINE | ID: mdl-34975931

ABSTRACT

Recognizing plant diseases is a major challenge in agriculture, and recent work based on deep learning has shown high efficiency in addressing problems directly related to this area. Nonetheless, weak performance has been observed when a model trained on a particular dataset is evaluated in new greenhouse environments. Therefore, in this work, we take a step toward addressing these issues and present a strategy to improve model accuracy by applying techniques that help refine the model's generalization capability to deal with complex changes in new greenhouse environments. We propose a paradigm called "control to target classes." The core of our approach is to train and validate a deep learning-based detector using target and control classes on images collected in various greenhouses. We then apply the learned features at inference time to data from new greenhouse conditions, where the goal is to detect target classes exclusively. By having explicit control over inter- and intra-class variations, our model can distinguish data variations, making the system more robust when applied to new scenarios. Experiments demonstrate the effectiveness and efficiency of the proposed approach on our extended tomato plant diseases dataset with 14 classes, of which 5 are target classes and the rest are control classes. Our detector achieves a recognition rate on target classes of 93.37% mean average precision on the inference dataset. Finally, we believe that our study offers valuable guidelines for researchers working on plant disease recognition with complex input data.
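
A small sketch of the inference side of "control to target classes": the detector is trained on both target and control classes, but predictions on new greenhouse data are kept only when they belong to a target class. The class names are placeholders; the abstract does not list them.

```python
# Hypothetical target-class names for illustration only.
TARGET_CLASSES = {"leaf_mold", "gray_mold", "canker", "powdery_mildew", "leaf_miner"}


def keep_target_detections(detections: list[dict]) -> list[dict]:
    """detections: dicts with 'class_name', 'score', and 'box'. Control-class hits are
    discarded at inference; they only shape the feature space during training."""
    return [d for d in detections if d["class_name"] in TARGET_CLASSES]
```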

16.
Front Plant Sci ; 10: 1321, 2019.
Article in English | MEDLINE | ID: mdl-31798598

ABSTRACT

Recent advances in deep neural networks have allowed the development of efficient, automated diagnosis systems for recognizing plant anomalies. Although existing methods have shown promising results, they present several limitations in providing an appropriate characterization of the problem, especially in real-field scenarios. To address this limitation, we propose an approach that not only efficiently detects and localizes plant anomalies but also generates more detailed information about their symptoms and interactions with the scene by combining visual object recognition and language generation. It takes an image as input and generates a diagnosis result showing the location of the anomalies together with sentences describing the symptoms. Our framework is divided into two main parts: first, a detector obtains a set of region features containing the anomalies using a region-based deep neural network; second, a language generator takes the detector features as input and generates descriptive sentences detailing the symptoms using Long Short-Term Memory (LSTM). Our loss metric allows the system to be trained end-to-end from the object detector to the language generator. Finally, the system outputs a set of bounding boxes along with sentences that describe their symptoms using glocal criteria in two ways: a set of specific descriptions of the anomalies detected in the plant, and an abstract description that provides general information about the scene. We demonstrate the efficiency of our approach on the challenging tomato diseases and pests recognition task and show that it achieves a mean Average Precision (mAP) of 92.5% on our newly created Tomato Plant Anomalies Description Dataset. Our objective evaluation allows users to understand the relationships between pathologies and their evolution throughout their stage of infection, location in the plant, symptoms, etc. Our work introduces a cost-efficient tool that provides farmers with a technology that facilitates the proper handling of crops.
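
A hedged sketch of the second stage: an LSTM decoder that turns a detected region's feature vector into a symptom sentence, token by token. The vocabulary size, feature dimension, special-token ids, and greedy decoding are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class SymptomCaptioner(nn.Module):
    def __init__(self, feat_dim=1024, embed_dim=256, hidden_dim=512, vocab_size=2000):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)   # region feature -> initial hidden state
        self.init_c = nn.Linear(feat_dim, hidden_dim)   # region feature -> initial cell state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    @torch.no_grad()
    def greedy_decode(self, region_feat: torch.Tensor, bos_id=1, eos_id=2, max_len=20):
        """region_feat: (1, feat_dim) from the detector; returns a list of token ids."""
        h, c = self.init_h(region_feat), self.init_c(region_feat)
        token = torch.tensor([bos_id])
        ids = []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(token), (h, c))
            token = self.out(h).argmax(dim=1)            # greedily pick the next word
            if int(token) == eos_id:
                break
            ids.append(int(token))
        return ids
```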

17.
Sensors (Basel) ; 19(8)2019 Apr 22.
Article in English | MEDLINE | ID: mdl-31013582

ABSTRACT

In the field of Facial Expression Recognition (FER), traditional local texture coding methods have low computational complexity while providing robustness to occlusion, illumination, and other factors. However, their accuracy still needs to be improved while maintaining their real-time nature and low computational complexity. In this paper, we propose a feature-based FER system with a novel local texture coding operator, named central symmetric local gradient coding (CS-LGC), to enhance the performance of real-time systems. It uses four directional gradients on 5 × 5 grids, with each gradient computed in a center-symmetric way; the gradients are averaged to reduce sensitivity to noise. These characteristics yield symmetric features under the CS-LGC operator, providing better generalization than existing local gradient coding (LGC) variants. The proposed system further transforms the extracted features into an eigen-space using principal component analysis (PCA) for better representation and less computation, and it estimates the intended classes by training an extreme learning machine. The recognition rate is 95.24% on the JAFFE database and 98.33% on the CK+ database. These results show that the system has advantages over existing local texture coding methods.
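
An illustrative sketch of a center-symmetric gradient code in the spirit of CS-LGC: on a 5 × 5 neighborhood, gradients along four directions are taken between center-symmetric pixel pairs at radii 1 and 2, averaged to reduce noise sensitivity, and thresholded into a 4-bit code. The exact pairing and thresholding in the paper may differ; this only conveys the idea.

```python
import numpy as np

# Offsets (dy, dx) of one side of each center-symmetric pair: 0°, 90°, 45°, 135°.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]


def cs_lgc_code(patch: np.ndarray) -> int:
    """patch: 5x5 grayscale patch; returns an integer code in [0, 15]."""
    assert patch.shape == (5, 5)
    patch = patch.astype(np.float32)
    cy, cx = 2, 2
    code = 0
    for bit, (dy, dx) in enumerate(DIRECTIONS):
        # Average the center-symmetric differences at radius 1 and radius 2.
        grads = [patch[cy + r * dy, cx + r * dx] - patch[cy - r * dy, cx - r * dx]
                 for r in (1, 2)]
        if np.mean(grads) >= 0:
            code |= 1 << bit
    return code
```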


Subject(s)
Face/physiology, Facial Expression, Facial Recognition/physiology, Algorithms, Databases, Factual, Humans, Image Interpretation, Computer-Assisted, Machine Learning, Pattern Recognition, Automated/methods
18.
Biol Pharm Bull ; 42(4): 607-616, 2019.
Article in English | MEDLINE | ID: mdl-30930420

ABSTRACT

Liver regeneration is a very complex process regulated by several cytokines and growth factors. It is also known that liver transplantation and the regeneration process cause massive oxidative stress, which interferes with liver regeneration. The placenta contains various physiologically active ingredients such as cytokines, growth factors, and amino acids; in particular, human placenta hydrolysate (hPH) has been found to contain many amino acids. Most of the growth factors found in the placenta are closely related to liver regeneration. Therefore, in this study, we investigated whether hPH is effective in promoting liver regeneration in rats undergoing partial hepatectomy. We confirmed that cell proliferation was significantly increased in HepG2 cells and human primary cells. Hepatocyte proliferation was also promoted in partially hepatectomized rats by hPH treatment. hPH increased the liver regeneration rate, the binucleated cell ratio, the mitotic cell ratio, and the numbers of proliferating cell nuclear antigen (PCNA)- and Ki-67-positive cells in vivo, as well as interleukin (IL)-6, tumor necrosis factor alpha (TNF-α), and hepatocyte growth factor (HGF). Moreover, Kupffer cells secreting IL-6 and TNF-α were activated by hPH treatment. In addition, hPH reduced thiobarbituric acid reactive substances (TBARS) and significantly increased glutathione (GSH), glutathione peroxidase (GPx), and superoxide dismutase (SOD). Taken together, these results suggest that hPH promotes liver regeneration by activating cytokines and growth factors associated with liver regeneration and by eliminating oxidative stress.


Subject(s)
Antioxidants/physiology, Intercellular Signaling Peptides and Proteins/physiology, Liver Regeneration, Placenta, Animals, Cell Line, Female, Hepatectomy, Humans, Male, Oxidative Stress, Pregnancy, Rats, Sprague-Dawley, Signal Transduction
19.
Front Plant Sci ; 9: 1162, 2018.
Article in English | MEDLINE | ID: mdl-30210509

ABSTRACT

A fundamental problem confronting deep neural networks is the requirement of a large amount of data for a system to be efficient in complex applications. Promising results on this problem have been achieved through techniques such as data augmentation or transfer learning from models pre-trained on large datasets, but the problem persists when the application provides limited or imbalanced data. In addition, the false positives resulting from training a deep model significantly degrade the performance of the system. This study aims to address the problems of false positives and class imbalance by implementing a Refinement Filter Bank framework for tomato plant diseases and pests recognition. The system consists of three main units. First, a Primary Diagnosis Unit (Bounding Box Generator) generates bounding boxes containing the location of the infected area and its class. The promising boxes belonging to each class are then used as input to a Secondary Diagnosis Unit (CNN Filter Bank) for verification. In this second unit, misclassified samples are filtered out by training independent CNN classifiers for each class; the result of the CNN Filter Bank is a decision on whether a target belongs to the category as detected (True) or not (False). Finally, an integration unit combines the information from the primary and secondary units, keeping the true positive samples and eliminating the false positives that were misclassified in the first unit. With this implementation, the proposed approach obtains a recognition rate of approximately 96%, an improvement of 13% over our previous work on the complex task of tomato diseases and pests recognition. Furthermore, our system is able to deal with the false positives generated by the bounding box generator and with the class imbalance that appears especially in datasets with limited data.
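
A hedged sketch of the secondary diagnosis step: each detection from the bounding-box generator is cropped and re-checked by an independent per-class CNN verifier, and detections the verifier rejects are dropped as false positives. The verifier architecture, crop size, and acceptance threshold are illustrative.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def filter_detections(image: torch.Tensor, detections: list[dict],
                      verifiers: dict, accept: float = 0.5) -> list[dict]:
    """image: (3, H, W) tensor; detections: dicts with 'box' (x1, y1, x2, y2), 'class_name',
    'score'; verifiers: class_name -> binary CNN whose sigmoid output is P(true detection)."""
    kept = []
    for det in detections:
        x1, y1, x2, y2 = (int(v) for v in det["box"])
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)                 # (1, 3, h, w)
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear")
        prob = torch.sigmoid(verifiers[det["class_name"]](crop)).item()
        if prob >= accept:                                         # keep only verified detections
            kept.append(det)
    return kept
```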
