Results 1 - 20 of 38
1.
Biol Pharm Bull ; 42(4): 607-616, 2019.
Article in English | MEDLINE | ID: mdl-30930420

ABSTRACT

Liver regeneration is a very complex process and is regulated by several cytokines and growth factors. It is also known that liver transplantation and the regeneration process cause massive oxidative stress, which interferes with liver regeneration. The placenta is known to contain various physiologically active ingredients such as cytokines, growth factors, and amino acids. In particular, human placenta hydrolysate (hPH) has been found to contain many amino acids. Most of the growth factors found in the placenta are known to be closely related to liver regeneration. Therefore, in this study, we investigated whether hPH is effective in promoting liver regeneration in rats undergoing partial hepatectomy. We confirmed that cell proliferation was significantly increased in HepG2 and human primary cells. Hepatocyte proliferation was also promoted in partial hepatectomized rats by hPH treatment. hPH increased liver regeneration rate, double nucleic cell ratio, mitotic cell ratio, proliferating cell nuclear antigen (PCNA), and Ki-67 positive cells in vivo as well as interleukin (IL)-6, tumor necrosis factor alpha (TNF-α), and hepatocyte growth factor (HGF). Moreover, Kupffer cells secreting IL-6 and TNF-α were activated by hPH treatment. In addition, hPH reduced thiobarbituric acid reactive substances (TBARs) and significantly increased glutathione (GSH), glutathione peroxidase (GPx), and superoxide dismutase (SOD). Taken together, these results suggest that hPH promotes liver regeneration by activating cytokines and growth factors associated with liver regeneration and eliminating oxidative stress.


Subject(s)
Antioxidants/physiology , Intercellular Signaling Peptides and Proteins/physiology , Liver Regeneration , Placenta , Animals , Cell Line , Female , Hepatectomy , Humans , Male , Oxidative Stress , Pregnancy , Rats, Sprague-Dawley , Signal Transduction
2.
Sensors (Basel) ; 19(8)2019 Apr 22.
Article in English | MEDLINE | ID: mdl-31013582

ABSTRACT

In the field of Facial Expression Recognition (FER), traditional local texture coding methods have low computational complexity while providing a robust solution with respect to occlusion, illumination, and other factors. However, there is still a need to improve the accuracy of these methods while maintaining their real-time nature and low computational complexity. In this paper, we propose a feature-based FER system with a novel local texture coding operator, named central symmetric local gradient coding (CS-LGC), to enhance the performance of real-time systems. It uses four directional gradients on 5 × 5 grids, and each gradient is computed in a center-symmetric way. The averages of the gradients are used to reduce sensitivity to noise. These characteristics make the features extracted by the CS-LGC operator symmetric, providing better generalization than existing local gradient coding (LGC) variants. The proposed system further transforms the extracted features into an eigenspace using principal component analysis (PCA) for better representation and less computation, and it estimates the intended classes by training an extreme learning machine. The recognition rate is 95.24% on the JAFFE database and 98.33% on the CK+ database. The results show that the system has advantages over existing local texture coding methods.
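A minimal sketch of the idea behind the CS-LGC operator is given below, assuming a grayscale input; the exact pixel pairing, averaging scheme, and block grid used in the paper may differ, so this only illustrates sign-coded, center-symmetric gradients on a 5 × 5 neighborhood followed by block histograms (borders are handled by wrap-around for brevity).

import numpy as np

def cs_lgc(image):
    """Return a 4-bit code per pixel from center-symmetric gradients (sketch)."""
    img = image.astype(np.float32)
    # Offsets of four center-symmetric pixel pairs on a 5x5 grid
    # (0, 45, 90, and 135 degrees).
    offsets = [(0, 2), (2, 2), (2, 0), (2, -2)]
    code = np.zeros(img.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        # Average each outer pixel with its inner neighbor along the same ray,
        # one possible reading of "averages of the gradients" in the abstract.
        plus = (np.roll(img, (-dy, -dx), axis=(0, 1)) +
                np.roll(img, (-dy // 2, -dx // 2), axis=(0, 1))) / 2.0
        minus = (np.roll(img, (dy, dx), axis=(0, 1)) +
                 np.roll(img, (dy // 2, dx // 2), axis=(0, 1))) / 2.0
        grad = plus - minus                      # center-symmetric gradient
        code += (grad > 0).astype(np.int32) * (1 << bit)
    return code

def cs_lgc_histogram(image, grid=(7, 7)):
    """Concatenate 16-bin code histograms over non-overlapping blocks."""
    code = cs_lgc(image)
    feats = []
    for rows in np.array_split(np.arange(code.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(code.shape[1]), grid[1]):
            block = code[np.ix_(rows, cols)].ravel()
            feats.append(np.bincount(block, minlength=16))
    return np.concatenate(feats).astype(np.float32)

In the full system, such a histogram would then be projected with PCA and classified by an extreme learning machine.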


Subject(s)
Face/physiology , Facial Expression , Facial Recognition/physiology , Algorithms , Databases, Factual , Humans , Image Interpretation, Computer-Assisted , Machine Learning , Pattern Recognition, Automated/methods
3.
Sensors (Basel) ; 17(9)2017 Sep 04.
Article in English | MEDLINE | ID: mdl-28869539

ABSTRACT

Plant diseases and pests are a major challenge in the agriculture sector. Accurate and fast detection of diseases and pests in plants could help develop early treatment techniques while substantially reducing economic losses. Recent developments in deep neural networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called "deep learning meta-architectures". We combine each of these meta-architectures with "deep feature extractors" such as VGG net and Residual Network (ResNet). We demonstrate the performance of the deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.


Subject(s)
Solanum lycopersicum , Image Processing, Computer-Assisted , Neural Networks, Computer , Plant Diseases
4.
Sensors (Basel) ; 15(7): 17089-105, 2015 Jul 14.
Article in English | MEDLINE | ID: mdl-26184226

ABSTRACT

Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary from person to person. This usually causes poor quality of finger vein images, thereby degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing image quality and finger vein recognition accuracy.


Subject(s)
Biometry , Fingers/blood supply , Veins , Humans
5.
Appl Opt ; 53(20): 4585-93, 2014 Jul 10.
Article in English | MEDLINE | ID: mdl-25090081

ABSTRACT

Finger vein images are rich in orientation and edge features. Inspired by the edge histogram descriptor proposed in MPEG-7, this paper presents an efficient orientation-based local descriptor, named histogram of salient edge orientation map (HSEOM). HSEOM is based on the fact that human vision is sensitive to edge features for image perception. For a given image, HSEOM first finds oriented edge maps according to predefined orientations using a well-known edge operator and obtains a salient edge orientation map by choosing the orientation with the maximum edge magnitude for each pixel. Then, subhistograms of the salient edge orientation map are generated from non-overlapping submaps and concatenated to build the final HSEOM. In the experiments in this paper, eight oriented edge maps were used to generate the salient edge orientation map for HSEOM construction. Experimental results on our finger vein image database, MMCBNU_6000, show that HSEOM outperforms state-of-the-art orientation-based methods (e.g., Gabor filters, histogram of oriented gradients, and local directional code). Furthermore, the proposed HSEOM has the advantages of low feature dimensionality and fast implementation for a real-time finger vein recognition system.
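As a rough illustration of the descriptor pipeline, the sketch below approximates the salient edge orientation map by quantizing the Sobel gradient direction into eight orientations (the paper instead applies an edge operator per predefined orientation and takes the per-pixel maximum), then concatenates orientation histograms over non-overlapping submaps; the grid size and magnitude threshold are illustrative values.

import numpy as np
from scipy import ndimage

def hseom(image, n_orient=8, grid=(4, 4), mag_thresh=10.0):
    img = image.astype(np.float32)
    gx = ndimage.sobel(img, axis=1)              # horizontal gradient
    gy = ndimage.sobel(img, axis=0)              # vertical gradient
    mag = np.hypot(gx, gy)
    # Quantize the edge direction (0..pi) into n_orient bins: the salient map.
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    omap = np.floor(ang / np.pi * n_orient).astype(int) % n_orient
    omap[mag < mag_thresh] = -1                  # suppress weak, non-edge pixels
    feats = []
    for rows in np.array_split(np.arange(omap.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(omap.shape[1]), grid[1]):
            block = omap[np.ix_(rows, cols)].ravel()
            feats.append(np.bincount(block[block >= 0], minlength=n_orient))
    feat = np.concatenate(feats).astype(np.float32)
    return feat / (np.linalg.norm(feat) + 1e-8)  # L2-normalized descriptor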


Subject(s)
Biometry/methods , Data Interpretation, Statistical , Fingers/blood supply , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Photography/methods , Veins/anatomy & histology , Algorithms , Computer Graphics , Humans , Numerical Analysis, Computer-Assisted
6.
ScientificWorldJournal ; 2014: 105089, 2014.
Article in English | MEDLINE | ID: mdl-24959598

ABSTRACT

In pedestrian detection methods, high detection rates are typically obtained at the cost of a large number of false positives. To overcome this problem, the authors propose an accurate pedestrian detection system based on two machine learning methods: a cascade AdaBoost detector and a random vector functional-link net. During the offline training phase, the parameters of the cascade AdaBoost detector and the random vector functional-link net are trained on a standard dataset. During the online phase, candidates extracted by a multiscale sliding-window strategy are normalized to a standard scale and verified by the cascade AdaBoost detector and the random vector functional-link net. Only those candidates with high confidence pass the validation. The proposed system is more accurate than single machine learning algorithms and produces fewer false positives, as confirmed by simulation experiments on four datasets.
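The random vector functional-link verifier can be sketched as follows, assuming each candidate window has already been converted into a fixed-length feature vector; hidden width, regularization, and the confidence rule are illustrative choices rather than the authors' settings.

import numpy as np

class RVFL:
    """Random vector functional-link net with a closed-form output layer."""
    def __init__(self, n_hidden=256, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _expand(self, X):
        H = np.tanh(X @ self.W + self.b)         # fixed random nonlinear expansion
        return np.hstack([X, H])                 # direct input links + hidden units

    def fit(self, X, y):
        # X: (n_samples, n_features) candidate descriptors, y: 0/1 labels.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        D = self._expand(X)
        T = np.column_stack([1 - y, y]).astype(np.float32)   # one-hot targets
        # Ridge-regression solution for the output weights (no backprop needed).
        self.beta = np.linalg.solve(D.T @ D + self.reg * np.eye(D.shape[1]), D.T @ T)
        return self

    def confidence(self, X):
        s = self._expand(X) @ self.beta
        return s[:, 1] - s[:, 0]                 # higher -> more pedestrian-like

A candidate surviving the cascade AdaBoost stage would then be accepted only if this confidence exceeds a chosen threshold.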


Subject(s)
Algorithms , Artificial Intelligence , Humans
7.
J Clin Med ; 13(16)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39200905

ABSTRACT

Background: Ruptured and unruptured aneurysms are less common in younger individuals compared to older patients. Endovascular treatment has gained popularity over surgical options in the general population, but surgery remains the primary treatment for younger patients due to concerns about higher recurrence rates with endovascular procedures. Methods: This study compared the immediate and long-term outcomes of endovascular treatment in patients under 40 years with those aged 41-60. The study included 239 patients who underwent endovascular treatment for intracranial aneurysms, divided into two age groups: under 40 and 41-60 years. The rates of immediate radiologic outcomes, complications, and recurrence were assessed. Results: The results showed successful aneurysm obliteration rates of 70.1% in the younger group and 64.0% in the older group. The complication rates were 1.5% in the younger group and 3.5% in the older group, with the older group experiencing more procedure-related complications, though this difference was not statistically significant. Long-term follow-up revealed recurrence rates of 23.2% in the younger group and 18.2% in the older group, with no significant difference. Conclusions: The study suggests that endovascular treatment is as effective and safe for patients under 40 years as for those aged 41-60. Therefore, it may be considered an acceptable first-line treatment for younger patients, aligning its use with that in older populations.

8.
Sci Rep ; 14(1): 17900, 2024 08 02.
Article in English | MEDLINE | ID: mdl-39095389

ABSTRACT

Plant diseases pose significant threats to agriculture, impacting both food safety and public health. Traditional plant disease detection systems are typically limited to recognizing disease categories included in the training dataset, rendering them ineffective against new disease types. Although out-of-distribution (OOD) detection methods have been proposed to address this issue, the impact of fine-tuning paradigms on these methods has been overlooked. This paper focuses on studying the impact of fine-tuning paradigms on the performance of detecting unknown plant diseases. Currently, fine-tuning on visual tasks is mainly divided into visual-based models and visual-language-based models. We first discuss the limitations of large-scale visual language models in this task: textual prompts are difficult to design. To avoid the side effects of textual prompts, we further explore the effectiveness of purely visual pre-trained models for OOD detection in plant disease tasks. Specifically, we employed five publicly accessible datasets to establish benchmarks for open-set recognition, OOD detection, and few-shot learning in plant disease recognition. Additionally, we comprehensively compared various OOD detection methods, fine-tuning paradigms, and factors affecting OOD detection performance, such as sample quantity. The results show that visual prompt tuning outperforms full fine-tuning and linear probe tuning in out-of-distribution detection performance, especially in few-shot scenarios. Notably, the max-logit-based method with visual prompt tuning achieves an AUROC score of 94.8% in the 8-shot setting, which is nearly comparable to fully fine-tuning on the full dataset (95.2%); this implies that an appropriate fine-tuning paradigm can directly improve OOD detection performance. Finally, we visualized the prediction distributions of different OOD detection methods and discussed the selection of thresholds. Overall, this work lays the foundation for unknown plant disease recognition, providing strong support for the security and reliability of plant disease recognition systems. We will release our code at https://github.com/JiuqingDong/PDOOD to further advance this field.
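A minimal sketch of the max-logit scoring mentioned above is shown below; the classifier, data, and threshold are placeholders, and the paper's exact pipeline (including visual prompt tuning) is in its repository.

import torch

@torch.no_grad()
def max_logit_scores(model, images):
    """Higher score -> more likely a known (in-distribution) disease."""
    logits = model(images)                 # (batch, n_known_classes)
    return logits.max(dim=1).values        # raw max logit, no softmax

def split_known_unknown(model, images, threshold):
    scores = max_logit_scores(model, images)
    return scores >= threshold, scores     # below threshold -> unknown disease

In practice the threshold would be chosen from the score distribution on held-out in-distribution validation data, which is the threshold-selection question the abstract discusses.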


Subject(s)
Plant Diseases , Algorithms
9.
J Dermatolog Treat ; 35(1): 2337908, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38616301

ABSTRACT

Background: Scalp-related symptoms such as dandruff and itching are common with diverse underlying etiologies. We previously proposed a novel classification and scoring system for scalp conditions, called the scalp photographic index (SPI); it grades five scalp features using trichoscopic images with good reliability. However, it requires trained evaluators. Aim: To develop artificial intelligence (AI) algorithms for assessment of scalp conditions and to assess the feasibility of AI-based recommendations on personalized scalp cosmetics. Methods: Using EfficientNet, convolutional neural network (CNN) models (SPI-AI) of each scalp feature were established. 101,027 magnified scalp images graded according to the SPI scoring were used for training, validation, and testing of the model. Adults with scalp discomfort were prescribed shampoos and scalp serums personalized according to their SPI-AI-defined scalp types. Using the SPI, the scalp conditions were evaluated at baseline and at weeks 4, 8, and 12 of treatment. Results: The accuracies of the SPI-AI for dryness, oiliness, erythema, folliculitis, and dandruff were 91.3%, 90.5%, 89.6%, 87.3%, and 95.2%, respectively. Overall, 100 individuals completed the 4-week study; 43 of these participated in an extension study until week 12. The total SPI score decreased from 32.70 ± 7.40 at baseline to 15.97 ± 4.68 at week 4 (p < 0.001). The efficacy was maintained throughout 12 weeks. Conclusions: SPI-AI accurately assessed the scalp condition. AI-based prescription of tailored scalp cosmetics could significantly improve scalp health.


Subject(s)
Cosmetics , Dandruff , Adult , Humans , Artificial Intelligence , Scalp , Reproducibility of Results , Cosmetics/therapeutic use , Prescriptions
10.
Sensors (Basel) ; 13(11): 14339-66, 2013 Oct 24.
Article in English | MEDLINE | ID: mdl-24284769

ABSTRACT

Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.

11.
Front Plant Sci ; 14: 1238722, 2023.
Article in English | MEDLINE | ID: mdl-37941667

ABSTRACT

Previous work on plant disease detection demonstrated that object detectors generally suffer from degraded training data, and annotations with noise may cause the training task to fail. Well-annotated datasets are therefore crucial to build a robust detector. However, a good label set generally requires much expert knowledge and meticulous work, which is expensive and time-consuming. This paper aims to learn robust feature representations with inaccurate bounding boxes, thereby reducing the model requirements for annotation quality. Specifically, we analyze the distribution of noisy annotations in the real world. A teacher-student learning paradigm is proposed to correct inaccurate bounding boxes. The teacher model is used to rectify the degraded bounding boxes, and the student model extracts more robust feature representations from the corrected bounding boxes. Furthermore, the method can be easily generalized to semi-supervised learning paradigms and auto-labeling techniques. Experimental results show that applying our method to the Faster-RCNN detector achieves a 26% performance improvement on the noisy dataset. Besides, our method achieves approximately 75% of the performance of a fully supervised object detector when 1% of the labels are available. Overall, this work provides a robust solution to real-world location noise. It alleviates the challenges posed by noisy data to precision agriculture, optimizes data labeling technology, and encourages practitioners to further investigate plant disease detection and intelligent agriculture at a lower cost. The code will be released at https://github.com/JiuqingDong/TS_OAMIL-for-Plant-disease-detection.

12.
Animals (Basel) ; 13(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37370530

ABSTRACT

Cattle behavior recognition is essential for monitoring their health and welfare. Existing techniques for behavior recognition in closed barns typically rely on direct observation to detect changes using wearable devices or surveillance cameras. While promising progress has been made in this field, monitoring individual cattle, especially those with similar visual characteristics, remains challenging due to numerous factors such as occlusion, scale variations, and pose changes. Accurate and consistent individual identification over time is therefore essential to overcome these challenges. To address this issue, this paper introduces an approach for multiview monitoring of individual cattle behavior based on action recognition using video data. The proposed system takes an image sequence as input and utilizes a detector to identify hierarchical actions categorized as part and individual actions. These regions of interest are then inputted into a tracking and identification mechanism, enabling the system to continuously track each individual in the scene and assign them a unique identification number. By implementing this approach, cattle behavior is continuously monitored, and statistical analysis is conducted to assess changes in behavior in the time domain. The effectiveness of the proposed framework is demonstrated through quantitative and qualitative experimental results obtained from our Hanwoo cattle video database. Overall, this study tackles the challenges encountered in real farm indoor scenarios, capturing spatiotemporal information and enabling automatic recognition of cattle behavior for precision livestock farming.

13.
Front Plant Sci ; 14: 1243822, 2023.
Article in English | MEDLINE | ID: mdl-37849839

ABSTRACT

Plant disease detection has made significant strides thanks to the emergence of deep learning. However, existing methods have been limited to closed-set and static learning settings, where models are trained using a specific dataset. This confinement restricts the model's adaptability when encountering samples from unseen disease categories. Additionally, there is a challenge of knowledge degradation for these static learning settings, as the acquisition of new knowledge tends to overwrite the old when learning new categories. To overcome these limitations, this study introduces a novel paradigm for plant disease detection called open-world setting. Our approach can infer disease categories that have never been seen during the model training phase and gradually learn these unseen diseases through dynamic knowledge updates in the next training phase. Specifically, we utilize a well-trained unknown-aware region proposal network to generate pseudo-labels for unknown diseases during training and employ a class-agnostic classifier to enhance the recall rate for unknown diseases. Besides, we employ a sample replay strategy to maintain recognition ability for previously learned classes. Extensive experimental evaluation and ablation studies investigate the efficacy of our method in detecting old and unknown classes. Remarkably, our method demonstrates robust generalization ability even in cross-species disease detection experiments. Overall, this open-world and dynamically updated detection method shows promising potential to become the future paradigm for plant disease detection. We discuss open issues including classification and localization, and propose promising approaches to address them. We encourage further research in the community to tackle the crucial challenges in open-world plant disease detection. The code will be released at https://github.com/JiuqingDong/OWPDD.

14.
Animals (Basel) ; 13(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38003205

ABSTRACT

Accurate identification of individual cattle is of paramount importance in precision livestock farming, enabling the monitoring of cattle behavior, disease prevention, and enhanced animal welfare. Unlike human faces, the faces of most Hanwoo cattle, a native breed of Korea, exhibit significant similarities and have the same body color, posing a substantial challenge in accurately distinguishing between individual cattle. In this study, we sought to extend the closed-set scope (identifying only known individuals) to a more adaptable open-set recognition scenario (identifying both known and unknown individuals) termed Cattle's Face Open-Set Recognition (CFOSR). By integrating open-set techniques to enhance the closed-set accuracy, the proposed method simultaneously addresses the open-set scenario. In CFOSR, the objective is to develop a trained model capable of accurately identifying known individuals, while effectively handling unknown or novel individuals, even in cases where the model has been trained solely on known individuals. To address this challenge, we propose a novel approach that integrates Adversarial Reciprocal Points Learning (ARPL), a state-of-the-art open-set recognition method, with the effectiveness of Additive Margin Softmax loss (AM-Softmax). ARPL was leveraged to mitigate the overlap between the spaces of known and unknown or unregistered cattle. At the same time, AM-Softmax was chosen over the conventional Cross-Entropy loss (CE) to classify known individuals. The empirical results obtained from a real-world dataset demonstrated the effectiveness of the ARPL and AM-Softmax techniques in achieving both intra-class compactness and inter-class separability. Notably, the results of open-set recognition and closed-set recognition validated the superior performance of our proposed method compared to existing algorithms. To be more precise, our method achieved an AUROC of 91.84 and an OSCR of 87.85 in the context of open-set recognition on a complex dataset. Simultaneously, it demonstrated an accuracy of 94.46 for closed-set recognition. We believe that our study offers a new perspective on improving closed-set classification accuracy. Simultaneously, it holds the potential to significantly contribute to herd monitoring and inventory management, especially in scenarios involving the presence of unknown or novel cattle.
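For the known-identity branch, an AM-Softmax head can be sketched as below; the margin m and scale s are common defaults rather than the paper's tuned values, and the ARPL component is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxHead(nn.Module):
    """Additive Margin Softmax loss over L2-normalized features and weights."""
    def __init__(self, feat_dim, n_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        # Cosine similarity between normalized features and class prototypes.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the additive margin from the target-class cosine only.
        margin = F.one_hot(labels, cosine.size(1)).float() * self.m
        logits = self.s * (cosine - margin)
        return F.cross_entropy(logits, labels)

Pushing the target-class cosine down by a fixed margin during training is what encourages the intra-class compactness and inter-class separability reported in the abstract.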

15.
Front Plant Sci ; 14: 1211075, 2023.
Article in English | MEDLINE | ID: mdl-37711291

ABSTRACT

Plant phenotyping is a critical field in agriculture, aiming to understand crop growth under specific conditions. Recent research uses images to describe plant characteristics by detecting visual information within organs such as leaves, flowers, stems, and fruits. However, processing data in real field conditions, with challenges such as image blurring and occlusion, requires improvement. This paper proposes a deep learning-based approach for leaf instance segmentation with a local refinement mechanism to enhance performance in cluttered backgrounds. The refinement mechanism employs Gaussian low-pass and high-boost filters to enhance target instances and can be applied to the training or testing dataset. An instance segmentation architecture generates segmented masks and detected areas, facilitating the derivation of phenotypic information, such as leaf count and size. Experimental results on a tomato leaf dataset demonstrate the system's accuracy in segmenting target leaves despite complex backgrounds. The investigation of the refinement mechanism with different kernel sizes reveals that larger kernel sizes benefit the system's ability to generate more leaf instances when using a high-boost filter, while prediction performance decays with larger Gaussian low-pass filter kernel sizes. This research addresses challenges in real greenhouse scenarios and enables automatic recognition of phenotypic data for smart agriculture. The proposed approach has the potential to enhance agricultural practices, ultimately leading to improved crop yields and productivity.
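A minimal sketch of the local refinement step, under the assumption that it is applied image-wide before segmentation, is shown below; sigma, the boost factor, and the clipping range are illustrative values.

import numpy as np
from scipy.ndimage import gaussian_filter

def refine(image, mode="high_boost", sigma=2.0, k=1.5):
    """Apply a Gaussian low-pass or a high-boost filter to an image."""
    img = image.astype(np.float32)
    # For RGB inputs, blur the spatial axes only (leave the channel axis untouched).
    sig = (sigma, sigma, 0) if img.ndim == 3 else sigma
    low = gaussian_filter(img, sigma=sig)        # Gaussian low-pass component
    if mode == "low_pass":
        out = low                                # smooth away background clutter
    else:
        out = img + k * (img - low)              # high-boost: emphasize leaf edges
    return np.clip(out, 0, 255).astype(image.dtype)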

16.
Front Plant Sci ; 14: 1225409, 2023.
Article in English | MEDLINE | ID: mdl-37810377

ABSTRACT

Recent advancements in deep learning have brought significant improvements to plant disease recognition. However, achieving satisfactory performance often requires high-quality training datasets, which are challenging and expensive to collect. Consequently, the practical application of current deep learning-based methods in real-world scenarios is hindered by the scarcity of high-quality datasets. In this paper, we argue that embracing poor datasets is viable and aim to explicitly define the challenges associated with using these datasets. To delve into this topic, we analyze the characteristics of high-quality datasets, namely, large-scale images and desired annotation, and contrast them with the limited and imperfect nature of poor datasets. Challenges arise when the training datasets deviate from these characteristics. To provide a comprehensive understanding, we propose a novel and informative taxonomy that categorizes these challenges. Furthermore, we offer a brief overview of existing studies and approaches that address these challenges. We point out that our paper sheds light on the importance of embracing poor datasets, enhances the understanding of the associated challenges, and contributes to the ambitious objective of deploying deep learning in real-world applications. To facilitate progress, we finally describe several outstanding questions and point out potential future directions. Although our primary focus is on plant disease recognition, we emphasize that the principles of embracing and analyzing poor datasets are applicable to a wider range of domains, including agriculture. Our project is publicly available at https://github.com/xml94/EmbracingLimitedImperfectTrainingDatasets.

17.
Front Plant Sci ; 13: 1010981, 2022.
Article in English | MEDLINE | ID: mdl-36507376

ABSTRACT

Deep learning has achieved significant improvements in recent years in recognizing plant diseases from their corresponding images. To achieve decent performance, current deep learning models tend to require a large-scale dataset. However, collecting a dataset is expensive and time-consuming. Hence, limited data is one of the main challenges to reaching the desired recognition accuracy. Although transfer learning is heavily discussed and verified as an effective and efficient method to mitigate this challenge, most proposed methods focus on one or two specific datasets. In this paper, we propose a novel transfer learning strategy to achieve high performance for versatile plant disease recognition on multiple plant disease datasets. Our transfer learning strategy differs from the currently popular one in the following respects. First, PlantCLEF2022, a large-scale dataset related to plants with 2,885,052 images and 80,000 classes, is utilized to pre-train a model. Second, we adopt a vision transformer (ViT) model instead of a convolutional neural network. Third, the ViT model undergoes transfer learning twice to save computation. Fourth, the model is first pre-trained on ImageNet with a self-supervised loss function and then on PlantCLEF2022 with a supervised loss function. We apply our method to 12 plant disease datasets, and the experimental results suggest that our method surpasses the popular one by a clear margin for different dataset settings. Specifically, our proposed method achieves a mean testing accuracy of 86.29% over the 12 datasets in a 20-shot case, 12.76% higher than the current state-of-the-art method's accuracy of 73.53%. Furthermore, our method outperforms other methods on one plant growth stage prediction dataset and one weed recognition dataset. To encourage the community and related applications, we have made our code and pre-trained model public.
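The final fine-tuning stage of such a strategy can be sketched as follows; a torchvision ImageNet checkpoint stands in for the paper's self-supervised-ImageNet-then-PlantCLEF2022 pre-trained ViT, and disease_loader and n_classes are placeholders for a small target disease dataset.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

def build_model(n_classes):
    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)   # pre-trained ViT
    in_dim = model.heads.head.in_features
    model.heads.head = nn.Linear(in_dim, n_classes)            # new disease head
    return model

def finetune(model, disease_loader, epochs=10, lr=1e-4, device="cuda"):
    model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in disease_loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model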

18.
Front Plant Sci ; 13: 989304, 2022.
Article in English | MEDLINE | ID: mdl-36172552

ABSTRACT

Predicting plant growth is a fundamental challenge that can be employed to analyze plants and further make decisions to have healthy plants with high yields. Deep learning has recently shown its potential to address this challenge; however, there are still two issues. First, image-based plant growth prediction is currently approached either from a time-series or an image-generation viewpoint, resulting in a flexible learning framework and clear predictions, respectively. Second, deep learning-based algorithms are notorious for requiring a large-scale dataset to obtain competitive performance, but collecting enough data is time-consuming and expensive. To address these issues, we consider plant growth prediction from both viewpoints with two new time-series data augmentation algorithms. To be more specific, we propose a new framework with a length-changeable time-series processing unit to generate images flexibly. A generative adversarial loss is utilized to optimize our model to obtain high-quality images. Furthermore, we first recognize three key points to perform time-series data augmentation and then put forward T-Mixup and T-Copy-Paste. T-Mixup fuses images from different time steps pixel-wise, while T-Copy-Paste makes new time-series images with a different background by reusing individual leaves extracted from the existing dataset. We evaluate our method on a public dataset and achieve superior results, with the generated RGB images and instance masks securing an average PSNR of 27.53 and 27.62, respectively, compared to the previous best of 26.55 and 26.92.
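T-Mixup can be sketched as a pixel-wise blend of two frames of the same plant taken at different times; how the mixing coefficient is drawn and how the pseudo-time label is formed are illustrative choices, not necessarily the authors'.

import numpy as np

def t_mixup(frame_t1, frame_t2, t1, t2, alpha=0.4, rng=None):
    """Blend two time-series frames and interpolate their time stamps."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in (0, 1)
    mixed = lam * frame_t1.astype(np.float32) + (1 - lam) * frame_t2.astype(np.float32)
    pseudo_time = lam * t1 + (1 - lam) * t2      # time label of the mixed frame
    return mixed, pseudo_time

T-Copy-Paste, by contrast, composes new frames by pasting leaves segmented from existing frames onto a different background.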

19.
Front Plant Sci ; 13: 1037655, 2022.
Article in English | MEDLINE | ID: mdl-37082512

ABSTRACT

Object detection models have become the current tool of choice for plant disease detection in precision agriculture. Most existing research has improved performance by ameliorating networks and optimizing the loss function. However, because of the vast influence of data annotation quality and the cost of annotation, the data-centric part of a project also needs more investigation. We should further consider the relationship between data annotation strategies, annotation quality, and the model's performance. In this paper, a systematic strategy with four annotation strategies for plant disease detection is proposed: local, semi-global, global, and symptom-adaptive annotation. Labels produced by different annotation strategies result in distinct model performance, and the contrasts are remarkable. An interpretability study of the annotation strategies is conducted using class activation maps. In addition, we define five types of inconsistencies in the annotation process and investigate the severity of the impact of inconsistent labels on the model's performance. Finally, we discuss the problem of label inconsistency during data augmentation. Overall, this data-centric quantitative analysis helps us to understand the significance of annotation strategies, providing practitioners with a way to obtain higher performance and reduce annotation costs in plant disease detection. Our work encourages researchers to pay more attention to annotation consistency and the essential issues of annotation strategy. The code will be released at: https://github.com/JiuqingDong/PlantDiseaseDetection_Yolov5.

20.
Front Plant Sci ; 12: 758027, 2021.
Article in English | MEDLINE | ID: mdl-34956261

ABSTRACT

Recent advances in automatic recognition systems based on deep learning technology have shown the potential to provide environmentally friendly plant disease monitoring. These systems are able to reliably distinguish plant anomalies under varying environmental conditions as the basis for plant intervention using methods such as classification or detection. However, they often show a performance decay when applied under new field conditions and to unseen data. Therefore, in this article, we propose an approach based on the concept of open-set domain adaptation for the task of plant disease recognition, to allow existing systems to operate in new environments with unseen conditions and farms. Our system specifically treats diagnosis as an open-set learning problem and mainly operates in the target domain by exploiting a precise estimation of unknown data while maintaining the performance of the known classes. The main framework consists of two modules based on deep learning that perform bounding box detection and open-set self- and cross-domain adaptation. The detector is built on our previous filter bank architecture for plant disease recognition and enforces domain adaptation from the source to the target domain by constraining data to be classified as one of the target classes or labeled as unknown otherwise. We perform an extensive evaluation on our tomato plant diseases dataset with three different domain farms, which indicates that our approach can efficiently cope with changes in new field environments during field testing, and we observe consistent gains from explicit modeling of unseen data.
