Results 1 - 8 of 8
1.
Sci Rep ; 12(1): 18568, 2022 11 03.
Article in English | MEDLINE | ID: mdl-36329073

ABSTRACT

Detecting and classifying tomato plant diseases at the earliest stage can save farmers from expensive crop sprays and help increase food production. Although extensive work has been presented on tomato plant disease classification, timely localization and identification of the various tomato leaf diseases remains a complex job because of the strong similarity between healthy and affected portions of plant leaves. Furthermore, the low contrast between the background and foreground of the suspected sample further complicates the detection process. To deal with these challenges, we present a robust deep learning (DL)-based approach, namely a ResNet-34-based Faster-RCNN, for tomato plant leaf disease classification. The proposed method includes three basic steps. First, we generate annotations of the suspected images to specify the region of interest (RoI). Next, we introduce ResNet-34 together with a Convolutional Block Attention Module (CBAM) as the feature extractor of Faster-RCNN to compute deep key points. Finally, the calculated features are used to train the Faster-RCNN model to locate and categorize the numerous tomato plant leaf anomalies. We tested the presented work on an accessible standard database, the PlantVillage Kaggle dataset, and obtained an mAP of 0.981 and an accuracy of 99.97%, with a test time of 0.23 s. Both qualitative and quantitative results confirm that the presented solution is robust for plant leaf disease detection and can replace manual systems. Moreover, the proposed method offers a low-cost solution to tomato leaf disease classification that is robust to several image transformations, such as variations in the size, color, and orientation of the diseased leaf portion. Furthermore, the framework can locate affected plant leaves under blurring, noise, chrominance, and brightness variations. The reported results confirm that our approach is robust for classifying several tomato leaf diseases under varying image-capturing conditions. In the future, we plan to extend the approach to other parts of plants as well.
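A minimal sketch of the kind of detector the abstract describes: a ResNet-34 trunk followed by a CBAM block, plugged into torchvision's Faster R-CNN via its custom-backbone pattern. The anchor settings, attention layout, and class count (PlantVillage's ten tomato categories plus background) are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)            # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                    # spatial attention


# ResNet-34 convolutional trunk (avgpool/fc removed) followed by CBAM; 512 output channels.
resnet = torchvision.models.resnet34(weights="IMAGENET1K_V1")
backbone = nn.Sequential(*list(resnet.children())[:-2], CBAM(512))
backbone.out_channels = 512

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

# 10 tomato leaf categories plus background (assumed count).
model = FasterRCNN(backbone, num_classes=11,
                   rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler)
```

Training then follows torchvision's usual detection loop, feeding images together with their box annotations.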


Subjects
Deep Learning, Solanum lycopersicum, Plant Diseases, Plant Leaves
2.
Front Plant Sci ; 13: 1003152, 2022.
Article in English | MEDLINE | ID: mdl-36311068

ABSTRACT

Maize leaf disease significantly reduces crop quality and overall yield. Therefore, it is crucial to monitor and diagnose illnesses during the growing season so that necessary actions can be taken. However, accurate identification is challenging because existing automated methods are computationally complex or perform well only on images with a simple background, whereas realistic field conditions include a lot of background noise that makes this task difficult. In this study, we present an end-to-end learning CNN architecture, the Efficient Attention Network (EANet), based on the EfficientNetv2 model, to identify multi-class maize crop diseases. To further enhance the capacity of the feature representation, we introduce a spatial-channel attention mechanism that focuses on affected locations and helps the detection network accurately recognize multiple diseases. We train the EANet model with focal loss to overcome class-imbalance issues and with transfer learning to enhance network generalization. We evaluate the presented approach on publicly available datasets whose samples were captured under various challenging environmental conditions, such as varying backgrounds, non-uniform lighting, and chrominance variations. Our approach achieved an overall accuracy of 99.89% for the categorization of various maize crop diseases. The experimental and visual findings reveal that our model outperforms conventional CNNs and that the attention mechanism properly accentuates disease-relevant information while ignoring background noise.
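A hedged sketch of two ingredients named above: an ImageNet-pretrained EfficientNetV2 backbone fine-tuned for maize disease labels, and a multi-class focal loss for imbalanced data. The spatial-channel attention block is omitted, and the variant (efficientnet_v2_s), label set, and hyperparameters are assumptions, not the published EANet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class FocalLoss(nn.Module):
    """Multi-class focal loss: down-weights easy examples via the (1 - p_t)**gamma factor."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
        pt = log_pt.exp()
        return (-(1.0 - pt) ** self.gamma * log_pt).mean()


num_classes = 4  # e.g. blight, rust, gray leaf spot, healthy (assumed label set)
model = torchvision.models.efficientnet_v2_s(weights="IMAGENET1K_V1")  # transfer learning
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)

criterion = FocalLoss(gamma=2.0)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One hypothetical training step on a dummy batch.
images = torch.randn(8, 3, 384, 384)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```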

3.
Front Plant Sci ; 13: 808380, 2022.
Article in English | MEDLINE | ID: mdl-35755664

ABSTRACT

Agricultural development plays a very important role in a country's economy. However, the occurrence of several plant diseases is a major hindrance to crop growth rate and quality. Exact determination and categorization of crop leaf diseases is a complex and time-consuming activity due to the low-contrast information in the input samples. Moreover, alterations in the size, location, and structure of the diseased crop portion, together with noise and blurring in the input images, further complicate the classification task. To solve the problems of existing techniques, a robust drone-based deep learning approach is proposed. More specifically, we introduce an improved EfficientNetV2-B4 with additional dense layers added at the end of the architecture. The customized EfficientNetV2-B4 computes deep key points and classifies them into their related classes using an end-to-end training architecture. For performance evaluation, a standard dataset, namely PlantVillage Kaggle, along with samples captured using a drone, is used; this dataset is challenging because it contains varied image samples acquired under diverse capturing conditions. We attained average precision, recall, and accuracy values of 99.63, 99.93, and 99.99%, respectively. The obtained results confirm the robustness of our approach in comparison to other recent techniques, along with lower time complexity.
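A minimal sketch of the customization described above: a pretrained EfficientNetV2 trunk with a small stack of dense layers appended before the class output. torchvision ships no "B4" variant, so efficientnet_v2_m stands in here; the layer widths, dropout rate, and class count are illustrative assumptions.

```python
import torch.nn as nn
import torchvision

num_classes = 38  # PlantVillage spans 38 crop/disease categories (assumed here)

model = torchvision.models.efficientnet_v2_m(weights="IMAGENET1K_V1")
in_features = model.classifier[1].in_features  # 1280 for this variant

# Replace the stock single-layer head with additional dense layers.
model.classifier = nn.Sequential(
    nn.Dropout(0.3),
    nn.Linear(in_features, 512), nn.ReLU(),
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, num_classes),
)
```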

4.
Comput Intell Neurosci ; 2022: 7897669, 2022.
Article in English | MEDLINE | ID: mdl-35378808

ABSTRACT

Brain tumors are difficult to treat and cause substantial fatalities worldwide. Medical professionals visually analyze the images and mark out the tumor regions to identify brain tumors, which is time-consuming and prone to error. Researchers have proposed automated methods in recent years to detect brain tumors early. These approaches, however, encounter difficulties due to their low accuracy and high false-positive rates. An efficient tumor identification and classification approach is required to extract robust features and perform accurate disease classification. This paper proposes a novel multiclass brain tumor classification method based on deep feature fusion. The MR images are preprocessed using min-max normalization, and extensive data augmentation is then applied to overcome the lack of data. The deep CNN features obtained from transfer-learned architectures such as AlexNet, GoogLeNet, and ResNet18 are fused into a single feature vector and then fed to a Support Vector Machine (SVM) and a K-nearest neighbor (KNN) classifier to predict the final output. The fused feature vector contains more information than the individual vectors, boosting the proposed method's classification performance. The proposed framework is trained and evaluated on 15,320 Magnetic Resonance Images (MRIs). The study shows that the fused feature vector performs better than the individual vectors. Moreover, the proposed technique outperformed existing systems and achieved an accuracy of 99.7%; hence, it can be used in a clinical setup to classify brain tumors from MRIs.
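A hedged sketch of the deep-feature-fusion idea: pooled features from three pretrained CNNs are concatenated into one vector per image and passed to classical classifiers. Preprocessing, the layers used for extraction, and the classifier hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Strip the classification heads so each network returns a feature vector.
alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier = alexnet.classifier[:-1]                  # 4096-dim features
googlenet = torchvision.models.googlenet(weights="IMAGENET1K_V1")
googlenet.fc = nn.Identity()                                  # 1024-dim features
resnet18 = torchvision.models.resnet18(weights="IMAGENET1K_V1")
resnet18.fc = nn.Identity()                                   # 512-dim features
for m in (alexnet, googlenet, resnet18):
    m.eval()


@torch.no_grad()
def fused_features(images):
    """Concatenate per-model features into a single 5632-dim vector per image."""
    return torch.cat([alexnet(images), googlenet(images), resnet18(images)], dim=1)


# Dummy batch standing in for preprocessed, min-max-normalized MR image tensors.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 4, (16,)).numpy()                   # e.g. 4 tumor classes (assumed)
X = fused_features(images).numpy()

svm = SVC(kernel="rbf").fit(X, labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```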


Subjects
Brain Neoplasms, Machine Learning, Brain/pathology, Brain Neoplasms/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Support Vector Machine
5.
Diagnostics (Basel) ; 11(10)2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34679554

ABSTRACT

A brain tumor is a deadly disease caused by the abnormal growth of brain cells, which affects the surrounding blood vessels and nerves. Timely and precise detection of brain tumors is important for avoiding complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is time-consuming and highly dependent on the availability of domain experts. There is therefore an urgent need for accurate automated systems for detecting and classifying the various types of brain tumors. However, exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with these challenges, we present a novel approach, namely a DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as its base network is introduced to extract deep features from the suspected samples. In the last step, the one-stage detector CornerNet is employed to locate and classify the brain tumors. To evaluate the proposed method, we utilized two databases, namely the Figshare and Brain MRI datasets, and attained average accuracies of 98.8% and 98.5%, respectively. Both qualitative and quantitative analysis show that our approach is more proficient and consistent at detecting and classifying various types of brain tumors than other recent techniques.
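A very rough sketch of the detector shape described above: a DenseNet feature extractor feeding two corner-heatmap heads (top-left and bottom-right), which is the core idea behind CornerNet. DenseNet-41 is a custom depth not available in torchvision, so densenet121 stands in, and the heads omit corner pooling, embeddings, and offsets; the three-class setup is an assumption.

```python
import torch
import torch.nn as nn
import torchvision


class CornerHeatmapDetector(nn.Module):
    def __init__(self, num_classes=3):  # e.g. glioma, meningioma, pituitary (assumed)
        super().__init__()
        self.backbone = torchvision.models.densenet121(weights="IMAGENET1K_V1").features

        def make_head():
            return nn.Sequential(nn.Conv2d(1024, 256, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(256, num_classes, 1))

        self.top_left_head = make_head()
        self.bottom_right_head = make_head()

    def forward(self, x):
        feats = self.backbone(x)
        # Per-class heatmaps; a peak in each map marks a candidate box corner.
        return (torch.sigmoid(self.top_left_head(feats)),
                torch.sigmoid(self.bottom_right_head(feats)))


model = CornerHeatmapDetector()
tl, br = model(torch.randn(1, 3, 256, 256))  # each: (1, 3, 8, 8) corner heatmaps
```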

6.
Sensors (Basel) ; 21(16)2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34450729

ABSTRACT

Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid accumulation there. Efficient screening systems require experts to manually analyze images to recognize diseases. However, due to the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges; however, such methods do not generalize well to multiple diseases and real-world scenarios. To solve these issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction; the second trains a custom deep learning-based CenterNet model for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest, and the proposed solution then trains the CenterNet model on the annotated images. Specifically, we use DenseNet-100 as the feature extraction network, on which the one-stage detector CenterNet is employed to localize and classify the disease lesions. We evaluated our method on the challenging APTOS-2019 and IDRiD datasets and attained average accuracies of 97.93% and 98.10%, respectively. We also performed cross-dataset validation with the benchmark EYEPACS and Diaretdb1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods owing to the effective localization power of CenterNet, as it can easily recognize small lesions and cope with over-fitted training data. Our proposed framework is proficient at correctly locating and classifying disease lesions. In comparison to existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
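A rough sketch of the anchor-free detector shape described above: a DenseNet trunk followed by CenterNet-style heads that predict a per-class lesion-center heatmap and a box width/height map. DenseNet-100 is not a torchvision model, so densenet169 stands in; the offset head, decoding logic, and lesion class count are assumptions.

```python
import torch
import torch.nn as nn
import torchvision


class CenterHeadDetector(nn.Module):
    def __init__(self, num_lesion_classes=5):  # assumed number of DR/DME lesion types
        super().__init__()
        self.backbone = torchvision.models.densenet169(weights="IMAGENET1K_V1").features
        self.heatmap_head = nn.Conv2d(1664, num_lesion_classes, kernel_size=1)
        self.size_head = nn.Conv2d(1664, 2, kernel_size=1)  # box width and height

    def forward(self, x):
        feats = self.backbone(x)
        return torch.sigmoid(self.heatmap_head(feats)), self.size_head(feats)


heatmaps, sizes = CenterHeadDetector()(torch.randn(1, 3, 512, 512))
# heatmaps: (1, 5, 16, 16) lesion-center probabilities; sizes: (1, 2, 16, 16)
```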


Subjects
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Diabetic Retinopathy/diagnostic imaging, Humans, Macular Edema/diagnostic imaging
7.
Diagnostics (Basel) ; 11(5)2021 Apr 21.
Article in English | MEDLINE | ID: mdl-33919358

ABSTRACT

A brain tumor is an abnormal growth of brain cells that causes damage to various blood vessels and nerves in the human body. An early and accurate diagnosis of a brain tumor is of foremost importance to avoid future complications. Precise segmentation of brain tumors provides doctors with a basis for surgical planning and treatment. Manual detection using MRI images is time-consuming, which matters when the survival of the patient depends on timely treatment, and its performance relies on domain expertise. Computerized detection of tumors also remains a challenging task due to significant variations in their location and structure, i.e., irregular shapes and ambiguous boundaries. In this study, we propose a custom Mask Region-based Convolutional Neural Network (Mask RCNN) with a DenseNet-41 backbone architecture, trained via transfer learning, for precise classification and segmentation of brain tumors. Our method is evaluated on two benchmark datasets using various quantitative measures. Comparative results show that the custom Mask RCNN can more precisely detect tumor locations using bounding boxes and return segmentation masks that delimit the exact tumor regions. Our proposed model achieved accuracies of 96.3% and 98.34% for segmentation and classification, respectively, demonstrating enhanced robustness compared to state-of-the-art approaches.
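A hedged sketch of plugging a DenseNet feature extractor into torchvision's Mask R-CNN, following the library's documented custom-backbone pattern. DenseNet-41 is a custom depth, so densenet121 stands in; the anchor settings, pooling sizes, and two-class (background + tumor) setup are assumptions.

```python
import torchvision
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

backbone = torchvision.models.densenet121(weights="IMAGENET1K_V1").features
backbone.out_channels = 1024  # channels of the final DenseNet feature map

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
box_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)
mask_roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=14, sampling_ratio=2)

# Background + tumor (assumed two-class setup); transfer learning then proceeds on
# annotated MRI slices with boxes and per-instance masks.
model = MaskRCNN(backbone, num_classes=2,
                 rpn_anchor_generator=anchor_generator,
                 box_roi_pool=box_roi_pool, mask_roi_pool=mask_roi_pool)
```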

8.
J Food Drug Anal ; 26(2): 887-902, 2018 04.
Article in English | MEDLINE | ID: mdl-29567261

ABSTRACT

The purpose of this study was to fabricate a triple-component nanocomposite system consisting of chitosan, polyethylene glycol (PEG), and drug in order to assess the application of chitosan-PEG nanocomposites in drug delivery and the effect of different molecular weights of PEG on nanocomposite characteristics. The casting/solvent evaporation method was used to prepare chitosan-PEG nanocomposite films incorporating piroxicam-β-cyclodextrin. To characterize the morphology and structure of the nanocomposites, X-ray diffraction, scanning electron microscopy, thermogravimetric analysis, and Fourier transform infrared spectroscopy were used. Drug content uniformity, swelling, water content, erosion, dissolution, and anti-inflammatory activity studies were also performed, and permeation across rat skin was assessed using a Franz diffusion cell. The release behavior of the films was found to be sensitive to the pH and ionic strength of the release medium. The maximum swelling ratio and water content were found in HCl buffer pH 1.2 as compared with acetate buffer pH 4.5 and phosphate buffer pH 7.4. The release rate constants obtained from kinetic modeling and the flux values from ex vivo permeation studies showed that the release of piroxicam-β-cyclodextrin increased with increasing PEG concentration. Formulation F10, containing 75% PEG, showed the highest swelling ratio (3.42±0.02) and water content (47.89±1.53%) in HCl buffer pH 1.2, the maximum cumulative drug permeation through rat skin (2405.15±10.97 µg/cm2) in phosphate buffer pH 7.4, and the highest in vitro drug release (35.51±0.26%) in sequential pH-change media, and showed a significantly (p<0.0001) greater anti-inflammatory effect (0.4 cm). It can be concluded from the results that film composition had a particular impact on drug release properties and that the molecular weight of PEG strongly influences swelling, drug release, and permeation rate. The developed films can therefore serve as a successful approach for localized drug delivery through the skin.
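An illustrative sketch (not the authors' analysis) of the kind of kinetic modeling the abstract refers to: fitting a Korsmeyer-Peppas release model, Q(t) = k·t^n, to a hypothetical cumulative-release profile with scipy.optimize.curve_fit. The time points and release percentages below are made up for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

time_h = np.array([0.5, 1, 2, 4, 6, 8, 12])                        # hours (hypothetical)
released = np.array([6.0, 9.5, 14.0, 20.5, 25.0, 28.5, 35.5])      # % drug released (hypothetical)


def korsmeyer_peppas(t, k, n):
    """Cumulative fraction released as a power law of time."""
    return k * t ** n


(k, n), _ = curve_fit(korsmeyer_peppas, time_h, released, p0=(5.0, 0.5))
print(f"release constant k = {k:.2f}, release exponent n = {n:.2f}")
# For thin films, n close to 0.5 suggests Fickian diffusion; larger n indicates anomalous transport.
```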


Subjects
Drug Delivery Systems/instrumentation, Nanocomposites/chemistry, Cutaneous Administration, Animals, Non-Steroidal Anti-Inflammatory Agents/administration & dosage, Non-Steroidal Anti-Inflammatory Agents/chemistry, Drug Carriers/chemistry, Drug Delivery Systems/methods, Male, Scanning Electron Microscopy, Piroxicam/administration & dosage, Piroxicam/chemistry, Polyethylene Glycols/chemistry, Rats, Sprague-Dawley Rats, X-Ray Diffraction, beta-Cyclodextrins/administration & dosage, beta-Cyclodextrins/chemistry