Results 1 - 7 of 7
1.
PeerJ Comput Sci ; 10: e1925, 2024.
Article in English | MEDLINE | ID: mdl-38660206

ABSTRACT

This article introduces a recognition system for handwritten text in the Pashto language, representing the first attempt to establish a baseline system using the Pashto Handwritten Text Imagebase (PHTI) dataset. The PHTI dataset was first pre-processed to eliminate unwanted characters and then divided into training (70%), validation (15%), and test (15%) sets. The proposed recognition system is based on multi-dimensional long short-term memory (MD-LSTM) networks. A comprehensive empirical analysis was conducted to determine the optimal parameters for the proposed MD-LSTM architecture, and comparative experiments were used to evaluate the performance of the proposed system against state-of-the-art models on the PHTI dataset. The novelty of the proposed model, compared to other state-of-the-art models, lies in its hidden layer sizes (10, 20, 80) and its Tanh layer sizes (20, 40). The system achieves a Character Error Rate (CER) of 20.77% as a baseline on the test set. The top 20 confusions are reported to assess the performance and limitations of the proposed model. The results highlight the challenges of, and future prospects for, the digital transition of the Pashto language.
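As a rough illustration of the evaluation metric reported above, the following minimal Python sketch computes a Character Error Rate as edit distance over reference length; the example strings are placeholders, not PHTI data.

def edit_distance(pred: str, ref: str) -> int:
    """Levenshtein distance between two character sequences."""
    m, n = len(pred), len(ref)
    dp = list(range(n + 1))            # dp[j] = distance between pred[:i] and ref[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution
            prev = cur
    return dp[n]

def character_error_rate(predictions, references) -> float:
    """CER over a test set: total character edits / total reference characters."""
    edits = sum(edit_distance(p, r) for p, r in zip(predictions, references))
    chars = sum(len(r) for r in references)
    return edits / chars if chars else 0.0

print(character_error_rate(["hallo"], ["hello"]))   # 0.2, i.e., a CER of 20%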

2.
Biochem Cell Biol ; 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38306631

ABSTRACT

Currently used lung disease screening tools are expensive in terms of both money and time. Therefore, chest radiograph images (CRIs) are employed for prompt and accurate COVID-19 identification. Recently, many researchers have applied deep learning (DL) based models to detect COVID-19 automatically. However, these models tend to be computationally expensive and less robust, i.e., their performance degrades when evaluated on other datasets. This study proposes a trustworthy, robust, and lightweight network (ChestCovidNet) that can detect COVID-19 by examining various CRI datasets. The ChestCovidNet model has only 11 learned layers: eight convolutional (Conv) layers and three fully connected (FC) layers. The framework employs both standard and group Conv layers, the Leaky ReLU activation function, a ShuffleNet unit, Conv kernels of 3×3 and 1×1 to extract features at different scales, and two normalization procedures: cross-channel normalization and batch normalization. We used 9013 CRIs for training and 3863 CRIs for testing the proposed ChestCovidNet approach. Furthermore, we compared the classification results of the proposed framework with hybrid methods in which DL frameworks were employed for feature extraction and support vector machines (SVM) for classification. The study's findings demonstrate that the embedded low-power ChestCovidNet model works well, achieving a classification accuracy of 98.12% and recall, F1-score, and precision of 95.75%.
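The abstract does not include the network definition, so the PyTorch sketch below is only an assumed approximation of the described layout: eight standard and grouped convolutional layers with 3×3 and 1×1 kernels, Leaky ReLU, batch and cross-channel (local response) normalization, a ShuffleNet-style channel shuffle, and three fully connected layers. The exact widths, ordering, and number of output classes are guesses, not the authors' ChestCovidNet.

import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle: interleave channels across groups."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class ChestCovidNetSketch(nn.Module):
    def __init__(self, num_classes=3):          # number of classes is an assumption
        super().__init__()
        def block(cin, cout, k, groups=1):
            return nn.Sequential(
                nn.Conv2d(cin, cout, k, padding=k // 2, groups=groups),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(0.1),
            )
        # convolutional layers 1-4 (standard, 1x1, and grouped convolutions)
        self.stage1 = nn.Sequential(
            block(1, 32, 3), nn.LocalResponseNorm(5), nn.MaxPool2d(2),
            block(32, 64, 3), nn.MaxPool2d(2),
            block(64, 64, 1),
            block(64, 128, 3, groups=4),
        )
        # convolutional layers 5-8
        self.stage2 = nn.Sequential(
            block(128, 128, 1),
            block(128, 256, 3, groups=4), nn.MaxPool2d(2),
            block(256, 256, 1),
            block(256, 256, 3), nn.AdaptiveAvgPool2d(1),
        )
        # three fully connected layers
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(256, 128), nn.LeakyReLU(0.1),
            nn.Linear(128, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = self.stage1(x)
        x = channel_shuffle(x, groups=4)   # ShuffleNet-style channel mixing
        x = self.stage2(x)
        return self.classifier(x)

x = torch.randn(1, 1, 224, 224)            # dummy grayscale chest radiograph
print(ChestCovidNetSketch()(x).shape)      # torch.Size([1, 3])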

3.
Front Plant Sci ; 13: 1064854, 2022.
Article in English | MEDLINE | ID: mdl-36507379

ABSTRACT

Bacteriosis is one of the most prevalent and deadly infections that affect peach crops globally. Timely detection of Bacteriosis is essential for lowering pesticide use and preventing crop loss. Distinguishing and detecting Bacteriosis, or shot hole, in a peach leaf takes time and effort. In this paper, we propose a novel lightweight Convolutional Neural Network (CNN) model, LWNet, based on the Visual Geometry Group (VGG-19) network for detecting and classifying images as Bacteriosis or healthy. The proposed model is used to detect Bacteriosis in peach leaf images. First, a dataset is developed consisting of 10,000 images: 4,500 Bacteriosis images and 5,500 healthy images. Second, the images are preprocessed in several steps to prepare them for the identification of Bacteriosis and healthy leaves. These preprocessing steps include image resizing, noise removal, image enhancement, background removal, and augmentation techniques, which improve the leaf classification performance and help to achieve a good result. Finally, the proposed LWNet model is trained for leaf classification. The proposed model is compared with four other CNN models: LeNet, AlexNet, VGG-16, and the plain VGG-19 model. The proposed model obtains an accuracy of 99%, which is higher than LeNet, AlexNet, VGG-16, and the plain VGG-19 model. The achieved results indicate that the proposed model is more effective for the detection of Bacteriosis in peach leaf images than the existing models.
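A minimal Python sketch of the general setup the abstract describes, assuming a standard torchvision VGG-19 backbone with its last layer replaced for the two-class Bacteriosis-vs-healthy problem; the authors' exact lightweight (LWNet) modifications and preprocessing parameters are not reproduced here, and the transform values below are placeholders.

import torch.nn as nn
from torchvision import models, transforms

# Preprocessing in the spirit of the steps listed above (resizing, enhancement,
# augmentation); noise and background removal are omitted in this sketch.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

model = models.vgg19()                    # pass weights=... to reuse ImageNet features
model.classifier[6] = nn.Linear(4096, 2)  # two outputs: Bacteriosis vs. healthy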

4.
Sensors (Basel) ; 22(23)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36501813

ABSTRACT

Gait-based gender classification is a challenging task because people may walk in different directions with varying speed, gait style, and occluded joints. The majority of research studies in the literature focus on gender-specific joints, while less attention has been paid to comparing all of a body's joints. To consider all of the joints, this work determines a person's gender from their gait using a Kinect sensor. This paper proposes a logistic-regression-based machine learning model that uses whole-body joints for gender classification. The proposed method consists of several phases, including gait feature extraction based on three-dimensional (3D) joint positions, feature selection, and classification of human gender. The Kinect sensor is used to extract 3D features of different joints. Statistical tools such as Cronbach's alpha, correlation, the t-test, and ANOVA are used to select significant joints. The Cronbach's alpha technique yields an average result of 99.74%, which indicates the reliability of the joints. Similarly, the correlation results indicate that there is a significant difference between male and female joints during gait. The t-test and ANOVA approaches demonstrate that all twenty joints are statistically significant for gender classification, because the p-value for each joint is effectively zero and less than 1%. Finally, classification is performed on the selected features using a binary logistic regression model. A total of one hundred (100) volunteers participated in experiments in a real-world scenario. The suggested method successfully classifies gender from 3D features recorded in real time, achieving an accuracy of 98.0% with a machine learning classifier using all body joints. The proposed method outperforms existing systems, which mostly rely on digital images.
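A hedged Python sketch of the statistical screening and logistic-regression steps outlined above, using SciPy and scikit-learn; the feature matrix is random placeholder data standing in for the Kinect joint recordings, and the selection rule is simplified.

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20 * 3))     # 100 subjects x (20 joints x 3D coordinates)
y = rng.integers(0, 2, size=100)       # placeholder gender labels

# Cronbach's alpha as a rough reliability check on the joint features
k = X.shape[1]
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Per-feature t-test between the two gender groups; the paper keeps joints whose
# p-value is below 0.01, here we simply keep the ten most significant features.
_, p_val = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
selected = np.argsort(p_val)[:10]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Cronbach's alpha:", round(alpha, 3), "| test accuracy:", clf.score(X_te, y_te))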


Subjects
Algorithms; Gait Disorders, Neurologic; Humans; Male; Female; Reproducibility of Results; Gait; Machine Learning; Joints
5.
Front Plant Sci ; 13: 1095547, 2022.
Article in English | MEDLINE | ID: mdl-36589071

ABSTRACT

Plants are the primary source of food for the world's population. Diseases in plants can cause yield loss, which can be mitigated by continual monitoring. Monitoring plant diseases manually is difficult and prone to errors. Using computer vision and artificial intelligence (AI) for the early identification of plant illnesses can prevent the negative consequences of diseases at the very beginning and overcome the limitations of continuous manual monitoring. This research focuses on the development of an automatic system capable of performing segmentation of leaf lesions and detection of disease without requiring human intervention. For lesion region segmentation, we propose a context-aware 3D Convolutional Neural Network (CNN) model based on the CANet architecture that accounts for the ambiguity of lesion placement in the subregions of a plant leaf image. A deep CNN is employed to recognize the subtype of leaf lesion from the segmented lesion area. Finally, the plant's survival is predicted using a hybrid method combining a CNN and linear regression. To evaluate the efficacy and effectiveness of the proposed plant disease detection scheme and survival prediction, we used the Plant Village benchmark dataset, which is composed of numerous photos of plant leaves affected by particular diseases. The segmentation model's performance for plant leaf lesion segmentation is evaluated using the Dice and IoU metrics. The proposed lesion segmentation model achieved an average accuracy of 92% with an IoU of 90%, while the lesion subtype recognition model achieves accuracies of 91.11%, 93.01%, and 99.04% for pepper, potato, and tomato plants, respectively. The high accuracy of the proposed model indicates that it can be used for real-time disease detection in unmanned aerial vehicles and offline to provide crop health updates and reduce the risk of low yield.
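A minimal Python sketch of the Dice and IoU overlap metrics used above to evaluate the lesion segmentation, assuming binary masks; the toy arrays stand in for real predicted and ground-truth masks, not Plant Village data.

import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """pred, target: 0/1 arrays of the same shape (binary lesion masks)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_iou(pred, gt))   # approximately (0.667, 0.5)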

6.
Sensors (Basel) ; 20(2)2020 Jan 07.
Article in English | MEDLINE | ID: mdl-31935996

ABSTRACT

Human face image analysis is an active research area within computer vision. In this paper we propose a framework for face image analysis, addressing the three challenging problems of race, age, and gender recognition through face parsing. We manually labeled face images for training an end-to-end face parsing model based on deep convolutional neural networks. The deep learning-based segmentation model parses a face image into seven dense classes. We used a probabilistic classification method and created probability maps for each face class. The probability maps are used as feature descriptors. We trained another convolutional neural network model by extracting features from the probability maps of the corresponding classes for each demographic task (race, age, and gender). We performed extensive experiments on state-of-the-art datasets and obtained much better results than those previously reported.
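A small Python sketch of the probability-map step described above: per-pixel class scores from a face-parsing network are turned into per-class probability maps, which then serve as feature descriptors for the task-specific CNNs. The random scores below are placeholders for real network output.

import numpy as np

def probability_maps(logits: np.ndarray) -> np.ndarray:
    """logits: (num_classes, H, W) raw scores -> per-class softmax probability maps."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=0, keepdims=True)

logits = np.random.randn(7, 128, 128)   # seven dense face classes
maps = probability_maps(logits)         # (7, 128, 128), sums to 1 at every pixel
# Each map, e.g., maps[k], can be fed as one input channel to a task-specific CNN.
print(maps.shape, maps.sum(axis=0).round(3).min())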


Subjects
Algorithms; Deep Learning; Face/anatomy & histology; Neural Networks, Computer; Age Factors; Databases as Topic; Female; Humans; Image Processing, Computer-Assisted; Male; Racial Groups
7.
Entropy (Basel) ; 21(7)2019 Jun 30.
Article in English | MEDLINE | ID: mdl-33267361

ABSTRACT

Accurate face segmentation strongly benefits human face image analysis. In this paper we propose a unified framework for face image analysis through end-to-end semantic face segmentation. The proposed framework contains a set of stacked components for face understanding, including head pose estimation, age classification, and gender recognition. A manually labeled face dataset is used for training the Conditional Random Fields (CRFs) based segmentation model. A multi-class face segmentation framework developed with CRFs segments a facial image into six parts. A probabilistic classification strategy is used, and probability maps are generated for each class. The probability maps are used as feature descriptors, and a Random Decision Forest (RDF) classifier is modeled for each task (head pose, age, and gender). We assess the performance of the proposed framework on several datasets and report better results than those previously reported.
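A hedged Python sketch of the final classification stage described above: a Random Decision Forest trained on descriptors derived from the six per-class probability maps. The mean-pooling descriptor and the random placeholder data are assumptions for illustration only, not the authors' feature design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_classes, h, w = 200, 6, 64, 64
prob_maps = rng.random((n_samples, n_classes, h, w))   # six face-segmentation classes

# A simple descriptor: mean probability of each class over the image
features = prob_maps.mean(axis=(2, 3))                 # shape (200, 6)
gender = rng.integers(0, 2, size=n_samples)            # placeholder task labels

rdf = RandomForestClassifier(n_estimators=100, random_state=0)
rdf.fit(features, gender)                              # one RDF per task (head pose, age, gender)
print("training accuracy:", rdf.score(features, gender))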
