Results 1 - 9 of 9
1.
Biochem Cell Biol ; 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38306631

ABSTRACT

Currently used lung disease screening tools are expensive in both money and time, so chest radiograph images (CRIs) are employed for prompt and accurate COVID-19 identification. Recently, many researchers have applied deep learning (DL) models to detect COVID-19 automatically. However, these models tend to be computationally expensive and lack robustness, i.e., their performance degrades when they are evaluated on other datasets. This study proposes a trustworthy, robust, and lightweight network (ChestCovidNet) that can detect COVID-19 across various CRI datasets. The ChestCovidNet model has only 11 learned layers: eight convolutional (Conv) layers and three fully connected (FC) layers. The framework employs standard and group Conv layers, the Leaky ReLU activation function, a ShuffleNet unit, 3×3 and 1×1 Conv kernels to extract features at different scales, and two normalization procedures, cross-channel normalization and batch normalization. We used 9013 CRIs for training and 3863 CRIs for testing the proposed ChestCovidNet approach. Furthermore, we compared the classification results of the proposed framework with those of hybrid methods that employ DL frameworks for feature extraction and support vector machines (SVM) for classification. The findings demonstrate that the embedded low-power ChestCovidNet model performed well, achieving a classification accuracy of 98.12% and recall, F1-score, and precision of 95.75%.
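As a rough illustration of the kind of architecture this abstract describes, the following PyTorch sketch combines standard and group convolutions, Leaky ReLU, a ShuffleNet-style channel shuffle, 3×3 and 1×1 kernels, batch and cross-channel normalization, and three FC layers. Layer widths, group counts, and the input size are assumptions made for illustration, not the authors' published configuration.

```python
# Minimal sketch in the spirit of ChestCovidNet; all sizes are assumed.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # ShuffleNet-style shuffle: mix channels across convolution groups.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2)
    return x.reshape(n, c, h, w)

class ChestCovidNetSketch(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # 3x3 conv
            nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1),
            nn.LocalResponseNorm(5),                    # cross-channel normalization
        )
        self.block = nn.Sequential(
            nn.Conv2d(32, 64, 1),                       # 1x1 pointwise conv
            nn.LeakyReLU(0.1),
            nn.Conv2d(64, 64, 3, padding=1, groups=4),  # group conv
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.1),
        )
        self.head = nn.Sequential(                      # three FC layers
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128), nn.LeakyReLU(0.1),
            nn.Linear(128, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = self.stem(x)
        x = self.block(x)
        x = channel_shuffle(x, groups=4)
        return self.head(x)

logits = ChestCovidNetSketch()(torch.randn(1, 3, 224, 224))
```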

2.
Sensors (Basel) ; 22(23)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36501813

ABSTRACT

Gait-based gender classification is a challenging task since people may walk in different directions with varying speed, gait style, and occluded joints. The majority of studies in the literature have focused on gender-specific joints, while less attention has been paid to comparing all of the body's joints. To consider all of the joints, this work determines a person's gender from their gait captured with a Kinect sensor. This paper proposes a logistic-regression-based machine learning model that uses whole-body joints for gender classification. The proposed method consists of several phases, including gait feature extraction based on three-dimensional (3D) joint positions, feature selection, and classification of human gender. The Kinect sensor is used to extract 3D features of the different joints. Statistical tools such as Cronbach's alpha, correlation, the t-test, and ANOVA are used to select significant joints. Cronbach's alpha yields an average result of 99.74%, which indicates the reliability of the joints. Similarly, the correlation results indicate that there is a significant difference between male and female joints during gait. The t-test and ANOVA demonstrate that all twenty joints are statistically significant for gender classification, because the p-value for each joint is effectively zero (below 0.01). Finally, classification is performed on the selected features using a binary logistic regression model. A total of one hundred (100) volunteers participated in the experiments in a real-world scenario. The suggested method successfully classifies gender based on 3D features recorded in real time using a machine learning classifier, with an accuracy of 98.0% using all body joints. The proposed method outperforms existing systems, which mostly rely on digital images.


Subject(s)
Algorithms; Gait Disorders, Neurologic; Humans; Male; Female; Reproducibility of Results; Gait; Machine Learning; Joints
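A minimal sketch of the classification stage described in this abstract: 3D positions of 20 Kinect joints (60 features per subject) fed to a binary logistic regression model. The synthetic data, labels, and train/test split below are placeholders, not the study's recordings from 100 volunteers.

```python
# Illustrative gait-based gender classification with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20 * 3))     # 100 subjects x 20 joints x (x, y, z)
y = rng.integers(0, 2, size=100)       # 0 = female, 1 = male (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```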
3.
Sensors (Basel) ; 20(2)2020 Jan 07.
Article in English | MEDLINE | ID: mdl-31935996

ABSTRACT

Human face image analysis is an active research area within computer vision. In this paper we propose a framework for face image analysis that addresses three challenging problems, race, age, and gender recognition, through face parsing. We manually labeled face images to train an end-to-end face parsing model based on deep convolutional neural networks. The deep learning-based segmentation model parses a face image into seven dense classes. We used a probabilistic classification method to create probability maps for each face class, and these probability maps serve as feature descriptors. We then trained another convolutional neural network model for each demographic task (race, age, and gender) by extracting features from the probability maps of the corresponding classes. We performed extensive experiments on state-of-the-art datasets and obtained much better results than those previously reported.


Subject(s)
Algorithms; Deep Learning; Face/anatomy & histology; Neural Networks, Computer; Age Factors; Databases as Topic; Female; Humans; Image Processing, Computer-Assisted; Male; Racial Groups
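A toy sketch of the two-stage idea in this abstract: a small fully convolutional parser produces seven-class per-pixel probability maps via softmax, and a second CNN classifies a demographic attribute from those maps. Both networks are stand-ins with assumed sizes, not the authors' models.

```python
# Stage 1: face parsing into probability maps; stage 2: attribute CNN.
import torch
import torch.nn as nn

parser = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 7, 1),            # 7 dense face classes
    nn.Softmax(dim=1),              # per-pixel probability maps
)

gender_head = nn.Sequential(
    nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),               # e.g., male / female
)

face = torch.randn(1, 3, 128, 128)
prob_maps = parser(face)            # (1, 7, 128, 128) feature descriptors
logits = gender_head(prob_maps)
```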
4.
Entropy (Basel) ; 21(7)2019 Jun 30.
Article in English | MEDLINE | ID: mdl-33267361

ABSTRACT

Accurate face segmentation strongly benefits human face image analysis. In this paper we propose a unified framework for face image analysis through end-to-end semantic face segmentation. The proposed framework contains a stack of components for face understanding, including head pose estimation, age classification, and gender recognition. A manually labeled face dataset is used to train a Conditional Random Fields (CRFs) based segmentation model. The multi-class face segmentation framework developed with CRFs segments a facial image into six parts. A probabilistic classification strategy is used to generate probability maps for each class. The probability maps are used as feature descriptors, and a Random Decision Forest (RDF) classifier is modeled for each task (head pose, age, and gender). We assess the performance of the proposed framework on several datasets and report better results than those previously published.
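A hedged sketch of the second stage described above: per-pixel class probability maps (random placeholders standing in for the CRF outputs over six face classes) are flattened into feature descriptors and fed to a Random Decision Forest, with one classifier per task. Class counts and image sizes are assumptions.

```python
# Probability maps -> flattened descriptors -> random forest per task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_images, n_classes, h, w = 200, 6, 32, 32
prob_maps = rng.random((n_images, n_classes, h, w))  # stand-in CRF probability maps
pose_labels = rng.integers(0, 5, size=n_images)      # e.g., 5 head-pose bins (assumed)

X = prob_maps.reshape(n_images, -1)                  # flatten maps into descriptors
rdf = RandomForestClassifier(n_estimators=100, random_state=0)
rdf.fit(X[:150], pose_labels[:150])
print("held-out accuracy:", rdf.score(X[150:], pose_labels[150:]))
```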

5.
PeerJ Comput Sci ; 10: e1925, 2024.
Article in English | MEDLINE | ID: mdl-38660206

ABSTRACT

This article introduces a recognition system for handwritten text in the Pashto language, representing the first attempt to establish a baseline system on the Pashto Handwritten Text Imagebase (PHTI) dataset. The PHTI dataset was first pre-processed to eliminate unwanted characters and then divided into training (70%), validation (15%), and test (15%) sets. The proposed recognition system is based on multi-dimensional long short-term memory (MD-LSTM) networks. A comprehensive empirical analysis was conducted to determine the optimal parameters for the proposed MD-LSTM architecture, and comparative experiments were used to evaluate the proposed system against state-of-the-art models on the PHTI dataset. The novelty of the proposed model, compared to other state-of-the-art models, lies in its hidden layer sizes (10, 20, 80) and its Tanh layer sizes (20, 40). The system achieves a Character Error Rate (CER) of 20.77% as a baseline on the test set. The top 20 confusions are reported to examine the performance and limitations of the proposed model. The results highlight the complications the Pashto language faces in the digital transition and point to directions for future work.
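The core component named in this abstract is an MD-LSTM. As a rough sketch of what a two-dimensional LSTM cell does, the following PyTorch code updates the state at each grid position from the states above and to the left of it. The shared gate layer, hidden size, and single scan direction are simplifications for illustration, not the authors' architecture.

```python
# Simplified 2D multi-dimensional LSTM cell over an image grid.
import torch
import torch.nn as nn

class MDLSTMCell2D(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        # input + two predecessor states -> 5 gates (in, 2 forgets, out, cell)
        self.gates = nn.Linear(in_dim + 2 * hidden_dim, 5 * hidden_dim)

    def forward(self, x):                       # x: (H, W, in_dim)
        H, W, _ = x.shape
        zero = x.new_zeros(self.hidden_dim)
        h = [[zero] * (W + 1) for _ in range(H + 1)]
        c = [[zero] * (W + 1) for _ in range(H + 1)]
        for i in range(1, H + 1):
            for j in range(1, W + 1):
                z = torch.cat([x[i - 1, j - 1], h[i - 1][j], h[i][j - 1]])
                ig, f1, f2, og, g = self.gates(z).chunk(5)
                ig, f1, f2, og = map(torch.sigmoid, (ig, f1, f2, og))
                c[i][j] = f1 * c[i - 1][j] + f2 * c[i][j - 1] + ig * torch.tanh(g)
                h[i][j] = og * torch.tanh(c[i][j])
        return torch.stack([torch.stack(row[1:]) for row in h[1:]])  # (H, W, hidden)

features = MDLSTMCell2D(in_dim=1, hidden_dim=20)(torch.randn(8, 32, 1))
```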

6.
PeerJ Comput Sci ; 10: e2163, 2024.
Article in English | MEDLINE | ID: mdl-39314685

ABSTRACT

Generating poetry using machine and deep learning techniques has been a challenging and exciting research topic in recent years, with significance for natural language processing and computational linguistics. This study introduces an innovative approach to generating high-quality Pashto poetry by leveraging two pre-trained transformer models, LaMini-Cerebras-590M and bloomz-560m. The models were trained on an extensive new, high-quality Pashto poetry dataset to learn its underlying complex patterns and structures. The trained models are then used to generate new Pashto poetry when given a seed text or prompt. To evaluate the quality of the generated poetry, we conducted both subjective and objective evaluations, including human evaluation. The experimental results demonstrate that the proposed approach can generate Pashto poetry comparable in quality to human-written poetry. The study makes a valuable contribution to Pashto language processing and poetry generation and has potential applications in natural language processing and computational linguistics.
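A hedged sketch of prompt-based generation with one of the pre-trained models mentioned above, using the Hugging Face transformers library. The repository id "bigscience/bloomz-560m", the placeholder prompt, and the decoding settings are assumptions for illustration; the study trains the models on its own Pashto poetry dataset before generating.

```python
# Seed-text generation with a pre-trained causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloomz-560m"            # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "..."                                 # a Pashto seed verse or prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,                            # sampling gives more varied verse
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```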

7.
PeerJ Comput Sci ; 10: e2089, 2024.
Article in English | MEDLINE | ID: mdl-39145223

ABSTRACT

Layout analysis is the main component of a typical Document Image Analysis (DIA) system and plays an important role in pre-processing. However, Pashto document images have not been explored so far. This research, for the first time, examines Pashto text along with graphics and proposes a deep learning-based classifier that can detect Pashto text and graphics in each document. Another notable contribution of this research is the creation of a real dataset containing more than 1,000 images of Pashto documents captured with a camera. On this dataset we applied a convolutional neural network (CNN) based deep learning technique. The proposed method is built on the Single-Shot Detector (SSD), an advanced alternative to the classical Faster R-CNN. The evaluation was performed on 300 images from the test set, achieving a mean average precision (mAP) of 84.90%.
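An illustrative sketch of an SSD-style detector configured for the two document classes this abstract describes (text and graphics, plus background), using torchvision's SSD300-VGG16 as a stand-in. The class mapping, image size, and single training step are assumptions, not the authors' exact setup.

```python
# SSD detector for Pashto document layout (text vs. graphics regions).
import torch
from torchvision.models.detection import ssd300_vgg16

# 3 classes: 0 = background, 1 = text region, 2 = graphics region (assumed mapping)
model = ssd300_vgg16(weights=None, num_classes=3)
model.train()

images = [torch.rand(3, 300, 300)]                        # placeholder document image
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 260.0, 120.0]]),  # xmin, ymin, xmax, ymax
    "labels": torch.tensor([1]),                          # 1 = text (assumed)
}]
loss_dict = model(images, targets)                        # classification/box losses
print({k: float(v) for k, v in loss_dict.items()})
```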

8.
Front Plant Sci ; 13: 1064854, 2022.
Article in English | MEDLINE | ID: mdl-36507379

ABSTRACT

Bacteriosis is one of the most prevalent and deadly infections affecting peach crops globally. Timely detection of Bacteriosis is essential for lowering pesticide use and preventing crop loss, yet distinguishing and detecting Bacteriosis, or shot hole, in a peach leaf takes time and effort. In this paper, we propose a novel lightweight convolutional neural network (CNN) model, LWNet, based on the Visual Geometry Group (VGG-19) architecture for classifying images as Bacteriosis or healthy. The deep knowledge learned by the proposed model is used to detect Bacteriosis in peach leaf images. First, a dataset was developed consisting of 10,000 images: 4,500 Bacteriosis and 5,500 healthy images. Second, the images were preprocessed in several steps to prepare them for the identification of Bacteriosis and healthy leaves; these steps include image resizing, noise removal, image enhancement, background removal, and augmentation, which improve the performance of leaf classification and help achieve good results. Finally, the proposed LWNet model is trained for leaf classification. The proposed model is compared with four CNN models: LeNet, AlexNet, VGG-16, and the plain VGG-19 model. It obtains an accuracy of 99%, higher than LeNet, AlexNet, VGG-16, and the plain VGG-19. The achieved results indicate that the proposed model is more effective than the existing models for detecting Bacteriosis in peach leaf images.
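A minimal sketch of a VGG-19-based binary classifier for Bacteriosis versus healthy peach leaves. Freezing the convolutional backbone and attaching a small two-unit head is an assumption made to keep the example lightweight; it is not the authors' exact LWNet configuration.

```python
# VGG-19 backbone with a replaced binary classification head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg19(weights=None)         # VGG-19 feature extractor
for p in backbone.features.parameters():
    p.requires_grad = False                   # train only the small head

backbone.classifier = nn.Sequential(          # replace the 1000-class head
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 2),                        # Bacteriosis vs. healthy
)

logits = backbone(torch.randn(1, 3, 224, 224))
print(logits.shape)                           # torch.Size([1, 2])
```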

9.
Front Plant Sci ; 13: 1095547, 2022.
Article in English | MEDLINE | ID: mdl-36589071

ABSTRACT

Plants are the primary source of food for the world's population. Plant diseases can cause yield loss, which can be mitigated by continual monitoring; however, monitoring plant diseases manually is difficult and error-prone. Using computer vision and artificial intelligence (AI) for the early identification of plant illnesses can prevent the negative consequences of diseases at the very beginning and overcome the limitations of continuous manual monitoring. This research focuses on developing an automatic system capable of segmenting leaf lesions and detecting disease without human intervention. For lesion region segmentation, we propose a context-aware 3D Convolutional Neural Network (CNN) model based on the CANet architecture that accounts for the ambiguity of lesion placement among the subregions of a plant leaf image. A deep CNN is then employed to recognize the subtype of leaf lesion from the segmented lesion area. Finally, the plant's survival is predicted using a hybrid method combining a CNN and linear regression. To evaluate the efficacy and effectiveness of the proposed disease detection and survival prediction scheme, we used the Plant Village Benchmark Dataset, which is composed of photos of plant leaves affected by specific diseases. The segmentation model is evaluated with the Dice and IoU metrics: the proposed lesion segmentation model achieved an average accuracy of 92% with an IoU of 90%, while the lesion subtype recognition model achieves accuracies of 91.11%, 93.01%, and 99.04% for pepper, potato, and tomato plants, respectively. The higher accuracy of the proposed model indicates that it can be used for real-time disease detection on unmanned aerial vehicles, and offline to provide crop health updates and reduce the risk of low yield.
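Since this abstract reports segmentation quality with Dice and IoU, here is a small sketch of both metrics computed on binary lesion masks. The example masks are placeholders, not data from the Plant Village dataset.

```python
# Dice and IoU for binary segmentation masks.
import numpy as np

def dice(pred, gt, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt   = np.zeros((64, 64), dtype=bool); gt[12:42, 12:42] = True
print(f"Dice = {dice(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```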
