Results 1 - 6 of 6
1.
Sensors (Basel) ; 23(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38005455

ABSTRACT

Sign language recognition, an essential interface between the hearing and deaf-mute communities, faces challenges with high false positive rates and computational costs, even with the use of advanced deep learning techniques. Our proposed solution is a stacked encoded model, combining artificial intelligence (AI) with the Internet of Things (IoT), which refines feature extraction and classification to overcome these challenges. We leverage a lightweight backbone model for preliminary feature extraction and use stacked autoencoders to further refine these features. Our approach harnesses the scalability of big data, showing notable improvements in accuracy, precision, recall, F1-score, and computational complexity. Our model's effectiveness is demonstrated through testing on the ArSL2018 benchmark dataset, showcasing superior performance compared to state-of-the-art approaches. Additional validation through an ablation study with pre-trained convolutional neural network (CNN) models affirms our model's efficacy across all evaluation metrics. Our work paves the way for the sustainable development of high-performing, IoT-based sign-language-recognition applications.
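As an illustration of the pipeline this abstract describes (lightweight backbone features refined by stacked autoencoders before classification), the following minimal PyTorch sketch assumes a MobileNetV2 backbone, a two-layer encoder/decoder, and a 32-class output; none of these choices are taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class StackedAutoencoderClassifier(nn.Module):
    """Illustrative sketch: lightweight backbone -> stacked autoencoder -> classifier."""
    def __init__(self, num_classes=32, feat_dim=1280, code_dims=(512, 128)):
        super().__init__()
        # Lightweight backbone for preliminary feature extraction (assumed: MobileNetV2).
        backbone = models.mobilenet_v2(weights=None)
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Stacked autoencoder that progressively compresses the backbone features.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, code_dims[0]), nn.ReLU(),
            nn.Linear(code_dims[0], code_dims[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dims[1], code_dims[0]), nn.ReLU(),
            nn.Linear(code_dims[0], feat_dim),
        )
        # Classifier operates on the refined (encoded) representation.
        self.classifier = nn.Linear(code_dims[1], num_classes)  # assumed class count

    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)  # preliminary features
        z = self.encoder(f)                         # refined features
        recon = self.decoder(z)                     # reconstruction target for AE training
        return self.classifier(z), recon, f

model = StackedAutoencoderClassifier()
logits, recon, feats = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 32])
```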


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Machine Learning , Sign Language , Neural Networks, Computer
2.
Sensors (Basel) ; 22(23)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36502084

ABSTRACT

The study of automated video surveillance systems using computer vision techniques is a hot research topic, and such systems have been deployed in many real-world CCTV environments. Current systems focus mainly on achieving higher accuracy, while assisting surveillance experts with effective data analysis and instant decision making through efficient computer vision algorithms still needs researchers' attention. In this research, to the best of our knowledge, we are the first to introduce a process control technique, control charts, for surveillance video data analysis. The control chart concept is merged with a novel deep learning-based violence detection framework. Unlike existing methods, the proposed technique considers both the spatial information and the temporal representations of the input video data to detect human violence. The spatial information is fused with the temporal dimension of the deep learning model using a multi-scale strategy to ensure that the temporal information is properly supported by the spatial representations at multiple levels. The proposed framework's results are kept in the history-maintaining module of the control charts to validate the level of risk in the live input surveillance video. Detailed experimental results on existing datasets and real-world video data demonstrate that the proposed approach is a prominent solution for automated surveillance with pre- and post-analysis of violent events.
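The control-chart component can be illustrated separately from the deep violence-detection model. The sketch below applies a generic individuals-style chart with 3-sigma limits to a stream of per-clip violence scores; the history size, limits, and warm-up threshold are standard control-chart conventions assumed for illustration, not values from the paper.

```python
import numpy as np

class ViolenceControlChart:
    """Illustrative sketch: a control chart over violence scores produced by some
    upstream spatiotemporal violence-detection model."""
    def __init__(self, history_size=500, sigma=3.0):
        self.history = []            # history-maintaining module
        self.history_size = history_size
        self.sigma = sigma

    def update(self, score):
        """Add a new score and report whether it breaches the upper control limit."""
        if len(self.history) >= 30:  # require some history before judging risk
            mean = float(np.mean(self.history))
            std = float(np.std(self.history)) + 1e-8
            ucl = mean + self.sigma * std   # upper control limit
            alarm = score > ucl
        else:
            alarm = False
        self.history.append(score)
        self.history = self.history[-self.history_size:]
        return alarm

chart = ViolenceControlChart()
scores = np.concatenate([np.random.uniform(0.0, 0.2, 200), [0.95]])  # spike = violent clip
alarms = [chart.update(s) for s in scores]
print(alarms[-1])  # True: the spike exceeds the control limit
```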


Subject(s)
Algorithms , Humans
3.
Diagnostics (Basel) ; 12(12)2022 Nov 28.
Article in English | MEDLINE | ID: mdl-36552983

ABSTRACT

The abnormal growth of cells in the skin causes two types of tumor: benign and malignant. Oncologists use various methods, such as imaging and biopsies, to assess the presence of skin cancer, but these are time-consuming and require extra human effort. Researchers have developed automated methods based on hand-crafted feature extraction from skin images; however, these methods may fail to detect skin cancer at an early stage when tested on unseen data. Therefore, in this study, a novel and robust skin cancer detection model was proposed based on feature fusion. First, the proposed model pre-processed the images with a GF filter to remove noise. Second, features were extracted manually using local binary patterns (LBP) and automatically using Inception V3. In addition, an Adam optimizer was used to adjust the learning rate. Finally, an LSTM network was applied to the fused features to classify skin cancer as malignant or benign. The proposed system combines the benefits of both ML- and DL-based algorithms. We used the DermIS skin lesion dataset, available on the Kaggle website, which consists of 1000 images: 500 benign and 500 malignant. The proposed methodology attained 99.4% accuracy, 98.7% precision, 98.66% recall, and a 98% F-score. We compared the performance of our feature fusion-based method with existing segmentation-based and DL-based techniques, and we cross-validated the proposed model on 1000 images from the International Skin Imaging Collaboration (ISIC), attaining 98.4% detection accuracy. The results show that our method outperforms existing techniques.
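A minimal sketch of the feature-fusion idea follows: hand-crafted LBP histograms are concatenated with Inception V3 deep features, and the fused vector is fed to an LSTM classifier. The histogram size, the reshape of the fused vector into a two-step sequence, and the LSTM width are assumptions for illustration, and the GF pre-processing step is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0, bins=10):
    """Hand-crafted feature: uniform LBP histogram of a grayscale image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
    return torch.tensor(hist, dtype=torch.float32)

# Deep features from Inception V3 with its classification head removed (assumed setup).
inception = models.inception_v3(weights=None)
inception.fc = nn.Identity()
inception.eval()

def fused_features(rgb):
    """Concatenate deep (2048-d) and hand-crafted LBP (10-d) features for one image."""
    x = torch.tensor(rgb, dtype=torch.float32).permute(2, 0, 1)[None] / 255.0
    with torch.no_grad():
        deep = inception(x)[0]                                # (2048,)
    gray = rgb.mean(axis=-1).astype(np.uint8)
    return torch.cat([deep, lbp_histogram(gray)])             # (2058,)

class FusionLSTM(nn.Module):
    """Illustrative classifier: fused vector reshaped into a short sequence for an LSTM."""
    def __init__(self, steps=2, width=1029, hidden=64):
        super().__init__()
        self.steps, self.width = steps, width
        self.lstm = nn.LSTM(width, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # benign vs. malignant

    def forward(self, fused):
        seq = fused.view(-1, self.steps, self.width)
        _, (h, _) = self.lstm(seq)
        return self.out(h[-1])

feats = fused_features(np.random.randint(0, 255, (299, 299, 3), dtype=np.uint8))
print(FusionLSTM()(feats[None]).shape)   # torch.Size([1, 2])
```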

4.
Sensors (Basel) ; 22(24)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36560117

ABSTRACT

Insect pests and crop diseases are major problems for agricultural production, because the severity and extent of their occurrence cause significant crop losses. To increase agricultural production, it is important to protect crops from harmful pests, which is possible via soft computing techniques. Soft computing techniques are based on traditional machine learning and deep learning approaches. However, in traditional methods the manual feature extraction mechanisms are ineffective, inefficient, and time-consuming, while deep learning techniques are computationally expensive and require a large amount of training data. In this paper, we propose an efficient pest detection method that accurately localizes pests and classifies them according to their class label. In the proposed work, we modify the YOLOv5s model in several ways, such as extending the cross stage partial network (CSP) module, improving the selective kernel (SK) in the attention module, and modifying the multiscale feature extraction mechanism, which plays a significant role in the detection and classification of both small and large pests in an image. To validate the model's performance, we develop a medium-scale pest detection dataset that includes five of the most harmful pests for agricultural products: ants, grasshoppers, palm weevils, shield bugs, and wasps. To check the model's effectiveness, we compare the results of the proposed model with several variants of the YOLOv5 model, and the proposed model achieves the best results in the experiments. Thus, the proposed model has the potential to be applied in real-world applications and to further motivate research on pest detection to increase agricultural production.
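The paper's CSP, selective-kernel, and multiscale modifications are not spelled out in this abstract, so the sketch below only shows the unmodified YOLOv5s baseline being loaded through the ultralytics/yolov5 torch.hub entry point and run on a stand-in image; retraining on the custom five-class pest dataset would go through the repository's training script with a dataset YAML listing those classes.

```python
import numpy as np
import torch

# Baseline only: load the unmodified YOLOv5s detector via the ultralytics/yolov5
# torch.hub entry point. The paper's CSP, selective-kernel attention, and multiscale
# modifications are not reproduced here.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25   # confidence threshold for reported detections

# The five pest categories named in the abstract (class indices after retraining
# would follow the order given in the dataset YAML).
PEST_CLASSES = ["ant", "grasshopper", "palm weevil", "shield bug", "wasp"]

# Run inference on an image; a random array stands in for a real field photo here.
image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
results = model(image)   # results.xyxy[0] rows: (x1, y1, x2, y2, conf, cls)
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"class={int(cls)} conf={conf:.2f} box={[round(v, 1) for v in box]}")
```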


Subject(s)
Ants , Grasshoppers , Animals , Benchmarking , Agriculture/methods , Insecta
5.
Sensors (Basel) ; 22(11)2022 May 25.
Article in English | MEDLINE | ID: mdl-35684627

ABSTRACT

Skin Cancer (SC) is considered the deadliest disease in the world, killing thousands of people every year. Early SC detection can increase the survival rate for patients up to 70%, hence it is highly recommended that regular head-to-toe skin examinations be conducted to determine whether there are any signs or symptoms of SC. Machine Learning (ML)-based methods are having a significant impact on the classification and detection of SC diseases. However, accurate classification of these diseases faces certain challenges, such as lower detection accuracy, poor generalization of the models, and an insufficient amount of labeled data for training. To address these challenges, in this work we developed a two-tier framework for the accurate classification of SC. In the first tier, we applied different data augmentation methods to increase the number of image samples for effective training. In the second tier, given the promising performance of the Medical Vision Transformer (MVT) in the analysis of medical images, we developed an MVT-based classification model for SC. The MVT splits the input image into patches and feeds these patches to the transformer as a sequence, similar to word embeddings. Finally, a Multi-Layer Perceptron (MLP) classifies the input image into the corresponding class. Based on the experimental results achieved on the Human Against Machine (HAM10000) dataset, we concluded that the proposed MVT-based model achieves better results than current state-of-the-art techniques for SC classification.
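A minimal sketch of the transformer pipeline described above: the image is split into patches, embedded as a token sequence, encoded by a transformer, and classified by an MLP head. The patch size, embedding width, and depth below are illustrative assumptions rather than the paper's MVT configuration; only the seven-class output reflects HAM10000's seven lesion categories.

```python
import torch
import torch.nn as nn

class TinyViTClassifier(nn.Module):
    """Illustrative patch-transformer classifier: patches -> token sequence -> MLP head."""
    def __init__(self, img=224, patch=16, dim=192, depth=4, heads=3, num_classes=7):
        super().__init__()
        n_patches = (img // patch) ** 2
        # Patch embedding: a strided convolution turns each 16x16 patch into one token.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # MLP head assigns the final class (HAM10000 has 7 lesion categories).
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_classes))

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, tokens], dim=1) + self.pos_embed)
        return self.head(z[:, 0])                                 # classify via CLS token

logits = TinyViTClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 7])
```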


Subject(s)
Neural Networks, Computer , Skin Neoplasms , Early Detection of Cancer/methods , Humans , Machine Learning , Skin Neoplasms/diagnosis
6.
Biomolecules ; 13(1)2022 12 29.
Article in English | MEDLINE | ID: mdl-36671456

ABSTRACT

Enhancers are sequences with short motifs that exhibit high positional variability and free-scattering properties. Identifying these noncoding DNA fragments and their strength is extremely important because they play a key role in controlling gene regulation at the cellular level. Identifying enhancers is more complex than identifying other factors in the genome because they are freely scattered and their location varies widely. In recent years, bioinformatics tools have enabled significant progress on this biological problem. However, cell line-specific screening is not possible using existing computational methods based solely on DNA sequences. The chromatin accessibility of a DNA segment may provide useful information about its potential regulatory function, so regulatory elements can be identified on the basis of chromatin accessibility. In chromatin, the entanglement structure allows positions far apart in the sequence to encounter each other, regardless of their proximity to the gene being acted upon. Thus, identifying enhancers and assessing their strength is difficult and time-consuming. The goal of our work was to overcome these limitations by presenting a deep learning model: a convolutional neural network (CNN) with attention-gated recurrent units (AttGRU). The model uses a CNN and one-hot encoding, primarily to identify enhancers and secondarily to classify their strength. To test the performance of the proposed model, enhancer-CNNAttGRU was compared with existing state-of-the-art methods. The proposed model performed best for the stage-one task (enhancer identification) and the stage-two task (strength classification) in a cross-species analysis, achieving best accuracy values of 87.39% and 84.46%, respectively. Overall, the results showed that the proposed model provides results comparable to state-of-the-art models, highlighting its usefulness.
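A minimal sketch of the described approach: one-hot encoded DNA is fed to a CNN followed by a GRU with attention, with a two-class head that could serve either the identification task or the strength task. The sequence length, filter counts, and attention form below are assumptions for illustration, not the paper's enhancer-CNNAttGRU configuration.

```python
import torch
import torch.nn as nn

def one_hot_dna(seq, length=200):
    """One-hot encode a DNA sequence (A, C, G, T) into a (4, length) tensor."""
    lookup = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = torch.zeros(4, length)
    for i, base in enumerate(seq[:length].upper()):
        if base in lookup:
            x[lookup[base], i] = 1.0
    return x

class CNNAttGRU(nn.Module):
    """Illustrative CNN + GRU-with-attention enhancer classifier; filter counts,
    GRU width, and attention form are assumptions, not the paper's architecture."""
    def __init__(self, n_filters=64, hidden=64, num_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, n_filters, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(n_filters, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # attention weights per position
        self.out = nn.Linear(2 * hidden, num_classes)   # enhancer vs. non-enhancer
                                                        # (or strong vs. weak in stage two)

    def forward(self, x):                      # x: (B, 4, L) one-hot DNA
        h = self.conv(x).transpose(1, 2)       # (B, L/2, n_filters)
        h, _ = self.gru(h)                     # (B, L/2, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)
        context = (w * h).sum(dim=1)           # attention-weighted summary
        return self.out(context)

x = one_hot_dna("ACGT" * 50).unsqueeze(0)      # one 200-bp sequence (assumed length)
print(CNNAttGRU()(x).shape)                    # torch.Size([1, 2])
```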


Subject(s)
Chromatin , Enhancer Elements, Genetic , Enhancer Elements, Genetic/genetics , Chromatin/genetics , Computational Biology/methods , DNA/genetics , DNA/chemistry , Neural Networks, Computer