Results 1 - 20 of 103

1.
Sensors (Basel) ; 23(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36772604

ABSTRACT

Injuries to the Anterior Talofibular Ligament (ATFL) are the most common type of ankle injury. Finding new ways to analyze these injuries through novel technologies is therefore critical for assisting medical diagnosis and reducing the subjectivity of this process. The purpose of this study is to compare the ability of specialists to diagnose lateral tibial tuberosity advancement (LTTA) injury with that of computer vision analysis on magnetic resonance imaging (MRI). The experiments were carried out on a database obtained from the Vue PACS-Carestream software, which contained 132 images of ATFL injuries and normal (healthy) ankles. Because there were only a few images, image augmentation techniques were used to increase the number of images in the database. Various feature extraction algorithms (GLCM, LBP, and Hu invariant moments) and classifiers such as the Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Random Forest (RF) were then applied. Based on the results of this analysis, for cases that lack clear morphologies, the method delivers a hit rate of 85.03%, an improvement of 22% over human expert-based analysis.
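
The feature extraction and classification stage described above can be sketched roughly as follows with scikit-image and scikit-learn; the GLCM/LBP parameters, the toy data, and the Random Forest settings are illustrative assumptions rather than the study's exact configuration (scikit-image >= 0.19 is assumed for the graycomatrix naming).

```python
# A rough sketch, not the study's exact pipeline: texture features
# (GLCM statistics, uniform-LBP histogram, Hu moments) fed to a classical
# classifier. Parameters and the synthetic data are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def texture_features(img):
    """Concatenate GLCM statistics, a uniform-LBP histogram, and Hu moments."""
    img = img.astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).ravel()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hu = moments_hu(moments_normalized(moments_central(img.astype(float))))
    return np.concatenate([*glcm_feats, lbp_hist, hu])

# Toy stand-ins for MRI slices (class 0 = healthy, class 1 = injured).
rng = np.random.default_rng(0)
X = np.array([texture_features(rng.integers(0, 256, (64, 64))) for _ in range(40)])
y = np.array([0, 1] * 20)
print(cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5).mean())
```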


Subject(s)
Ankle Injuries, Lateral Ligament, Ankle, Humans, Ankle/diagnostic imaging, Ankle Joint, Lateral Ligament, Ankle/diagnostic imaging, Lateral Ligament, Ankle/injuries, Magnetic Resonance Imaging/methods, Ankle Injuries/diagnostic imaging, Computers
2.
Sensors (Basel) ; 22(10)2022 May 20.
Article in English | MEDLINE | ID: mdl-35632297

ABSTRACT

One of the most important strategies for preventative factory maintenance is anomaly detection without the need for dedicated sensors for each industrial unit. The implementation of sound-data-based anomaly detection is an unduly complicated process, since factory-collected sound data are frequently corrupted and affected by ordinary production noises. The use of acoustic methods to detect irregularities in systems has a long history. However, few references to the implementation of acoustic approaches can be found for the failure detection of industrial machines. This paper presents a systematic review of acoustic approaches to mechanical failure detection in terms of recent implementations and structural extensions. Fifty-two articles were selected from the IEEE Xplore, ScienceDirect, and SpringerLink databases following the PRISMA methodology for systematic literature reviews. The study identifies research gaps while considering the potential for responding to the challenges of mechanical failure detection in industrial machines. The results of this study reveal that the use of acoustic emission is still dominant in the research community. In addition, based on the 52 selected articles, research discussing failure detection in noisy conditions is still very limited, and this remains a challenge for future work.


Subject(s)
Acoustics, Noise
3.
Sensors (Basel) ; 22(9)2022 May 06.
Article in English | MEDLINE | ID: mdl-35591221

ABSTRACT

The identification of human activities from videos is important for many applications. For such a task, three-dimensional (3D) depth images or image sequences (videos) can be used, which represent the positioning information of the objects in a 3D scene obtained from depth sensors. This paper presents a framework to create foreground-background masks from depth images for human body segmentation. The framework can be used to speed up the manual depth image annotation process when no semantics are known beforehand: segmentation is applied by a performant algorithm while the user only adjusts its parameters, corrects the automatic segmentation results, or gives hints by drawing a boundary of the desired object. The approach has been tested using two different datasets with a human in a real-world closed environment. The solution provided promising results, reducing both the processing time and the human input time required for manual segmentation.
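
As a rough illustration of how a depth frame can be turned into a foreground/background mask, the sketch below thresholds a depth range and keeps the largest connected component; the threshold values and the fake frame are assumptions, not the framework's actual algorithm.

```python
# A minimal sketch (not the framework's actual algorithm): range-threshold a
# depth frame, then keep the largest connected blob as the person candidate.
import numpy as np
from scipy import ndimage

def depth_foreground_mask(depth, near=500, far=3000):
    """Assumed depth units: millimetres; near/far limits are illustrative."""
    in_range = (depth > near) & (depth < far)
    labels, n = ndimage.label(in_range)
    if n == 0:
        return np.zeros_like(in_range)
    sizes = ndimage.sum(in_range, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)     # largest component only

depth = np.random.default_rng(1).integers(0, 4000, (240, 320))  # fake depth frame
mask = depth_foreground_mask(depth)
print(mask.shape, mask.sum())
```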


Subject(s)
Algorithms, Human Body, Computers, Humans, Image Processing, Computer-Assisted/methods, Semantics
4.
Sensors (Basel) ; 22(3)2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35161486

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disease that affects brain cells, and mild cognitive impairment (MCI) has been defined as the early phase that marks the onset of AD. Early detection of MCI can be used to save patient brain cells from further damage and to direct additional medical treatment to prevent its progression. Lately, the use of deep learning for the early identification of AD has generated a lot of interest. However, one of the limitations of such algorithms is their inability to identify changes in the functional connectivity of the functional brain network of patients with MCI. In this paper, we attempt to address this issue with randomized concatenated deep features obtained from two pre-trained models, which simultaneously learn deep features of brain functional networks from magnetic resonance imaging (MRI) images. We experimented with ResNet18 and DenseNet201 to perform the task of AD multiclass classification. A gradient class activation map was used to mark the discriminating region of the image for the proposed model's prediction. Accuracy, precision, and recall were used to assess the performance of the proposed system. The experimental analysis showed that the proposed model achieved 98.86% accuracy, 98.94% precision, and 98.89% recall in multiclass classification. The findings indicate that advanced deep learning with MRI images can be used to classify and predict neurodegenerative brain diseases such as AD.
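
A minimal PyTorch sketch of the core idea, concatenating features from pre-trained ResNet18 and DenseNet201 backbones before a small classification head, is shown below; the head size, the three-class output, and the torchvision >= 0.13 API are assumptions, not the authors' exact model.

```python
# A hedged PyTorch sketch of concatenated deep features from two backbones.
# Assumes torchvision >= 0.13; head size and the 3-class output are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ConcatFeatureClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        resnet = models.resnet18(weights=None)      # pass pre-trained weights in practice
        densenet = models.densenet201(weights=None)
        self.resnet_body = nn.Sequential(*list(resnet.children())[:-1])   # -> 512-d
        self.densenet_body = nn.Sequential(densenet.features,
                                           nn.ReLU(inplace=True),
                                           nn.AdaptiveAvgPool2d(1))       # -> 1920-d
        self.head = nn.Linear(512 + 1920, num_classes)

    def forward(self, x):
        f1 = torch.flatten(self.resnet_body(x), 1)
        f2 = torch.flatten(self.densenet_body(x), 1)
        return self.head(torch.cat([f1, f2], dim=1))   # concatenated deep features

logits = ConcatFeatureClassifier()(torch.randn(2, 3, 224, 224))  # two dummy slices
print(logits.shape)  # torch.Size([2, 3])
```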


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Neurodegenerative Diseases, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging, Humans, Magnetic Resonance Imaging, Neuroimaging
5.
Sensors (Basel) ; 22(17)2022 Aug 24.
Article in English | MEDLINE | ID: mdl-36080813

ABSTRACT

Binary object segmentation is a sub-area of semantic segmentation that can be used for a variety of applications. Semantic segmentation models can be applied to solve binary segmentation problems by introducing only two classes, but such models are more complex than the task actually requires. This leads to very long training times, since there are usually tens of millions of parameters to learn in this category of convolutional neural networks (CNNs). This article introduces a novel abridged VGG-16 and SegNet-inspired reflected architecture adapted for binary segmentation tasks. The architecture has 27 times fewer parameters than SegNet but yields 86% segmentation cross-intersection accuracy and 93% binary accuracy. The proposed architecture is evaluated on a large dataset of depth images collected using the Kinect device, achieving an accuracy of 99.25% in human body shape segmentation and 87% in gender recognition tasks.


Subject(s)
Image Processing, Computer-Assisted, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted/methods, Semantics
6.
Sensors (Basel) ; 22(9)2022 May 01.
Article in English | MEDLINE | ID: mdl-35591146

ABSTRACT

Pedestrian occurrences in images and videos must be accurately recognized in a number of applications that may improve the quality of human life. Radar can be used to identify pedestrians. When distinct portions of an object move in front of a radar, micro-Doppler signals are produced that may be utilized to identify the object. Using a deep-learning network and time-frequency analysis, we offer a method for classifying pedestrians and animals based on their micro-Doppler radar signature features. Based on these signatures, we employed a convolutional neural network (CNN) to recognize pedestrians and animals. The proposed approach was evaluated on the MAFAT Radar Challenge dataset. Encouraging results were obtained, with an AUC (Area Under Curve) value of 0.95 on the public test set and over 0.85 on the final (private) test set. The proposed DNN architecture, in contrast to more common shallow CNN architectures, is one of the first attempts to use such an approach in the domain of radar data. The use of the synthetic radar data, which greatly improved the final result, is the other novel aspect of our work.
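
A minimal sketch of the time-frequency preprocessing step, converting a 1-D radar return into a spectrogram image that a CNN could consume, is shown below; the synthetic signal model and the STFT parameters are illustrative assumptions.

```python
# A minimal sketch of micro-Doppler time-frequency preprocessing: a synthetic
# radar return turned into a log-spectrogram image for a CNN. Signal model and
# STFT parameters are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Carrier with a sinusoidally modulated Doppler shift (e.g., swinging limbs).
signal = np.cos(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 2 * t)))

f, frames, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=96)
log_spec = 10 * np.log10(Sxx + 1e-12)         # dB scale, fed to the CNN as an image
print(log_spec.shape)                         # (frequency bins, time frames)
```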


Subject(s)
Deep Learning, Pedestrians, Animals, Humans, Neural Networks, Computer, Radar, Ultrasonography, Doppler
7.
Sensors (Basel) ; 22(6)2022 Mar 13.
Article in English | MEDLINE | ID: mdl-35336395

ABSTRACT

Current research endeavors in the application of artificial intelligence (AI) methods to the diagnosis of COVID-19 have proven indispensable, with very promising results. Despite these promising results, there are still limitations in the real-time detection of COVID-19 using reverse transcription polymerase chain reaction (RT-PCR) test data, such as limited datasets, imbalanced classes, high misclassification rates of models, and the need for specialized research in identifying the best features and thus improving prediction rates. This study aims to investigate and apply an ensemble learning approach to develop prediction models for the effective detection of COVID-19 using routine laboratory blood test results. Hence, an ensemble machine-learning-based COVID-19 detection system is presented, aiming to aid clinicians in diagnosing this virus effectively. The experiment was conducted using custom convolutional neural network (CNN) models as a first-stage classifier and 15 supervised machine learning algorithms as second-stage classifiers: K-Nearest Neighbors, Support Vector Machine (Linear and RBF), Naive Bayes, Decision Tree, Random Forest, Multi-Layer Perceptron, AdaBoost, ExtraTrees, Logistic Regression, Linear and Quadratic Discriminant Analysis (LDA/QDA), and the Passive, Ridge, and Stochastic Gradient Descent classifiers. Our findings show that an ensemble learning model based on the DNN and ExtraTrees achieved a mean accuracy of 99.28% and an area under the curve (AUC) of 99.4%, while AdaBoost gave a mean accuracy of 99.28% and an AUC of 98.8% on the San Raffaele Hospital dataset. A comparison of the proposed COVID-19 detection approach with other state-of-the-art approaches on the same dataset shows that the proposed method outperforms several other COVID-19 diagnostic methods.
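
The second-stage classification idea can be sketched as follows with scikit-learn; the synthetic features standing in for first-stage CNN outputs, and the ExtraTrees/AdaBoost settings, are assumptions rather than the study's configuration.

```python
# A hedged sketch of the second-stage classification only: ExtraTrees and
# AdaBoost applied to feature vectors that, in the paper, come from the
# first-stage CNN. The synthetic data below stands in for those features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 16))                        # stand-in feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (ExtraTreesClassifier(n_estimators=200, random_state=0),
            AdaBoostClassifier(n_estimators=200, random_state=0)):
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{type(clf).__name__}: AUC={auc:.3f}")
```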


Subject(s)
Artificial Intelligence, COVID-19, Bayes Theorem, COVID-19/diagnosis, Hematologic Tests, Humans, Machine Learning
8.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36298227

ABSTRACT

The development of smart applications has benefited greatly from the expansion of wireless technologies. A range of tasks are performed, and end devices are made capable of communicating with one another, with the support of artificial intelligence technology. The Internet of Things (IoT) increases the efficiency of communication networks thanks to its low costs and simple management. However, it has been demonstrated that many systems still need an intelligent strategy for green computing. Establishing reliable connectivity in Green-IoT (G-IoT) networks is another key research challenge. With the integration of edge computing, this study provides a Sustainable Data-driven Secured optimization model (SDS-GIoT) that uses dynamic programming to provide enhanced learning capabilities. First, the proposed approach examines multi-variable functions and delivers graph-based link predictions to locate the optimal nodes for edge networks. Moreover, it identifies a multistage sub-path to continue data transfer if a route is unavailable due to certain communication circumstances. Second, while applying security, edge computing provides offloading services that lower the amount of processing power needed for low-constraint nodes. Finally, the SDS-GIoT model is verified with various experiments, and the performance results demonstrate its significance for a sustainable environment compared with existing solutions.


Subject(s)
Internet of Things, Artificial Intelligence, Wireless Technology
9.
Sensors (Basel) ; 22(3)2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35161843

ABSTRACT

Tracking moving objects is one of the most promising yet most challenging research areas in computer vision, pattern recognition, and image processing. The challenges associated with object tracking range from problems pertaining to camera axis orientations to object occlusion. In addition, variations in remote scene environments add to the difficulties of object tracking. All of these challenges make the procedure computationally complex and time-consuming. In this paper, a stochastic gradient-based optimization technique is used in conjunction with particle filters for object tracking. First, the object to be tracked is detected using the Maximum Average Correlation Height (MACH) filter. The object of interest is detected based on the presence of a correlation peak and an average similarity measure. The results of object detection are fed to the tracking routine. The gradient descent technique is employed to optimize the particle filters used for tracking; it allows the particles to converge quickly, so less time is required to track the object. The results of the proposed algorithm are compared with similar state-of-the-art tracking algorithms on five datasets that include both artificial moving objects and humans, showing that the gradient-based tracking algorithm provides better results in terms of both accuracy and speed.
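
A minimal bootstrap particle filter for 2-D position tracking is sketched below to illustrate the particle-filtering component; the MACH detection step and the paper's gradient-descent refinement of particles are not reproduced, and the motion model, noise levels, and step count are assumptions.

```python
# A minimal bootstrap particle filter for 2-D position tracking. The MACH
# detection step and the paper's gradient-descent particle refinement are
# omitted; motion model, noise levels, and step count are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 500
true_pos = np.array([0.0, 0.0])
particles = rng.normal(true_pos, 1.0, size=(n_particles, 2))

for _ in range(30):
    true_pos += np.array([1.0, 0.5])                       # constant-velocity target
    measurement = true_pos + rng.normal(0, 0.5, size=2)    # noisy observation
    # Propagate particles with the motion model plus process noise.
    particles += np.array([1.0, 0.5]) + rng.normal(0, 0.3, size=particles.shape)
    # Weight by Gaussian measurement likelihood, then resample.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-d2 / (2 * 0.5 ** 2)) + 1e-300
    weights /= weights.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=weights)]

print("estimate:", particles.mean(axis=0), "truth:", true_pos)
```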


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Humans, Perception
10.
Sensors (Basel) ; 22(3)2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161552

ABSTRACT

After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.


Subject(s)
Breast Neoplasms, Deep Learning, Breast, Breast Neoplasms/diagnostic imaging, Female, Humans, Probability, Ultrasonography, Mammary
11.
Medicina (Kaunas) ; 58(8)2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36013557

ABSTRACT

Background and Objectives: Clinical diagnosis has become very significant in today's health system. Brain cancer, one of the most serious diseases and a leading cause of mortality globally, is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layer deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) for best-feature selection. In the final step, the M-SVM is used for brain tumor classification, which identifies meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification has outperformed prior methods. These findings demonstrate that the proposed approach achieves higher performance in both visual and quantitative evaluation, with improved accuracy.
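
The first step (linear contrast stretching) can be sketched as follows; the percentile limits and the dummy MRI slice are assumptions, not the authors' exact settings.

```python
# A sketch of the first step only (linear contrast stretching); the percentile
# limits and the dummy slice are assumptions, not the authors' exact settings.
import numpy as np

def linear_contrast_stretch(img, low_pct=1, high_pct=99):
    """Map the [low_pct, high_pct] intensity percentiles onto the full 0-255 range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img.astype(float) - lo) / max(hi - lo, 1e-6), 0, 1)
    return (stretched * 255).astype(np.uint8)

mri_slice = np.random.default_rng(3).integers(40, 180, (256, 256))
out = linear_contrast_stretch(mri_slice)
print(out.min(), out.max())   # stretched toward 0 and 255
```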


Subject(s)
Brain Neoplasms, Support Vector Machine, Artificial Intelligence, Brain Neoplasms/diagnostic imaging, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neural Networks, Computer
12.
Expert Syst ; 39(3): e12759, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34511689

ABSTRACT

COVID-19 is the disease caused by a new strain of coronavirus called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Recently, COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. This paper therefore provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis, covering studies published from December 2019 until April 2021. In total, it includes more than 200 studies that have been carefully selected from several publishers, such as IEEE, Springer, and Elsevier. We classify the research tracks into two categories, DL and ML, and present the COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and a proper discussion is provided. In conclusion, for COVID-19 diagnosis and outbreak prediction, SVM is the most widely used machine learning mechanism, and CNN is the most widely used deep learning mechanism. Accuracy, sensitivity, and specificity are the most widely used measurements in previous studies. Finally, this review will guide the research community on the upcoming development of ML and DL for COVID-19 and inspire future work.

13.
Sensors (Basel) ; 21(11)2021 Jun 03.
Article in English | MEDLINE | ID: mdl-34205120

ABSTRACT

Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is the appearance of hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages. A modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance than state-of-the-art methods in terms of both visual quality and quantitative analysis, with high accuracy.


Subject(s)
Deep Learning, Diabetes Mellitus, Algorithms, Fundus Oculi, Hemorrhage, Humans, Neural Networks, Computer, Retina
14.
Sensors (Basel) ; 21(12)2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201039

ABSTRACT

The majority of current research focuses on reconstructing a single static object from a given point cloud. However, the existing approaches are not applicable to real-world applications such as dynamic and morphing scene reconstruction. To solve this, we propose a novel two-tiered deep neural network architecture, which is capable of reconstructing self-obstructed human-like morphing shapes from a depth frame in conjunction with the camera's intrinsic parameters. The tests were performed on a custom dataset generated using a combination of the AMASS and MoVi datasets. The proposed network achieved a Jaccard index of 0.7907 for the first tier, which is used to extract the region of interest from the point cloud. The second tier of the network achieved an Earth Mover's distance of 0.0256 and a Chamfer distance of 0.276, indicating good experimental results. Further, subjective inspection of the reconstruction results shows strong predictive capabilities of the network, with the solution being able to reconstruct limb positions from very few object details.
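
For reference, a simple symmetric Chamfer distance between point clouds, one of the evaluation metrics mentioned above, can be computed as sketched below; the exact averaging convention is an assumption, since several variants exist in the literature.

```python
# A reference implementation of a symmetric Chamfer distance between point
# clouds; the averaging convention is an assumption, as several variants exist.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Mean nearest-neighbour distance from a to b plus from b to a."""
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return d_ab.mean() + d_ba.mean()

rng = np.random.default_rng(0)
cloud_pred = rng.normal(size=(1024, 3))
cloud_true = cloud_pred + rng.normal(scale=0.01, size=(1024, 3))
print(chamfer_distance(cloud_pred, cloud_true))   # small value for near-identical clouds
```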


Subject(s)
Imaging, Three-Dimensional, Neural Networks, Computer, Extremities, Humans
15.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073427

ABSTRACT

With the majority of research on 3D object reconstruction focusing on single static synthetic object reconstruction, there is a need for a method capable of reconstructing morphing objects in dynamic scenes without external influence. However, such research requires time-consuming creation of real-world object ground truths. To solve this, we propose a novel three-staged deep adversarial neural network architecture capable of denoising and refining real-world depth sensor input for full human body posture reconstruction. The proposed network achieved Earth Mover's and Chamfer distances of 0.059 and 0.079 on synthetic datasets, respectively, which indicates results on par with other approaches, in addition to the ability to reconstruct from maskless real-world depth frames. Additional visual inspection of the reconstructed point clouds has shown that the suggested approach manages to deal with the majority of real-world depth sensor noise, with the exception of large deformities in the depth field.


Subject(s)
Algorithms, Neural Networks, Computer, Humans, Recreation
16.
Sensors (Basel) ; 21(21)2021 Nov 02.
Article in English | MEDLINE | ID: mdl-34770595

ABSTRACT

In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), which can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach (parallel positive correlation). Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis is conducted, showing the improved performance of the proposed scheme.
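
The contrast-enhancement step (top-hat plus Wiener filtering) can be sketched roughly as follows; the structuring-element size, the filter window, and the dummy slice are assumptions rather than the authors' exact settings.

```python
# A rough sketch of the enhancement step (top-hat plus Wiener filtering);
# the structuring-element size and filter window are assumptions.
import numpy as np
from scipy.signal import wiener
from skimage.morphology import white_tophat, disk

def enhance(ct_slice):
    tophat = white_tophat(ct_slice, disk(5))            # emphasise small bright detail
    boosted = np.clip(ct_slice.astype(float) + tophat, 0, 255)
    return wiener(boosted, mysize=5)                     # Wiener denoising

img = np.random.default_rng(7).integers(0, 256, (128, 128)).astype(np.uint8)
print(enhance(img).shape)
```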


Subject(s)
COVID-19, Deep Learning, Animals, Artificial Intelligence, Entropy, Fireflies, Humans, SARS-CoV-2, Tomography, X-Ray Computed
17.
Sensors (Basel) ; 21(11)2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34205885

ABSTRACT

Plant diseases can cause a considerable reduction in the quality and quantity of agricultural products. Guava, well known as the apple of the tropics, is a significant fruit cultivated in tropical regions. It is attacked by 177 pathogens, including 167 fungal pathogens and others such as bacteria, algae, and nematodes. In addition, postharvest diseases may cause crucial production loss. Due to minor variations between the symptoms of various guava diseases, an expert opinion is required for disease analysis, and improper diagnosis may cause economic losses to farmers through the improper use of pesticides. Automatic detection of diseases in plants, as soon as they emerge on the plants' leaves and fruit, is required to maintain high crop yields. In this paper, an artificial intelligence (AI) driven framework is presented to detect and classify the most common guava plant diseases. The proposed framework employs ΔE color difference image segmentation to segregate the areas infected by the disease. Furthermore, color (RGB, HSV) histogram and textural (LBP) features are extracted to form rich, informative feature vectors. The combination of color and textural features is used for identification and attains outcomes similar to those of the individual channels, while disease recognition is performed by employing advanced machine-learning classifiers (Fine KNN, Complex Tree, Boosted Tree, Bagged Tree, Cubic SVM). The proposed framework is evaluated on a high-resolution (18 MP) image dataset of guava leaves and fruit. The best recognition results were obtained by the Bagged Tree classifier on a set of RGB, HSV, and LBP features (99% accuracy in recognizing four guava fruit diseases (Canker, Mummification, Dot, and Rust) against healthy fruit). The proposed framework may help farmers to avoid possible production loss by taking early precautions.
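
A rough sketch of ΔE-based segmentation is shown below: pixels whose CIEDE2000 difference from a reference "healthy" colour exceeds a threshold are flagged; the reference colour, the threshold, and the dummy image are assumptions, not the framework's actual values.

```python
# A rough sketch of ΔE-based segmentation: flag pixels whose CIEDE2000
# difference from an assumed "healthy" reference colour exceeds a threshold.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def diseased_mask(rgb_image, healthy_rgb=(0.30, 0.55, 0.25), threshold=15.0):
    lab = rgb2lab(rgb_image)                                        # image in CIELAB
    ref = rgb2lab(np.asarray(healthy_rgb, dtype=float).reshape(1, 1, 3))
    delta_e = deltaE_ciede2000(lab, np.broadcast_to(ref, lab.shape))
    return delta_e > threshold

leaf = np.random.default_rng(5).random((64, 64, 3))   # dummy leaf photo, RGB in [0, 1]
print(diseased_mask(leaf).mean())                     # fraction of pixels flagged
```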


Subject(s)
Psidium, Artificial Intelligence, Fruit, Machine Learning, Plant Diseases
18.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200216

ABSTRACT

Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples lead DL models to misclassify instances that humans would consider benign. Practical applications in real physical scenarios with adversarial threats demonstrate these characteristics. Thus, adversarial attacks and defenses, including for machine learning and its reliability, have drawn growing interest and, in recent years, have been a hot topic of research. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, adversarial training, and a feature fusion strategy, which together preserve the classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which is considered a state-of-the-art endeavor. Results obtained on retinal fundus images, which are prone to adversarial attacks, are 99% accurate and show that the proposed defensive model is robust.


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Algorithms, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Humans, Neural Networks, Computer, Reproducibility of Results
19.
Entropy (Basel) ; 23(3)2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33804035

ABSTRACT

Recently, there has been a huge rise in malware growth, which creates a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers discover new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and impose high overhead in terms of memory and time. Typical machine learning approaches that train a classifier based on handcrafted features are also not sufficiently potent against these evasive techniques and require more effort due to feature engineering. Recent malware detectors exhibit performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method, where malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of the DenseNet model to achieve significant performance improvements in classifying malware by handling the imbalanced data issue. Comprehensive experiments performed on four benchmark malware datasets show that the proposed approach can detect new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
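
Two of the ideas above, rendering a malware binary as a grayscale image and re-weighting the loss by class frequency, can be sketched as follows; the image width and the "effective number of samples" style weighting with beta = 0.999 are assumptions, not necessarily the authors' exact formulation.

```python
# A sketch of two ideas above: (1) rendering a malware binary as a 2-D grayscale
# image and (2) class-balanced re-weighting of the loss (effective-number style);
# the image width and beta value are assumptions.
import numpy as np
import torch
import torch.nn as nn

def bytes_to_image(raw: bytes, width=256):
    """Reshape a byte sequence into a width-column grayscale image."""
    arr = np.frombuffer(raw, dtype=np.uint8)
    rows = len(arr) // width
    return arr[: rows * width].reshape(rows, width)

def class_balanced_weights(samples_per_class, beta=0.999):
    eff_num = 1.0 - np.power(beta, np.asarray(samples_per_class, dtype=float))
    weights = (1.0 - beta) / eff_num
    weights = weights / weights.sum() * len(samples_per_class)
    return torch.tensor(weights, dtype=torch.float32)

weights = class_balanced_weights([8000, 1200, 300, 50])   # imbalanced family counts
criterion = nn.CrossEntropyLoss(weight=weights)           # reweighted classification loss
print(bytes_to_image(bytes(range(256)) * 64).shape, weights)
```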

20.
Entropy (Basel) ; 23(8)2021 Aug 17.
Article in English | MEDLINE | ID: mdl-34441205

ABSTRACT

Human activity recognition (HAR) plays a vital role in different real-world applications such as in tracking elderly activities for elderly care services, in assisted living environments, smart home interactions, healthcare monitoring applications, electronic games, and various human-computer interaction (HCI) applications, and is an essential part of the Internet of Healthcare Things (IoHT) services. However, the high dimensionality of the collected data from these applications has the largest influence on the quality of the HAR model. Therefore, in this paper, we propose an efficient HAR system using a lightweight feature selection (FS) method to enhance the HAR classification process. The developed FS method, called GBOGWO, aims to improve the performance of the Gradient-based optimizer (GBO) algorithm by using the operators of the grey wolf optimizer (GWO). First, GBOGWO is used to select the appropriate features; then, the support vector machine (SVM) is used to classify the activities. To assess the performance of GBOGWO, extensive experiments using well-known UCI-HAR and WISDM datasets were conducted. Overall outcomes show that GBOGWO improved the classification accuracy with an average accuracy of 98%.
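
Wrapper-style feature selection of the kind GBOGWO performs can be sketched as follows, with a binary feature mask scored by an SVM's cross-validated accuracy; random search stands in for the GBO/GWO metaheuristic, and the digits dataset stands in for the HAR feature vectors.

```python
# A sketch of wrapper-style feature selection: a binary mask picks a feature
# subset and an SVM's cross-validated accuracy is the fitness. Random search
# stands in for the GBO/GWO metaheuristic; digits data stands in for HAR features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -1.0
for _ in range(10):                       # each iteration proposes a candidate subset
    mask = rng.random(X.shape[1]) < 0.5
    score = fitness(mask)
    if score > best_fit:
        best_mask, best_fit = mask, score

print(f"selected {best_mask.sum()}/{X.shape[1]} features, CV accuracy={best_fit:.3f}")
```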
