Results 1 - 3 of 3
1.
Heliyon ; 9(4): e15378, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37101631

ABSTRACT

With the whirlwind evolution of technology, the quantity of stored data within datasets is rapidly expanding. As a result, extracting crucial and relevant information from these datasets is a gruelling task. Feature selection is a critical preprocessing task for machine learning that reduces the excess data in a set. This research presents a novel quasi-reflection learning arithmetic optimization algorithm - firefly search, an enhanced version of the original arithmetic optimization algorithm. The quasi-reflection learning mechanism was implemented to enhance population diversity, while the firefly algorithm metaheuristic was used to improve the exploitation abilities of the original arithmetic optimization algorithm. The aim of this wrapper-based method is to tackle a specific classification problem by selecting an optimal feature subset. The proposed algorithm is tested and compared with various well-known methods on ten unconstrained benchmark functions, then on twenty-one standard datasets gathered from the University of California, Irvine repository and Arizona State University. Additionally, the proposed approach is applied to a coronavirus disease dataset. The experimental results verify the improvements of the presented method and their statistical significance.
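
As an illustration of the quasi-reflection learning step mentioned in this abstract, here is a minimal sketch (not the authors' code), assuming the commonly used definition in which each coordinate is resampled uniformly between the domain midpoint and its current value:

```python
import numpy as np

def quasi_reflect(population, lb, ub):
    """Quasi-reflection learning: for each coordinate x, draw a point
    uniformly between the domain midpoint c = (lb + ub) / 2 and x.
    Evaluating both x and its quasi-reflection and keeping the better
    one is a common way to boost population diversity."""
    c = (lb + ub) / 2.0
    low = np.minimum(c, population)
    high = np.maximum(c, population)
    return np.random.uniform(low, high)

# Hypothetical usage: 30 candidate solutions in a 10-dimensional search space.
lb, ub = -10.0, 10.0
pop = np.random.uniform(lb, ub, size=(30, 10))
qr_pop = quasi_reflect(pop, lb, ub)
```

The bounds, population size, and dimensionality above are placeholders; the paper's hybrid additionally applies firefly-style moves, which are not shown here.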

2.
PLoS One ; 17(10): e0275727, 2022.
Article in English | MEDLINE | ID: mdl-36215218

ABSTRACT

The fast-growing quantity of information hinders the process of machine learning, making it computationally costly and its results substandard. Feature selection is a pre-processing method for obtaining the optimal subset of features in a data set. Optimization algorithms struggle to decrease the dimensionality while retaining accuracy in high-dimensional data sets. This article proposes a novel chaotic opposition fruit fly optimization algorithm, an improved variant of the original fruit fly algorithm, advanced and adapted for binary optimization problems. The proposed algorithm is tested on ten unconstrained benchmark functions and evaluated on twenty-one standard datasets taken from the University of California, Irvine repository and Arizona State University. The presented algorithm is further assessed on a coronavirus disease dataset as well. The proposed method is then compared with several well-known feature selection algorithms on the same datasets. The results show that the presented algorithm predominantly outperforms the other algorithms in selecting the most relevant features, decreasing the number of utilized features while improving classification accuracy.


Subject(s)
COVID-19, Algorithms, Animals, Arizona, Drosophila, Machine Learning
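
For illustration of the binary, wrapper-based evaluation that metaheuristics like the one in this record typically rely on, here is a minimal sketch; the sigmoid transfer function, the KNN classifier, and the weight alpha are assumptions for this example, not details taken from the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def binarize(position):
    """Map a continuous search-agent position to a binary feature mask
    via a sigmoid transfer function (a common choice, assumed here)."""
    prob = 1.0 / (1.0 + np.exp(-position))
    return (np.random.rand(position.size) < prob).astype(int)

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness: weighted KNN classification error plus a penalty
    on the fraction of selected features (alpha is an assumed weight)."""
    if mask.sum() == 0:
        return 1.0  # selecting no features is treated as the worst case
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask == 1], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.mean()
```

The optimizer minimizes this fitness over candidate masks; the chaotic and opposition-based components of the proposed algorithm act on the continuous positions before binarization.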
3.
Sensors (Basel) ; 22(5)2022 Feb 22.
Article in English | MEDLINE | ID: mdl-35270856

ABSTRACT

We live in a period when smart devices gather large amounts of data from a variety of sensors, and decisions are often made on that basis in a more or less autonomous manner. Still, many of the inputs do not prove to be essential in the decision-making process; hence, it is of utmost importance to find means of eliminating the noise and concentrating on the most influential attributes. In this sense, we put forward a method based on the swarm intelligence paradigm for extracting the most important features from several datasets. The subject of this paper is a novel implementation of an algorithm from the swarm intelligence branch of the machine learning domain for improving feature selection. The combination of machine learning with metaheuristic approaches has recently created a new branch of artificial intelligence called learnheuristics. This approach benefits both from the capability of feature selection to find the solutions that have the most impact on accuracy and performance, and from the well-known ability of swarm intelligence algorithms to efficiently comb through a large search space of solutions. The latter is used as a wrapper method in feature selection, and the improvements are significant. In this paper, a modified version of the salp swarm algorithm for feature selection is proposed. The solution is verified on 21 datasets with a K-nearest neighbors classification model. Furthermore, the performance of the algorithm is compared with the best algorithms under the same test setup, with the proposed solution yielding fewer selected features and higher classification accuracy. The proposed method thus tackles feature selection and demonstrates its success on many benchmark datasets.


Subject(s)
Algorithms, Artificial Intelligence, Machine Learning
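
To make the mechanics concrete, below is a minimal sketch of one iteration of the baseline salp swarm update; the paper proposes a modified variant, so this shows only the standard leader/follower rules and is an assumption-based illustration:

```python
import numpy as np

def salp_swarm_step(positions, food, lb, ub, t, max_iter):
    """One iteration of the standard salp swarm update.
    positions: (n_salps, dim) array; food: best solution found so far."""
    n, dim = positions.shape
    # c1 shrinks over iterations, shifting from exploration to exploitation.
    c1 = 2.0 * np.exp(-(4.0 * t / max_iter) ** 2)
    new_pos = positions.copy()
    # Leader half of the chain moves around the food source.
    for i in range(n // 2):
        c2 = np.random.rand(dim)
        c3 = np.random.rand(dim)
        step = c1 * ((ub - lb) * c2 + lb)
        new_pos[i] = np.where(c3 < 0.5, food + step, food - step)
    # Follower half moves toward the salp directly in front of it.
    for i in range(n // 2, n):
        new_pos[i] = (new_pos[i] + new_pos[i - 1]) / 2.0
    return np.clip(new_pos, lb, ub)
```

In a feature-selection wrapper, the updated positions would be binarized and scored with a KNN-based fitness of the kind sketched for the previous record.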