1.
Heliyon ; 10(11): e31629, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38845929

ABSTRACT

This paper introduces a new metaheuristic technique known as the Greater Cane Rat Algorithm (GCRA) for addressing optimization problems. The optimization process of GCRA is inspired by the intelligent foraging behaviors of greater cane rats during and outside the mating season. Being highly nocturnal, they are intelligent enough to leave trails as they forage through reeds and grass; these trails subsequently lead to food and water sources and shelter. The exploration phase is achieved when they leave the different shelters scattered around their territory to forage and leave trails. It is presumed that the alpha male maintains knowledge of these routes, and, as a result, the other rats adjust their positions according to this information. The males are also aware of the breeding season and separate themselves from the group. The assumption is that once the group separates during this season, foraging activities concentrate within areas of abundant food sources, which aids exploitation. Hence, the smart foraging paths and the behaviors during the mating season are mathematically modeled to realize the design of the GCR algorithm and carry out the optimization tasks. The performance of GCRA is tested using twenty-two classical benchmark functions, ten CEC 2020 complex functions, and the CEC 2011 real-world continuous benchmark problems. To further test the performance of the proposed algorithm, six classic problems in the engineering domain were used. Furthermore, a thorough analysis of computational and convergence results is presented to shed light on the efficacy and stability of GCRA. The statistical significance of the results is assessed against ten state-of-the-art algorithms using the Friedman test and the Wilcoxon signed-rank test. These findings show that GCRA produced optimal or near-optimal solutions and evaded the trap of local minima, distinguishing it from the rival optimization algorithms employed to tackle similar problems. The GCRA optimizer source code is publicly available at: https://www.mathworks.com/matlabcentral/fileexchange/165241-greater-cane-rat-algorithm-gcra.
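The abstract describes the two phases qualitatively but does not reproduce the update equations; the following minimal Python sketch illustrates one way the described structure could look. The step constant `rho`, the half-and-half phase split, and both move equations are illustrative assumptions, not the published operators.

```python
import numpy as np

def gcra_sketch(obj, dim, bounds, n_rats=30, max_iter=500, rho=0.5):
    """Illustrative two-phase loop loosely following the GCRA description:
    exploration follows trails toward the alpha (fittest) rat; exploitation
    concentrates search around the best-known food source."""
    lo, hi = bounds
    rats = np.random.uniform(lo, hi, (n_rats, dim))
    alpha = rats[np.apply_along_axis(obj, 1, rats).argmin()].copy()
    for t in range(max_iter):
        r = np.random.rand(n_rats, dim)
        if t < max_iter // 2:
            # exploration: rats move along trails toward the alpha's routes
            rats = rats + rho * r * (alpha - rats)
        else:
            # exploitation: mating-season foraging concentrated near food
            rats = alpha + rho * (2 * r - 1) * np.abs(alpha - rats)
        rats = np.clip(rats, lo, hi)
        fitness = np.apply_along_axis(obj, 1, rats)
        if fitness.min() < obj(alpha):
            alpha = rats[fitness.argmin()].copy()
    return alpha, obj(alpha)
```

A caller would supply any objective, e.g. `gcra_sketch(lambda x: float((x**2).sum()), dim=10, bounds=(-100.0, 100.0))`.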

2.
PLoS One ; 18(3): e0282812, 2023.
Article in English | MEDLINE | ID: mdl-36930670

ABSTRACT

The feature selection problem represents a field of study that requires approximate algorithms to identify discriminative and optimally combined features. The evaluation and suitability of these selected features are often analyzed using classifiers. These features are embedded in data increasingly being generated from different sources such as social media, surveillance systems, network applications, and medical records. The high dimensionality of these datasets often impairs the quality of the optimal feature combinations selected. Binary optimization methods have been proposed in the literature to address this challenge. However, the underlying deficiency of a single binary optimizer transfers to the quality of the features selected. Though hybrid methods have been proposed, most still suffer from the design limitations inherited from the individual combined methods. To address this, we propose a novel hybrid binary optimization method capable of effectively selecting features from increasingly high-dimensional datasets. The approach uses a sub-population selective mechanism that dynamically assigns individuals to a two-level optimization process. The level-1 method first mutates items in the population and then reassigns them to a level-2 optimizer. The selective mechanism determines which sub-population is assigned to the level-2 optimizer based on the exploration and exploitation phases of the level-1 optimizer. In addition, we designed nested transfer (NT) functions and investigated their influence on the level-1 optimizer. The binary Ebola optimization search algorithm (BEOSA) is applied for the level-1 mutation, while the simulated annealing (SA) and firefly (FFA) algorithms are investigated as the level-2 optimizer. The outcomes are the HBEOSA-SA and HBEOSA-FFA hybrids, which are investigated with the NT functions, and their corresponding variants HBEOSA-SA-NT and HBEOSA-FFA-NT, in which no NT is applied. The hybrid methods were experimentally tested on high-dimensional datasets to address the challenge of feature selection, and a comparative analysis on low-dimensional datasets was carried out to assess performance variability. Classification accuracies obtained for large, medium, and small-scale datasets are 0.995 using HBEOSA-FFA, 0.967 using HBEOSA-FFA-NT, and 0.953 using HBEOSA-FFA, respectively. Fitness and cost values for large, medium, and small-scale datasets are 0.066 and 0.934 using HBEOSA-FFA, 0.068 and 0.932 using HBEOSA-FFA, and 0.222 and 0.970 using HBEOSA-SA-NT, respectively. Findings from the study indicate that the HBEOSA-SA, HBEOSA-FFA, HBEOSA-SA-NT, and HBEOSA-FFA-NT outperformed the BEOSA.
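The closed form of the NT functions is not given in the abstract; as a hedged illustration only, the sketch below shows the general pattern of binarizing a level-1 mutation through a nested (composed) S-shaped transfer function and dispatching an exploring sub-population to a level-2 refiner. The specific sigmoid composition and the dispatch rule are assumptions, not the paper's definitions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def nested_transfer(v):
    """Hypothetical nested transfer function: one S-shaped map composed with
    another, thresholded stochastically into a binary feature mask."""
    return (sigmoid(2.0 * sigmoid(v) - 1.0) > np.random.rand(*v.shape)).astype(int)

def two_level_step(velocities, exploring, level2_refine):
    """Level-1 mutation binarized through the NT function; individuals flagged
    as being in the level-1 exploration phase are handed to the level-2
    optimizer (e.g., an SA or FFA refinement callable)."""
    mutated = nested_transfer(velocities)        # level-1 (BEOSA-style) mutation
    for i in np.flatnonzero(exploring):          # selective sub-population dispatch
        mutated[i] = level2_refine(mutated[i])
    return mutated
```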


Subject(s)
Hemorrhagic Fever, Ebola , Humans , Hemorrhagic Fever, Ebola/genetics , Algorithms
3.
J Bionic Eng ; 20(3): 1263-1295, 2023.
Article in English | MEDLINE | ID: mdl-36530517

ABSTRACT

This paper proposes a modified version of the Dwarf Mongoose Optimization algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from that of the DMO, where evaluating the probability value of each fitness is pure computational overhead and contributes nothing to the quality of the alpha or the other group members. Instead, the fittest dwarf mongoose is selected as the alpha, and a new operator ω is introduced to control the alpha's movement, thereby enhancing the exploration and exploitation abilities of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that, once the criterion is met, the exchanged babysitters interact with the mongooses replacing them to gain information about food sources and sleeping mounds, which can result in better-fitted mongooses, instead of being initialized afresh as in the DMO; the counter is then reset to zero. The proposed IDMO was used to solve the classical and CEC 2020 benchmark functions and 12 continuous/discrete engineering optimization problems. The performance of the IDMO, using different performance metrics and statistical analysis, is compared with the DMO and eight other existing algorithms. In most cases, the results show that solutions achieved by the IDMO are better than those obtained by the existing algorithms.
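As a rough illustration of the first modification only, the sketch below contrasts a DMO-style roulette-wheel alpha selection with IDMO's direct selection of the fittest individual followed by an ω-scaled move; the value of ω and the form of the move equation are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def dmo_alpha(population, fitness):
    """DMO-style alpha selection: roulette wheel over per-individual fitness
    probabilities (the step IDMO identifies as pure overhead). Assumes
    non-negative fitness values to be maximized."""
    prob = fitness / fitness.sum()
    return population[np.random.choice(len(population), p=prob)]

def idmo_alpha_move(population, fitness, omega=0.7):
    """IDMO-style: take the fittest mongoose directly as the alpha, then move
    it with an omega-scaled step toward a random peer (step form assumed)."""
    alpha = population[fitness.argmax()].copy()
    peer = population[np.random.randint(len(population))]
    return alpha + omega * np.random.rand(*alpha.shape) * (peer - alpha)
```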

4.
Arch Comput Methods Eng ; 30(1): 391-426, 2023.
Article in English | MEDLINE | ID: mdl-36059575

ABSTRACT

The Moth Flame Optimization (MFO) algorithm belongs to the swarm intelligence family and is applied to solve complex real-world optimization problems in numerous domains. MFO and its variants are easy to understand and simple to operate, and these algorithms have successfully solved optimization problems in areas such as power and energy systems, engineering design, economic dispatch, image processing, and medical applications. A comprehensive review of MFO variants is presented here, covering the classic version, binary types, modified versions, hybrid versions, multi-objective versions, and applications of the MFO algorithm in various sectors. Finally, an evaluation of the MFO algorithm is presented to measure its performance against other algorithms. The main focus of this work is to survey the MFO and its applications. The concluding remarks also discuss possible future research directions for the MFO algorithm and its variants.
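For readers unfamiliar with the base algorithm this survey covers, the core of classic MFO is a logarithmic-spiral move of each moth around its paired flame, S(M, F) = |F − M| · e^(bt) · cos(2πt) + F; a minimal Python rendering is sketched below, with the spiral constant b = 1 and the linearly shrinking range of t following the original paper's common defaults.

```python
import numpy as np

def mfo_spiral_update(moth, flame, iteration, max_iter, b=1.0):
    """Classic MFO logarithmic-spiral update:
    S(M, F) = |F - M| * exp(b*t) * cos(2*pi*t) + F,
    with t drawn from [r, 1], where r shrinks linearly from -1 to -2."""
    r = -1.0 - iteration / max_iter                    # convergence constant
    t = (r - 1.0) * np.random.rand(*moth.shape) + 1.0  # t uniform in [r, 1]
    distance = np.abs(flame - moth)
    return distance * np.exp(b * t) * np.cos(2 * np.pi * t) + flame
```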

5.
Arch Comput Methods Eng ; 30(2): 985-1040, 2023.
Article in English | MEDLINE | ID: mdl-36373091

ABSTRACT

Differential evolution (DE) is one of the most highly acknowledged population-based optimization algorithms due to its simplicity, user-friendliness, resilience, and problem-solving capacity. DE has grown steadily since its inception thanks to its ability to solve a wide range of problems in academia and industry. Different mutation techniques and parameter choices influence DE's exploration and exploitation capabilities, motivating researchers to continue working on DE. This survey depicts DE's developments over the last twelve years concerning parameter adaptation, parameter settings, mutation strategies, hybridizations, and multi-objective variants. It also summarizes the image processing problems solved by DE and its variants.
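As context for the mutation strategies and parameter settings the survey discusses, the canonical DE/rand/1/bin step mutates with one scaled difference vector (scale factor F) and recombines with binomial crossover (rate CR); a minimal sketch:

```python
import numpy as np

def de_rand_1_bin(population, i, F=0.5, CR=0.9):
    """Canonical DE/rand/1/bin: mutate with one scaled difference vector from
    three distinct random individuals, then apply binomial crossover against
    the target vector, guaranteeing at least one mutant gene survives."""
    n, dim = population.shape
    r1, r2, r3 = np.random.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = population[r1] + F * (population[r2] - population[r3])
    cross = np.random.rand(dim) < CR
    cross[np.random.randint(dim)] = True      # force at least one crossed gene
    return np.where(cross, mutant, population[i])
```

The trial vector then replaces the target only if it scores at least as well, which is DE's greedy selection step.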

6.
PLoS One ; 17(11): e0275346, 2022.
Article in English | MEDLINE | ID: mdl-36322574

ABSTRACT

This paper proposes an improvement to the dwarf mongoose optimization (DMO) algorithm called the advanced dwarf mongoose optimization (ADMO) algorithm. The goal of the improvement is to address the low convergence rate of the DMO: when the initial solutions are close to the global optimum, the subsequent alpha values must be small for the DMO to converge towards a better solution. The proposed improvement incorporates additional social behaviors of the dwarf mongoose, namely predation and mound protection as well as reproductive and group-splitting behavior, to enhance the exploration and exploitation abilities of the DMO. The ADMO also modifies the lifestyle of the alpha and subordinate groups and the foraging and seminomadic behavior of the DMO. The proposed ADMO was used to solve the Congress on Evolutionary Computation (CEC) 2011 and 2017 benchmark functions, consisting of 30 classical and hybrid composite problems and 22 real-world optimization problems. The performance of the ADMO, using different performance metrics and statistical analysis, is compared with the DMO and seven other existing algorithms. In most cases, the results show that solutions achieved by the ADMO are better than those obtained by the existing algorithms.


Subject(s)
Benchmarking , Herpestidae , Animals , Algorithms , Biological Evolution , Social Behavior
7.
PLoS One ; 17(10): e0274850, 2022.
Article in English | MEDLINE | ID: mdl-36201524

ABSTRACT

Selecting appropriate feature subsets is a vital task in machine learning. Its main goal is to remove noisy, irrelevant, and redundant features that could negatively impact the learning model's accuracy, and to improve classification performance without information loss. Therefore, increasingly advanced optimization methods have been employed to locate the optimal subset of features. This paper presents a binary version of the dwarf mongoose optimization algorithm, called the BDMO, to solve the high-dimensional feature selection problem. The effectiveness of this approach was validated using 18 high-dimensional datasets from the Arizona State University feature selection repository, and the efficacy of the BDMO was compared with other well-known feature selection techniques in the literature. The results show that the BDMO outperforms the other methods, producing the lowest average fitness value in 14 out of 18 datasets, i.e., achieving 77.77% of the overall best fitness values. The results also show that the BDMO demonstrates stability, returning the lowest standard deviation (SD) value in 13 of the 18 datasets (72.22%). Furthermore, the BDMO achieved higher validation accuracy than the other methods in 15 of the 18 datasets (83.33%). The proposed approach also yielded the highest attainable validation accuracy on the COIL20 and Leukemia datasets, vividly portraying the superiority of the BDMO.
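The abstract reports fitness values but not the fitness function itself; binary wrapper-based selectors of this kind commonly score a candidate feature mask by weighting classifier error against the fraction of features retained. The sketch below shows that common formulation using scikit-learn, with the 0.99/0.01 weighting and the 5-NN classifier being typical choices from the literature, assumed here rather than taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    """Typical wrapper fitness for binary feature selection:
    alpha * classification error + (1 - alpha) * (selected / total features).
    Lower is better; empty subsets are penalized as invalid."""
    if mask.sum() == 0:
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size
```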


Subject(s)
Herpestidae , Algorithms , Animals , Arizona , Humans , Machine Learning
8.
Neural Comput Appl ; 34(22): 19751-19790, 2022.
Article in English | MEDLINE | ID: mdl-36060097

ABSTRACT

Selecting relevant feature subsets is vital in machine learning, and multiclass feature selection is harder to perform since most classifications are binary. The feature selection problem aims to reduce the dimension of the feature set while maintaining the model's accuracy. Datasets can be classified using various methods; nevertheless, metaheuristic algorithms attract substantial attention for solving different optimization problems. For this reason, this paper presents a systematic survey of the literature on solving multiclass feature selection problems with metaheuristic algorithms, which can assist classifiers in selecting optimal or near-optimal features faster and more accurately. Metaheuristic algorithms are presented in four primary behavior-based categories, i.e., evolutionary-based, swarm-intelligence-based, physics-based, and human-based, although some works in the literature present finer-grained categorizations. Lists of metaheuristic algorithms are provided for each of these categories. Only articles on metaheuristic algorithms applied to multiclass feature selection problems from 2000 to 2022 were reviewed, organized by category and described in detail. We also consider application areas for some of the metaheuristic algorithms, and their variants, applied to multiclass feature selection. Popular multiclass classifiers for feature selection are examined as well. Moreover, we present the challenges of metaheuristic algorithms for feature selection and identify gaps for further research.

9.
Sci Rep ; 12(1): 14945, 2022 Sep 02.
Article in English | MEDLINE | ID: mdl-36056062

ABSTRACT

The dwarf mongoose optimization (DMO) algorithm, a metaheuristic developed in 2022, has been applied to solve continuous mechanical engineering design problems with a considerable balance between the exploration and exploitation phases. Still, the DMO is restricted in its exploitation phase, somewhat hindering the algorithm's optimal performance. In this paper, we propose a new hybrid method called the BDMSAO, which combines the binary variant of the DMO (BDMO) with the simulated annealing (SA) algorithm. In the modelling and implementation of the hybrid BDMSAO algorithm, the BDMO is employed as the global search method and simulated annealing (SA) as the local search component to enhance the limited exploitative mechanism of the BDMO. The new hybrid algorithm was evaluated using eighteen (18) UCI machine learning datasets of low and medium dimensions. The BDMSAO was also tested on three high-dimensional medical datasets to assess its robustness. The results show the efficacy of the BDMSAO in solving challenging feature selection problems across datasets of varying dimensions, outperforming ten other methods in the study. Specifically, the BDMSAO produced the highest attainable classification accuracy in 61.11% of cases overall, reaching 100% accuracy on 9 of the 18 datasets. It also yielded the maximum obtainable accuracy on the three high-dimensional datasets while achieving competitive performance in terms of the number of features selected.
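The SA component grafted onto the BDMO is a standard local refiner. A minimal sketch of such a bit-flip SA step with Metropolis acceptance over a binary feature mask follows; the geometric cooling schedule, single-bit neighborhood, and step count are illustrative assumptions, not the paper's settings.

```python
import math
import random

def sa_refine(solution, fitness_fn, t0=1.0, cooling=0.95, steps=50):
    """Simulated-annealing local search over a binary mask: flip one random
    bit per step, always accept improvements, and accept worse moves with
    the Metropolis probability exp((f - nf) / temp). Minimizes fitness_fn."""
    current = list(solution)
    best, best_f = list(current), fitness_fn(current)
    f, temp = best_f, t0
    for _ in range(steps):
        neighbor = list(current)
        j = random.randrange(len(neighbor))
        neighbor[j] ^= 1                               # flip one feature bit
        nf = fitness_fn(neighbor)
        if nf < f or random.random() < math.exp((f - nf) / temp):
            current, f = neighbor, nf
            if f < best_f:
                best, best_f = list(current), f
        temp *= cooling                                # geometric cooling
    return best, best_f
```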


Subject(s)
Herpestidae , Algorithms , Animals , Machine Learning , Problem Solving
10.
PLoS One ; 16(8): e0255703, 2021.
Article in English | MEDLINE | ID: mdl-34428219

ABSTRACT

The distributive power of the arithmetic operators (multiplication, division, addition, and subtraction) gives the arithmetic optimization algorithm (AOA) its unique ability to find the global optimum for the optimization problems used to test its performance. Several other mathematical operators exist with the same or better distributive properties, which can be exploited to enhance the performance of the newly proposed AOA. In this paper, we propose an improved version of the AOA, called the nAOA algorithm, which uses the high-density values that the natural logarithm and exponential operators can generate to enhance the exploratory ability of the AOA. The addition and subtraction operators carry out the exploitation. The candidate solutions are initialized using the beta distribution, and the random variables and adaptations used in the algorithm also follow the beta distribution. We test the performance of the proposed nAOA on 30 benchmark functions (20 classical and 10 composite test functions) and three engineering design benchmarks. The performance of the nAOA is compared with the original AOA and nine other state-of-the-art algorithms. The nAOA shows efficient performance on the benchmark functions and was second only to GWO for the welded beam design (WBD), compression spring design (CSD), and pressure vessel design (PVD) problems.
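A hedged sketch of the two ingredients the abstract names, beta-distributed initialization and exploration driven by exponential and logarithmic operators with addition/subtraction exploitation, is given below; the specific update rules, the beta shape parameters, and the control-parameter schedule are assumptions for illustration, not the published equations.

```python
import numpy as np

def naoa_sketch(obj, dim, bounds, n=30, max_iter=500, a=2.0, b=5.0):
    """Illustrative nAOA-style loop: beta-distributed initialization, then
    exploration via exp/log operators around the best solution and
    exploitation via addition/subtraction. Minimizes obj."""
    lo, hi = bounds
    pop = lo + (hi - lo) * np.random.beta(a, b, (n, dim))
    best = pop[np.apply_along_axis(obj, 1, pop).argmin()].copy()
    for t in range(max_iter):
        mop = 1.0 - (t / max_iter) ** 0.2            # decreasing control parameter
        r = np.random.beta(a, b, (n, dim))           # beta-distributed randoms
        explore = np.random.rand(n, 1) > 0.5
        up = np.where(r > 0.5,
                      best * np.exp(mop * (2 * r - 1)),   # exponential operator
                      best * (1 + np.log1p(mop * r)))     # logarithmic operator
        down = np.where(r > 0.5,
                        best + mop * r * (hi - lo),       # addition operator
                        best - mop * r * (hi - lo))       # subtraction operator
        pop = np.clip(np.where(explore, up, down), lo, hi)
        fit = np.apply_along_axis(obj, 1, pop)
        if fit.min() < obj(best):
            best = pop[fit.argmin()].copy()
    return best, obj(best)
```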


Subject(s)
Algorithms , Engineering/methods , Equipment Design , Problem Solving , Benchmarking , Computer Simulation , Pressure , Welding/instrumentation