Results 1 - 20 of 23
1.
IEEE Trans Cybern ; PP, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38215330

ABSTRACT

Evolutionary algorithms (EAs), such as the genetic algorithm (GA), offer an elegant way to handle combinatorial optimization problems (COPs). However, limited by expertise and resources, most users lack the capability to implement EAs for solving COPs. An intuitive and promising solution is to outsource evolutionary operations to a cloud server; however, this raises privacy concerns. To this end, this article proposes a novel computing paradigm called evolutionary computation as a service (ECaaS), in which a cloud server renders evolutionary computation services for users while ensuring their privacy. Following the concept of ECaaS, this article presents PEGA, a privacy-preserving GA designed specifically for COPs. PEGA enables users, regardless of their domain expertise or resource availability, to outsource COPs to a cloud server that holds a competitive GA and approximates the optimal solution while safeguarding privacy. Notably, PEGA features the following characteristics. First, PEGA empowers users without domain expertise or sufficient resources to solve COPs effectively. Second, PEGA protects the privacy of users by preventing the leakage of optimization problem details. Third, PEGA performs comparably to the conventional GA in approximating the optimal solution. To realize its functionality, we implement PEGA on a twin-server architecture and evaluate it on two widely known COPs: 1) the traveling salesman problem (TSP) and 2) the 0/1 knapsack problem (KP). In particular, we utilize encryption to protect users' privacy and carefully design a suite of secure computing protocols to support the evolutionary operators of the GA on encrypted chromosomes. Privacy analysis demonstrates that PEGA preserves the confidentiality of COP contents. Experimental results on several TSP and KP datasets reveal that PEGA performs equivalently to the conventional GA in approximating the optimal solution.
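
As a loose illustration of the twin-server idea, the toy Python sketch below secret-shares a TSP distance matrix additively between two servers, each of which can evaluate its share of a tour length locally; only the combined fitness is ever reconstructed. This is a didactic stand-in, not PEGA's actual protocol suite: the modulus, the sharing scheme, and the plain-Python setting are all assumptions of this sketch.

    import random

    Q = 2**61 - 1  # public modulus (illustrative choice)

    def share(value):
        """Split an integer into two additive shares mod Q."""
        r = random.randrange(Q)
        return r, (value - r) % Q

    def share_matrix(dist):
        n = len(dist)
        s1 = [[0] * n for _ in range(n)]
        s2 = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                s1[i][j], s2[i][j] = share(dist[i][j])
        return s1, s2

    def tour_length_share(shares, tour):
        """Each server sums its shares along the tour edges (a linear operation)."""
        n = len(tour)
        return sum(shares[tour[i]][tour[(i + 1) % n]] for i in range(n)) % Q

    # Client side: secret-share the problem, send one share to each server.
    dist = [[0, 5, 9], [5, 0, 3], [9, 3, 0]]
    s1, s2 = share_matrix(dist)

    # Server side: evaluate a candidate tour without seeing the distances.
    tour = [0, 2, 1]
    f1 = tour_length_share(s1, tour)
    f2 = tour_length_share(s2, tour)

    # Reconstruction (e.g., by the client): only the fitness value is revealed.
    print((f1 + f2) % Q)  # 17 = 9 + 3 + 5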

2.
IEEE Trans Cybern ; PP, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37167035

ABSTRACT

Binary hashing is an effective approach for content-based image retrieval, and learning binary codes with neural networks has attracted increasing attention in recent years. However, training hashing neural networks is difficult because of the binary constraint on hash codes. In addition, neural networks are easily affected by small perturbations in the input data. Therefore, a sensitive binary hashing autoencoder (SBHA) is proposed to handle these challenges by introducing stochastic sensitivity for image retrieval. SBHA extracts meaningful features from the original inputs and maps them onto a binary space to obtain binary hash codes directly. Unlike ordinary autoencoders, SBHA is trained by simultaneously minimizing the reconstruction error, the stochastic sensitive error, and the binary constraint error. By minimizing the stochastic sensitive error, SBHA reduces output sensitivity to unseen samples that differ from training samples by small perturbations, which helps to learn more robust features. Moreover, SBHA is trained with a binary constraint and outputs binary codes directly. To tackle the difficulty of optimization under the binary constraint, we train SBHA with alternating optimization. Experimental results on three benchmark datasets show that SBHA is competitive and significantly outperforms state-of-the-art methods for binary hashing.
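
The training objective can be pictured as three summed error terms. The NumPy sketch below computes toy versions of them for a one-layer autoencoder; the tanh encoder, Gaussian perturbations, and equal weighting are assumptions of this sketch, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def losses(X, We, Wd, sigma=0.05, n_perturb=8):
        """Toy one-layer autoencoder losses: reconstruction, stochastic
        sensitive, and binary-constraint terms (weights are assumptions)."""
        H = np.tanh(X @ We)                 # real-valued codes in (-1, 1)
        recon = np.mean((X - H @ Wd) ** 2)

        # Stochastic sensitive error: output change under small input perturbations.
        sens = 0.0
        for _ in range(n_perturb):
            Xp = X + sigma * rng.standard_normal(X.shape)
            sens += np.mean((np.tanh(Xp @ We) @ Wd - H @ Wd) ** 2)
        sens /= n_perturb

        # Binary constraint error: push codes toward {-1, +1}.
        binary = np.mean((np.abs(H) - 1.0) ** 2)
        return recon, sens, binary

    X = rng.standard_normal((32, 16))
    We = rng.standard_normal((16, 8)) * 0.1
    Wd = rng.standard_normal((8, 16)) * 0.1
    r, s, b = losses(X, We, Wd)
    print(f"total = {r + s + b:.4f}")  # terms would be weighted in practice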

3.
IEEE Trans Cybern ; 53(11): 7136-7149, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37015519

ABSTRACT

Centralized particle swarm optimization (PSO) does not fully exploit the potential of distributed or parallel computing and suffers from a single point of failure. Moreover, each particle in PSO comprises a potential solution (e.g., a traveling route or neural network model parameters), which is essentially private data. Unfortunately, neither centralized nor distributed PSO algorithms have previously protected privacy effectively. Inspired by secure multiparty computation and multiagent systems, this article proposes a privacy-preserving multiagent PSO algorithm (called PriMPSO) to protect each particle's data and enable data sharing in a privacy-preserving manner. The goal of PriMPSO is to protect each particle's data in a distributed computing paradigm via existing PSO algorithms with competitive performance. Specifically, each particle is executed by an independent agent with its own data, and all agents jointly perform global optimization without sacrificing any particle's data. Thorough investigations show that selecting an exemplar from all particles and updating particles through the exemplar are critical operations in PSO algorithms. To this end, this article designs a privacy-preserving exemplar selection algorithm and a privacy-preserving triple computation protocol to select exemplars and update particles, respectively. Strict privacy analyses and extensive experiments on a benchmark and a realistic task confirm that PriMPSO not only protects particles' privacy but also achieves convergence performance comparable to existing PSO algorithms in approximating an optimal solution.
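
The abstract does not spell out the triple computation protocol, but the classical Beaver-triple construction conveys how multiplication on additively secret-shared values works. The sketch below is that textbook construction (with a trusted dealer standing in for the offline phase), not PriMPSO's actual protocol; the modulus is an illustrative choice.

    import random

    Q = 2**61 - 1  # public modulus (illustrative)

    def share(v):
        r = random.randrange(Q)
        return r, (v - r) % Q

    def beaver_multiply(a, b):
        """Multiply secrets a and b from additive shares using a Beaver triple."""
        # A dealer (or offline phase) prepares a random triple with z = x * y.
        x, y = random.randrange(Q), random.randrange(Q)
        z = (x * y) % Q
        a1, a2 = share(a); b1, b2 = share(b)
        x1, x2 = share(x); y1, y2 = share(y); z1, z2 = share(z)

        # The parties open d = a - x and e = b - y (reveals nothing about a, b).
        d = (a1 - x1 + a2 - x2) % Q
        e = (b1 - y1 + b2 - y2) % Q

        # Shares of a*b = z + d*y + e*x + d*e (the d*e term added by one party).
        c1 = (z1 + d * y1 + e * x1 + d * e) % Q
        c2 = (z2 + d * y2 + e * x2) % Q
        return (c1 + c2) % Q

    print(beaver_multiply(6, 7))  # 42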

4.
IEEE Trans Cybern ; 53(10): 6598-6611, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36446002

ABSTRACT

Surrogate-assisted evolutionary algorithms (EAs) have been proposed in recent years to solve data-driven optimization problems. Most existing surrogate-assisted EAs are designed for centralized optimization and do not account for the challenges posed by data distributed at the network edge in the era of the Internet of Things. To this end, we propose edge-cloud co-evolutionary algorithms (ECCoEAs) to solve distributed data-driven optimization problems in which data are collected by edge servers. Specifically, we first propose a distributed ECCoEA framework consisting of a communication mechanism, edge model management, and cloud model management. The communication mechanism avoids deadlock during the collaboration between the edge servers and the cloud server. In edge model management, the edge models are trained on local historical data together with new solutions generated during co-evolution and their real evaluation values. In cloud model management, the black-box prediction functions received from the edge models are used to find promising solutions that guide edge model management. Moreover, two ECCoEAs are implemented, demonstrating the generality of the framework. To verify the performance of algorithms on distributed data-driven optimization problems, we design a novel benchmark test suite. Performance on the benchmarks and on practical distributed clustering problems shows the effectiveness of ECCoEAs.
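
A minimal sketch of this division of labor might look as follows: each edge server wraps its local data in a black-box prediction function, and the cloud ranks candidate solutions by the edges' predictions. The k-nearest-neighbor surrogate, the hidden objective, and the averaging rule are all assumptions of the sketch, not the framework's actual model management.

    import numpy as np

    rng = np.random.default_rng(1)

    def make_edge_surrogate(X, y, k=3):
        """Edge side: wrap local data in a black-box prediction function
        (a k-NN surrogate here; the real framework trains edge models)."""
        def predict(x):
            d = np.linalg.norm(X - x, axis=1)
            return float(np.mean(y[np.argsort(d)[:k]]))
        return predict  # only this function is shipped to the cloud

    def cloud_select(predictors, candidates):
        """Cloud side: rank candidates by the averaged edge predictions."""
        scores = [np.mean([p(c) for p in predictors]) for c in candidates]
        return candidates[int(np.argmin(scores))]

    f = lambda x: np.sum(x ** 2)  # hidden objective, sampled only at the edges
    edges = []
    for _ in range(3):  # three edge servers with their own local data
        X = rng.uniform(-5, 5, size=(40, 2))
        y = np.array([f(x) for x in X])
        edges.append(make_edge_surrogate(X, y))

    cands = rng.uniform(-5, 5, size=(200, 2))
    print(cloud_select(edges, cands))  # a promising point near the origin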

5.
Complex Intell Systems ; 9(2): 2189-2204, 2023.
Article in English | MEDLINE | ID: mdl-36405533

ABSTRACT

Mechanism-driven models based on transmission dynamics and statistical models driven by public health data are the two main approaches to simulating and predicting emerging infectious diseases. In this paper, we combine these two approaches to develop a more comprehensive model for the simulation and prediction of emerging infectious diseases. First, we extend a standard epidemic model, the susceptible-exposed-infected-recovered (SEIR) model, with population migration; this provides a biological spreading process for emerging infectious diseases. Second, to determine suitable parameters for the model, we propose a data-driven approach in which public health data and population migration data are assembled, and an objective function is defined to minimize the error with respect to these data. Third, based on the proposed model, we develop a swarm-optimizer-assisted simulation and prediction method comprising two modules. In the first module, a level-based learning swarm optimizer optimizes the parameters required by the epidemic mechanism. In the second module, the optimized parameters are used to predict the spread of emerging infectious diseases. Finally, various experiments validate the effectiveness of the proposed model and method.
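
A minimal sketch of the first ingredient, SEIR dynamics coupled with population migration, follows. The two-region setup, all rate values, and the Euler integrator are illustrative assumptions, not the paper's fitted parameters.

    import numpy as np

    def seir_migration_step(state, beta, sigma, gamma, M, dt=0.1):
        """One Euler step of SEIR dynamics for several regions coupled by a
        migration-rate matrix M (M[i, j] = per-capita rate of moving i -> j)."""
        S, E, I, R = state
        N = S + E + I + R
        new_inf = beta * S * I / N
        dS = -new_inf
        dE = new_inf - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        out = M.sum(axis=1)                              # outflow rates
        flows = [x @ M - x * out for x in (S, E, I, R)]  # inflow minus outflow
        return tuple(x + dt * (dx + fl)
                     for x, dx, fl in zip(state, (dS, dE, dI, dR), flows))

    # Two regions; the epidemic is seeded in region 0 only.
    S = np.array([9990.0, 10000.0]); E = np.array([0.0, 0.0])
    I = np.array([10.0, 0.0]);       R = np.array([0.0, 0.0])
    M = np.array([[0.0, 0.01], [0.01, 0.0]])  # symmetric migration rates
    state = (S, E, I, R)
    for _ in range(1000):  # simulate 100 time units
        state = seir_migration_step(state, beta=0.5, sigma=0.2, gamma=0.1, M=M)
    print([v.round(1) for v in state])  # the infection has reached region 1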

6.
Complex Intell Systems ; 8(5): 3989-4003, 2022.
Article in English | MEDLINE | ID: mdl-35284209

ABSTRACT

An important problem in financial optimization is searching for robust investment plans that maximize return while minimizing risk. The market environment, namely the scenario of the problem, always affects the return and risk of an investment plan. Financial optimization problems in which the performance of investment plans depends largely on the scenarios are defined as scenario-based optimization problems, and this kind of uncertainty is called scenario-based uncertainty. The consideration of scenario-based uncertainty in multi-objective optimization remains a largely underexplored domain. In this paper, a nondominated sorting estimation of distribution algorithm with clustering (NSEDA-C) is proposed to deal with scenario-based robust financial problems. A robust group insurance portfolio problem is taken as an instance to study the features of such problems. A simplified simulation method is applied to measure the return, while an estimation model is devised to measure the risk. Applying NSEDA-C to the group insurance portfolio problem for real-world insurance products has validated the effectiveness of the proposed algorithm.
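
The nondominated sorting at the heart of NSEDA-C can be sketched directly. Below, candidate plans are (return, risk) pairs with return maximized and risk minimized; the plan values are made up for illustration.

    def dominates(a, b):
        """a dominates b: return no worse, risk no worse, one strictly better."""
        ret_a, risk_a = a
        ret_b, risk_b = b
        return (ret_a >= ret_b and risk_a <= risk_b
                and (ret_a > ret_b or risk_a < risk_b))

    def nondominated_fronts(points):
        """Group indices into successive nondominated fronts."""
        remaining = list(range(len(points)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(points[j], points[i])
                                for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    plans = [(0.08, 0.10), (0.06, 0.04), (0.10, 0.20), (0.05, 0.12)]
    print(nondominated_fronts(plans))  # [[0, 1, 2], [3]]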

7.
IEEE Trans Cybern ; 52(1): 51-64, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32167922

ABSTRACT

Multimodal optimization problems require identifying multiple satisfactory solutions. Most existing works conduct the search based only on information from the current population, which can be inefficient. This article proposes a probabilistic niching evolutionary computation framework that guides the future search with richer historical information, in order to locate diverse and high-quality solutions. A binary space partition tree is built to structurally organize the space-visiting information. Based on the tree, a probabilistic niching strategy is defined to reinforce exploration and exploitation by making full use of the structural historical information. The proposed framework is general and can incorporate various baseline niching algorithms. In this article, we integrate it with two niching algorithms: 1) a distance-based differential evolution algorithm and 2) a topology-based particle swarm optimization algorithm. The two new algorithms are evaluated on 20 multimodal optimization test functions. The experimental results show that the proposed framework helps the algorithms obtain competitive performance: they outperform a number of state-of-the-art niching algorithms on most of the test functions.
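
A toy version of the space-partitioning bookkeeping is sketched below: a binary tree splits a cell on its widest dimension once it has been visited often enough, and sampling is biased toward the less-visited child at each split. The split threshold and the sampling rule are assumptions of this sketch, not the paper's probabilistic niching strategy.

    import random

    class BSPNode:
        """Node of a binary space partition tree that records visit counts."""
        def __init__(self, lo, hi):
            self.lo, self.hi = list(lo), list(hi)
            self.count, self.dim, self.left, self.right = 0, None, None, None

        def insert(self, x, split_at=10):
            """Record a visited point; split a crowded leaf on its widest dim."""
            self.count += 1
            if self.left is not None:
                mid = 0.5 * (self.lo[self.dim] + self.hi[self.dim])
                (self.left if x[self.dim] < mid else self.right).insert(x, split_at)
            elif self.count >= split_at:
                self.dim = max(range(len(self.lo)),
                               key=lambda d: self.hi[d] - self.lo[d])
                mid = 0.5 * (self.lo[self.dim] + self.hi[self.dim])
                hi_l = list(self.hi); hi_l[self.dim] = mid
                lo_r = list(self.lo); lo_r[self.dim] = mid
                self.left = BSPNode(self.lo, hi_l)
                self.right = BSPNode(lo_r, self.hi)

        def sample(self):
            """Bias exploration toward the less-visited half of each split."""
            if self.left is None:
                return [random.uniform(l, h) for l, h in zip(self.lo, self.hi)]
            total = self.left.count + self.right.count + 2
            p_left = (self.right.count + 1) / total  # fewer visits, higher prob
            return (self.left if random.random() < p_left else self.right).sample()

    root = BSPNode([0.0, 0.0], [1.0, 1.0])
    for _ in range(200):  # visiting history concentrated in one corner
        root.insert([random.uniform(0, 0.3), random.uniform(0, 0.3)])
    print(root.sample())  # new samples drift toward unexplored regions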


Subject(s)
Algorithms
8.
IEEE Trans Cybern ; 52(3): 1960-1976, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33296320

ABSTRACT

High-dimensional problems are ubiquitous in many fields, yet remain challenging to solve. To tackle such problems with high effectiveness and efficiency, this article proposes a simple yet efficient stochastic dominant learning swarm optimizer. In particular, this optimizer not only balances swarm diversity and convergence speed properly but also consumes as little computing time and space as possible to locate the optima. In this optimizer, a particle is updated only when its two exemplars, randomly selected from the current swarm, are its dominators. In this way, each particle has an implicit probability of directly entering the next generation, making it possible to maintain high swarm diversity. Since each updated particle learns only from its dominators, good convergence is likely to be achieved. To alleviate the sensitivity of this optimizer to the newly introduced parameters, an adaptive parameter adjustment strategy is further designed based on the evolutionary information of particles at the individual level. Finally, extensive experiments on two high-dimensional benchmark sets substantiate that the devised optimizer achieves competitive or even better performance in terms of solution quality, convergence speed, scalability, and computational cost, compared with several state-of-the-art methods. In particular, the experimental results show that the proposed optimizer performs excellently on partially separable problems, especially partially separable multimodal problems, which are very common in real-world applications. In addition, an application to feature selection problems further demonstrates the effectiveness of this optimizer in tackling real-world problems.
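
The core update can be sketched as follows: a particle moves only when both of its two randomly chosen exemplars dominate it (have better fitness), learning more strongly from the better one. The coefficient values and the sphere objective are assumptions of this sketch, and the paper's adaptive parameter strategy is omitted.

    import numpy as np

    rng = np.random.default_rng(2)

    def sphere(x):
        return float(np.sum(x ** 2))

    def sdl_step(X, V, fit, r1=0.5, r2=0.4, r3=0.1):
        """One generation of a stochastic dominant-learning update (sketch)."""
        n, d = X.shape
        for i in range(n):
            a, b = rng.choice([j for j in range(n) if j != i], 2, replace=False)
            if fit[a] > fit[i] or fit[b] > fit[i]:
                continue  # not dominated by both exemplars: survive unchanged
            if fit[a] > fit[b]:
                a, b = b, a  # a is now the better of the two dominators
            V[i] = (r1 * rng.random(d) * V[i]
                    + r2 * rng.random(d) * (X[a] - X[i])
                    + r3 * rng.random(d) * (X[b] - X[i]))
            X[i] = X[i] + V[i]
            fit[i] = sphere(X[i])

    X = rng.uniform(-10, 10, size=(30, 5))
    V = np.zeros_like(X)
    fit = np.array([sphere(x) for x in X])
    for _ in range(200):
        sdl_step(X, V, fit)
    print(round(fit.min(), 6))  # best sphere value after 200 generations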

9.
IEEE Trans Cybern ; 51(3): 1651-1665, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31380779

ABSTRACT

The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful evolutionary algorithm for single-objective real-valued optimization. However, its time and space complexity may preclude its use in high-dimensional decision spaces. Recent studies suggest that putting sparse or low-rank constraints on the structure of the covariance matrix can improve the efficiency of CMA-ES in handling large-scale problems. Following this idea, this paper proposes a search direction adaptation evolution strategy (SDA-ES) that achieves linear time and space complexity. SDA-ES models the covariance matrix with an identity matrix and multiple search directions, and uses a heuristic to update the search directions in a way similar to principal component analysis. We also generalize the traditional 1/5th success rule to adapt the mutation strength, which exhibits the derandomization property. Numerical comparisons with nine state-of-the-art algorithms are carried out on 31 test problems. The experimental results show that SDA-ES is invariant under rotational transformations of the search space and is scalable with respect to the number of variables. It also achieves competitive performance on generic black-box problems, demonstrating its effectiveness in keeping a good tradeoff between solution quality and computational efficiency.
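
A drastically simplified (1+1)-style sketch of the sampling model follows: mutations are drawn from an identity covariance plus a few rank-one search directions, with a generalized 1/5th success rule adapting the step size. The direction update here (keep the most recent successful steps) is a crude stand-in for SDA-ES's PCA-like adaptation, and all constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def sphere(x):
        return float(np.sum(x ** 2))

    n, m = 20, 3                       # dimension, number of search directions
    x = rng.uniform(-5, 5, n)
    fx = sphere(x)
    sigma, gamma, success = 1.0, 1.0, 0
    Q = [rng.standard_normal(n) / np.sqrt(n) for _ in range(m)]

    for t in range(1, 3001):
        # Sample from N(0, I + gamma^2 * sum_i q_i q_i^T) in O(mn) time.
        step = rng.standard_normal(n) + sum(gamma * rng.standard_normal() * q
                                            for q in Q)
        y = x + sigma * step
        fy = sphere(y)
        if fy < fx:
            x, fx, success = y, fy, success + 1
            Q.pop(0)                   # crude stand-in for the PCA-like update:
            Q.append(step / np.linalg.norm(step))  # keep recent good directions
        if t % 50 == 0:                # generalized 1/5th success rule
            sigma *= np.exp((success / 50.0 - 0.2) / 3.0)
            success = 0

    print(round(fx, 8))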

10.
IEEE Trans Cybern ; 51(12): 6105-6118, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32031961

ABSTRACT

The resource-constrained project scheduling problem (RCPSP) is a basic problem in project management. The net present value (NPV) of discounted cash flow is used in many studies as a criterion to evaluate the financial aspects of the RCPSP. While most existing studies focus only on the contractor's NPV, this article addresses a practical extension of the RCPSP, called the payment scheduling negotiation problem (PSNP), which considers the interests of both the contractor and the client. To maximize the NPVs of both sides and achieve a win-win solution, the two participants negotiate to determine an activity schedule and a payment plan for the project. The challenges arise in three aspects: 1) the client's NPV and the contractor's NPV are two conflicting objectives; 2) both participants have special preferences in decision making; and 3) the RCPSP is nondeterministic polynomial-time hard (NP-hard). To overcome these challenges, this article proposes a new approach with the following features. First, the problem is reformulated as a biobjective optimization problem with preferences. Second, to address the different preferences of the client and the contractor, a strategy of multilevel regions of interest is presented. Third, this strategy is integrated into the nondominated sorting genetic algorithm II (NSGA-II) to solve the PSNP efficiently. In the experiments, the proposed algorithm is compared with both a double-level optimization approach and a multiobjective optimization approach. The experimental results validate that the proposed method focuses the search on the region of interest (ROI) and provides more satisfactory solutions.
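
The NPV criterion itself is simple, and a small worked sketch shows why the two NPVs conflict: a payment is income for the contractor but an outflow for the client, so shifting its timing favors one side at the other's expense. All numbers below are illustrative.

    def npv(cash_flows, rate):
        """Net present value of (time, amount) cash flows: sum a / (1+r)^t."""
        return sum(a / (1 + rate) ** t for t, a in cash_flows)

    # Toy project: the client pays 100 at time T; the contractor spends 80 at
    # t = 0. Paying earlier raises the contractor's NPV and lowers the
    # client's, which is exactly the conflict the PSNP negotiates over.
    rate = 0.05
    for T in (1, 3, 5):
        contractor = npv([(0, -80), (T, 100)], rate)
        client = npv([(T, -100)], rate)   # the client's benefit terms omitted
        print(T, round(contractor, 2), round(client, 2))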


Subject(s)
Algorithms , Negotiating , Humans
11.
IEEE Trans Cybern ; 51(8): 4134-4147, 2021 Aug.
Article in English | MEDLINE | ID: mdl-31613788

ABSTRACT

In many practical applications, it is crucial to perform automatic data clustering without knowing the number of clusters in advance. The evolutionary computation paradigm is well suited to this task, but existing algorithms suffer from several deficiencies, such as encoding redundancy and cross-dimension learning errors. In this article, we propose a novel elastic differential evolution algorithm for automatic data clustering. Unlike traditional methods, the proposed algorithm considers each clustering layout as a whole and adapts the cluster number and cluster centroids inherently through variable-length encoding and the evolution operators. The encoding scheme contains no redundancy. To enable individuals of different lengths to exchange information properly, we develop a subspace crossover and a two-phase mutation operator. The operators employ the basic method of differential evolution and, in addition, consider the spatial information of cluster layouts when generating offspring solutions. In particular, each dimension of the parameter vector interacts with its correlated dimensions, which not only adapts the cluster number but also avoids cross-dimension learning errors. The experimental results show that our algorithm outperforms state-of-the-art algorithms in that it is able to identify the correct number of clusters and obtain good cluster validation values.
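
The variable-length encoding can be pictured as candidates carrying different numbers of centroids, compared by a cluster validation value. The sketch below scores candidates with the Calinski-Harabasz index, an assumed choice that is comparable across different cluster numbers, rather than the paper's specific validation measure.

    import numpy as np

    rng = np.random.default_rng(4)

    def ch_index(X, centroids):
        """Calinski-Harabasz score (higher is better); usable across
        candidates that encode different numbers of centroids."""
        k, n = len(centroids), len(X)
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        overall = X.mean(axis=0)
        W = sum(((X[labels == j] - centroids[j]) ** 2).sum() for j in range(k))
        B = sum((labels == j).sum() * ((centroids[j] - overall) ** 2).sum()
                for j in range(k))
        return (B / (k - 1)) / (W / (n - k))

    # Data with three true clusters; variable-length candidates compete.
    X = np.vstack([rng.normal(c, 0.3, size=(50, 2))
                   for c in ((0, 0), (4, 0), (2, 3))])
    for k in (2, 3, 4):
        cents = X[rng.choice(len(X), k, replace=False)]  # random candidate
        labels = np.argmin(((X[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
        cents = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else cents[j] for j in range(k)])  # one refinement
        print(k, round(ch_index(X, cents), 1))  # k = 3 usually scores highest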

12.
IEEE Trans Cybern ; 51(7): 3752-3766, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32175884

ABSTRACT

The control of virus spreading over complex networks with a limited budget has attracted much attention but remains challenging. This article addresses the combinatorial, discrete resource allocation problems (RAPs) arising in virus spreading control. To meet the challenges of increasing network scales and to improve solving efficiency, an evolutionary divide-and-conquer algorithm is proposed, namely, a coevolutionary algorithm with network-community-based decomposition (NCD-CEA). It is characterized by a community-based dividing technique and a cooperative-coevolution conquering approach. First, to reduce time complexity, NCD-CEA divides a network into multiple communities using a modified community detection method, so that the most relevant variables in the solution space are clustered together. The problem and the global swarm are subsequently decomposed into subproblems and subswarms with low-dimensional embeddings. Second, to obtain high-quality solutions, an alternating evolutionary approach is designed that promotes the evolution of the subswarms and the global swarm in turn, with subsolutions evaluated by local fitness functions and global solutions evaluated by a global fitness function. Extensive experiments on different networks show that NCD-CEA achieves competitive performance in solving RAPs. This article advances the control of virus spreading over large-scale networks.

13.
IEEE Trans Cybern ; 51(11): 5559-5572, 2021 Nov.
Article in English | MEDLINE | ID: mdl-32915756

ABSTRACT

Evacuation path optimization (EPO) is a crucial problem in crowd and disaster management. When dynamic evacuee velocity is considered, the EPO problem becomes nondeterministic polynomial-time hard (NP-hard). Furthermore, since not a single evacuation path but multiple mutually restricted paths must be found, the crowd evacuation problem becomes even more challenging in both solution encoding and optimal solution searching. To address these challenges, this article puts forward an ant colony evacuation planner (ACEP) with a novel solution construction strategy and an incremental flow assignment (IFA) method. First, unlike traditional ant algorithms, in which each ant builds a complete solution independently, ACEP uses the entire colony of ants to simulate the behavior of the crowd during evacuation. In this way, the colony works cooperatively to find a set of evacuation paths simultaneously, so that multiple evacuation paths can be found effectively. Second, to reduce the execution time of ACEP, an IFA method is introduced, in which fractions of evacuees are assigned step by step to imitate the group-based evacuation process in the real world, further improving the efficiency of ACEP. Numerical experiments are conducted on a set of networks of different sizes, and the results demonstrate that ACEP is promising.
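
The incremental flow assignment idea can be sketched with a tiny network: evacuees are assigned in fractions, edge travel times grow with the accumulated load (a simple linear congestion model assumed here), and later groups may therefore switch to different paths. The network, capacities, and congestion rule are illustrative.

    import heapq

    def dijkstra(graph, src, dst):
        """Shortest path under current travel times; graph[u] = {v: time}."""
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    def incremental_assignment(free_time, capacity, demand, steps=10):
        """Assign evacuees in fractions; travel times grow with edge load."""
        load = {e: 0.0 for e in free_time}
        for _ in range(steps):
            graph = {}
            for (u, v), t0 in free_time.items():
                graph.setdefault(u, {})[v] = t0 * (1 + load[(u, v)] / capacity[(u, v)])
            path = dijkstra(graph, "S", "EXIT")
            for e in zip(path, path[1:]):
                load[e] += demand / steps   # one group of evacuees departs
            print(path)

    free_time = {("S", "A"): 1.0, ("S", "B"): 1.5,
                 ("A", "EXIT"): 1.0, ("B", "EXIT"): 1.0}
    capacity = {e: 50.0 for e in free_time}
    incremental_assignment(free_time, capacity, demand=200)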


Subject(s)
Algorithms , Crowding
14.
IEEE Trans Cybern ; 50(7): 3393-3408, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30969936

ABSTRACT

Large-scale optimization with high dimensionality and high computational cost has become ubiquitous. To tackle such challenging problems efficiently, devising distributed evolutionary computation algorithms is imperative. To this end, this paper proposes a distributed swarm optimizer based on a special master-slave model. Specifically, in this distributed optimizer, the master is mainly responsible for communication with the slaves, while each slave iterates a swarm to traverse the solution space. An asynchronous and adaptive communication strategy based on a request-response mechanism is devised to let the slaves communicate with the master efficiently; in particular, the communication between the master and each slave is adaptively triggered during the iteration. To help the slaves search the space efficiently, an elite-guided learning strategy is designed that uses elite particles in the current swarm and the historically best solutions found by different slaves to guide the update of particles. Together, this distributed optimizer asynchronously iterates multiple swarms to collaboratively seek the optimum in parallel. Extensive experiments on a widely used large-scale benchmark set substantiate that the distributed optimizer can: 1) achieve competitive effectiveness in terms of solution quality compared with state-of-the-art large-scale methods; 2) accelerate execution compared with the sequential algorithm, obtaining almost linear speedup as the number of cores increases; and 3) preserve good scalability to higher-dimensional problems.

15.
IEEE Trans Cybern ; 50(9): 4053-4065, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31295135

ABSTRACT

The rapid development of online social networks not only enables prompt and convenient dissemination of desirable information but also incurs fast and wide propagation of undesirable information. A common way to control the spread of pollutants is to block some nodes, but such a strategy may affect the service quality of the social network and lead to a high control cost if too many nodes are blocked. This paper formulates the node selection problem as a biobjective optimization problem: find a subset of nodes to block so that the effect of the control is maximized while its cost is minimized. To solve this problem, we design an ant colony optimization algorithm with adaptive dimension size selection under the decomposition-based multiobjective evolutionary algorithm framework (MOEA/D-ADACO). The proposed algorithm divides the biobjective problem into a set of single-objective subproblems, and each ant takes charge of optimizing one subproblem. Moreover, two types of pheromone and heuristic information are incorporated into MOEA/D-ADACO: one for dimension size selection and one for node selection. While constructing solutions, the ants first determine the dimension size according to the former type of pheromone and heuristic information; then they select a specific number of nodes to build solutions according to the latter type. Experiments conducted on a set of real-world online social networks confirm that the proposed biobjective optimization model and the developed MOEA/D-ADACO are promising for pollutant spreading control.
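
The decomposition underlying the MOEA/D framework can be sketched with the common Tchebycheff scalarization: each weight vector turns the (control effect, cost) pair into one single-objective subproblem. The plans, the normalized objective values, and the scalarization choice below are illustrative assumptions.

    def tchebycheff(f, weights, ideal):
        """Tchebycheff scalarization used in MOEA/D-style decomposition:
        each weight vector defines one single-objective subproblem."""
        return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

    # Two objectives per blocking plan: (negated control effect, normalized
    # cost), both minimized. Values are made up for illustration.
    plans = {"block_hubs": (-0.9, 0.4), "block_random": (-0.4, 0.1)}
    ideal = (-1.0, 0.0)
    for i in range(5):  # five evenly spread weight vectors = five subproblems
        w = (i / 4, 1 - i / 4)
        best = min(plans, key=lambda p: tchebycheff(plans[p], w, ideal))
        print(w, best)  # different subproblems prefer different plans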


Subject(s)
Information Dissemination , Models, Biological , Social Networking , Algorithms , Computer Heuristics , Environmental Pollutants , Internet , Models, Statistical , Pheromones
16.
IEEE Trans Cybern ; 48(7): 2139-2153, 2018 Jul.
Article in English | MEDLINE | ID: mdl-28792909

ABSTRACT

This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this class, and they can be very different. However, since permutation-based MOCOPs share the inherent similarity that the structure of their search space usually takes the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To coordinate with this property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. Problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

17.
IEEE Trans Cybern ; 48(7): 2166-2180, 2018 Jul.
Article in English | MEDLINE | ID: mdl-28767384

ABSTRACT

Nowadays, large-scale optimization problems are ubiquitous in many research fields. To deal with such problems efficiently, this paper proposes a distributed differential evolution with adaptive mergence and split (DDE-AMS) on subpopulations. The novel mergence and split operators are designed to make full use of limited population resources, which is important for large-scale optimization. They are performed adaptively based on the performance of the subpopulations. During the evolution, once a subpopulation finds a promising region, the currently worst-performing subpopulation merges into it. If the merged subpopulation cannot continue to provide competitive solutions, it is split in half. In this way, the number of subpopulations is adaptively adjusted and better-performing subpopulations obtain more individuals, so population resources can be adaptively arranged among subpopulations during the evolution. Moreover, the proposed algorithm is implemented in a parallel master-slave manner. Extensive experiments are conducted on 20 widely used large-scale benchmark functions. Experimental results demonstrate that DDE-AMS achieves competitive or even better performance compared with several state-of-the-art algorithms. The effects of the DDE-AMS components, its adaptive behavior, scalability, and parameter sensitivity are also studied. Finally, we investigate the speedup ratios of DDE-AMS under different computation resources.

18.
IEEE Trans Cybern ; 47(9): 2924-2937, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28186918

ABSTRACT

The popular performance profiles and data profiles for benchmarking deterministic optimization algorithms are extended to benchmark stochastic algorithms for global optimization problems. A general confidence interval is employed to replace the significance test, which is popular in traditional benchmarking methods but has drawn increasing criticism. By computing confidence bounds of the general confidence interval and visualizing them with performance profiles and/or data profiles, our benchmarking method can be used to compare stochastic optimization algorithms graphically. Compared with traditional benchmarking methods, our method summarizes performance statistically across problems and is therefore suitable for large sets of benchmark problems. Compared with sample-mean-based benchmarking methods, e.g., the method adopted in the black-box optimization benchmarking workshop/competition, our method considers not only sample means but also sample variances. The most important property of our method is that it is distribution-free, i.e., it does not depend on any distributional assumption about the population. This makes it a promising benchmarking method for stochastic optimization algorithms. Examples are provided to illustrate how to use the method to compare stochastic optimization algorithms.
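
A minimal sketch of the profile computation follows: each (solver, problem) cell is summarized by a distribution-free order statistic of its runs (an 80th-percentile bound here, standing in for the paper's general confidence interval), and the usual performance ratios and profile fractions are computed from those bounds. The simulated run times are, of course, fabricated for illustration.

    import numpy as np

    def performance_profile(cost, taus):
        """cost[s][p] = measure for solver s on problem p (lower is better).
        Returns, per tau, the fraction of problems solved within tau * best."""
        cost = np.asarray(cost, dtype=float)
        ratios = cost / cost.min(axis=0)          # Dolan-More-style ratios
        return [(ratios <= tau).mean(axis=1) for tau in taus]

    # Stochastic solvers: summarize each (solver, problem) run set by a
    # distribution-free bound (an order statistic of the sample).
    rng = np.random.default_rng(5)
    runs = rng.gamma(2.0, 1.0, size=(3, 10, 25))  # 3 solvers, 10 problems, 25 runs
    runs[1] *= 1.3                                # solver 1 is ~30% slower
    bound = np.percentile(runs, 80, axis=2)       # 80th percentile per cell
    taus = (1.0, 1.5, 2.0)
    for tau, rho in zip(taus, performance_profile(bound, taus)):
        print(tau, rho.round(2))                  # one curve point per solver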

19.
IEEE Trans Cybern ; 47(9): 2896-2910, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28113797

ABSTRACT

Large-scale optimization has become a significant yet challenging area in evolutionary computation. To address it, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO) that lets several predominant particles guide the learning of each particle. First, a segment-based learning strategy is proposed that randomly divides the whole set of dimensions into segments. During the update, variables in different segments are evolved by learning from different exemplars, while variables in the same segment are evolved by the same exemplar. Second, to accelerate the search and enhance its diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle, with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms for large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Furthermore, the scalability of the optimizer is verified on problems with dimensionality up to 2000.
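
The segment-based predominant learning update can be sketched directly: dimensions are randomly split into segments, and each segment of a particle learns from a different, better-ranked particle. The coefficient, the segment count, and the sphere objective are assumptions of this sketch.

    import numpy as np

    rng = np.random.default_rng(6)

    def sphere(x):
        return float(np.sum(x ** 2))

    def spl_update(X, fit, n_seg=4, c=0.5):
        """Segment-based predominant learning step (coefficients assumed).
        Each segment of a particle learns from a different better particle."""
        n, d = X.shape
        order = np.argsort(fit)                 # ascending: best first
        for rank, i in enumerate(order):
            if rank == 0:
                continue  # the best particle has no dominator to learn from
            dims = rng.permutation(d)
            for seg in np.array_split(dims, n_seg):
                ex = order[rng.integers(0, rank)]   # a predominant exemplar
                X[i, seg] += c * rng.random(len(seg)) * (X[ex, seg] - X[i, seg])
            fit[i] = sphere(X[i])

    X = rng.uniform(-10, 10, size=(40, 30))
    fit = np.array([sphere(x) for x in X])
    for _ in range(300):
        spl_update(X, fit)
    print(round(min(fit), 6))  # best sphere value after 300 generations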

20.
IEEE Trans Cybern ; 45(9): 1798-1810, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25314717

ABSTRACT

Utilizing cumulative correlation information already present in an evolutionary process, this paper proposes a predictive approach to the reproduction of new individuals in differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whereas evolution strategies (ESs) use a centralized model (CM) to generate offspring, which, through adaptation, retains convergence momentum. This paper adopts a key feature of the CM of the covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework termed DEEP, standing for DE with an EP. Rather than mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
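
A schematic blend of the two models might look as follows: a classic DE/rand/1 loop whose mutant vectors also receive a cumulatively learned evolution-path term, updated from the population's mean shift in CMA-ES style. The constants and the blending rule are assumptions of this sketch, not the paper's two DEEP variants.

    import numpy as np

    rng = np.random.default_rng(7)

    def sphere(x):
        return float(np.sum(x ** 2))

    NP, D, F, CR, c = 30, 10, 0.5, 0.9, 0.3
    X = rng.uniform(-5, 5, size=(NP, D))
    fit = np.array([sphere(x) for x in X])
    path = np.zeros(D)                 # cumulatively learned evolution path
    mean_prev = X.mean(axis=0)

    for _ in range(300):
        for i in range(NP):
            a, b, r = rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)
            # DE/rand/1 mutation plus an evolution-path term (schematic blend).
            v = X[a] + F * (X[b] - X[r]) + F * path
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True        # ensure one crossed dimension
            u = np.where(cross, v, X[i])
            fu = sphere(u)
            if fu <= fit[i]:
                X[i], fit[i] = u, fu
        mean_now = X.mean(axis=0)
        path = (1 - c) * path + c * (mean_now - mean_prev)  # CMA-ES-style EP
        mean_prev = mean_now

    print(round(fit.min(), 8))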
