Results 1 - 18 of 18
1.
Evol Comput; 29(1): 157-186, 2021.
Article in English | MEDLINE | ID: mdl-32567957

ABSTRACT

An objective normalization strategy is essential in any evolutionary multiobjective or many-objective optimization (EMO or EMaO) algorithm, due to the distance calculations between objective vectors required to compute diversity and convergence of population members. For the decomposition-based EMO/EMaO algorithms involving the Penalty Boundary Intersection (PBI) metric, normalization is an important matter due to the computation of two distance metrics. In this article, we present a theoretical analysis of the effect of instabilities in the normalization process on the performance of PBI-based MOEA/D and a proposed PBI-based NSGA-III procedure. Although the effect is well recognized in the literature, few theoretical studies have been done so far to understand its true nature and the choice of a suitable penalty parameter value for an arbitrary problem. The developed theoretical results have been corroborated with extensive experimental results on three- to 15-objective convex and non-convex instances of DTLZ and WFG problems. The article makes important theoretical conclusions on PBI-based decomposition algorithms derived from the study.
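As a concrete illustration of the PBI scalarization discussed above, here is a minimal sketch, assuming already-normalized objective vectors; the function name and the default penalty value theta = 5.0 are illustrative, not taken from the paper:

```python
import numpy as np

def pbi(f, z_ideal, w, theta=5.0):
    """Penalty Boundary Intersection value of an objective vector f.

    f       : objective vector of a solution (assumed already normalized)
    z_ideal : ideal point used as origin of the reference direction
    w       : reference (weight) direction of the subproblem
    theta   : penalty parameter trading off convergence (d1) vs. diversity (d2)
    """
    f, z_ideal, w = (np.asarray(v, float) for v in (f, z_ideal, w))
    w_unit = w / np.linalg.norm(w)
    d1 = np.dot(f - z_ideal, w_unit)                 # distance along the reference direction
    d2 = np.linalg.norm(f - z_ideal - d1 * w_unit)   # perpendicular distance from the direction
    return d1 + theta * d2

# A solution lying exactly on the reference direction has d2 = 0
print(pbi(f=[0.4, 0.4], z_ideal=[0.0, 0.0], w=[1.0, 1.0], theta=5.0))
```

The instability studied in the paper enters through z_ideal and the normalization of f: small errors in either shift d1 and d2 and, depending on theta, can change which solution a subproblem prefers.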


Subjects
Algorithms, Biological Evolution, Theoretical Models
2.
Evol Comput; 28(3): 339-378, 2020.
Article in English | MEDLINE | ID: mdl-31120774

ABSTRACT

Multiobjective evolutionary algorithms (MOEAs) have progressed significantly in recent decades, but most of them are designed to solve unconstrained multiobjective optimization problems. In fact, many real-world multiobjective problems contain a number of constraints. To promote research on constrained multiobjective optimization, we first propose a problem classification scheme with three primary types of difficulty, which reflect various types of challenges presented by real-world optimization problems, in order to characterize the constraint functions in constrained multiobjective optimization problems (CMOPs). These are feasibility-hardness, convergence-hardness, and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable CMOPs (DAS-CMOPs, or DAS-CMaOPs when the number of objectives is greater than three) with three types of parameterized constraint functions developed to capture the three proposed types of difficulty. The combination of the three primary constraint functions with different parameters allows the construction of a large variety of CMOPs, whose difficulty can be defined by a triplet, with each of its parameters specifying the level of one of the primary difficulty types. Furthermore, the number of objectives in this toolkit can be scaled beyond three. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs and nine CMaOPs, called DAS-CMOP1-9 and DAS-CMaOP1-9, respectively. To evaluate the proposed test problems, two popular CMOEAs, MOEA/D-CDP (MOEA/D with constraint dominance principle) and NSGA-II-CDP (NSGA-II with constraint dominance principle), and two popular constrained many-objective evolutionary algorithms (CMaOEAs), C-MOEA/DD and C-NSGA-III, are used to compare performance on DAS-CMOP1-9 and DAS-CMaOP1-9, respectively, with a variety of difficulty triplets. The experimental results reveal that mechanisms in MOEA/D-CDP may be more effective in solving convergence-hard DAS-CMOPs, while mechanisms of NSGA-II-CDP may be more effective in solving DAS-CMOPs with simultaneous diversity-, feasibility-, and convergence-hardness. Mechanisms in C-NSGA-III may be more effective in solving feasibility-hard CMaOPs, while mechanisms of C-MOEA/DD may be more effective in solving CMaOPs with convergence-hardness. However, none of them can solve these problems efficiently, which motivates us to continue developing new CMOEAs and CMaOEAs to solve the suggested DAS-CMOPs and DAS-CMaOPs.
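The baseline algorithms named above rely on the constraint dominance principle (CDP). A minimal sketch of that comparison rule (the function names and return convention are illustrative; the DAS-CMOP constraint functions themselves are not reproduced here):

```python
import numpy as np

def pareto_dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def cdp_better(fa, cva, fb, cvb):
    """Constraint dominance principle (CDP) comparison.

    fa, fb   : objective vectors of solutions a and b (minimization)
    cva, cvb : total constraint violations (0 means feasible)
    Returns +1 if a is preferred, -1 if b is preferred, 0 if incomparable.
    """
    if cva == 0 and cvb == 0:        # both feasible: plain Pareto dominance decides
        if pareto_dominates(fa, fb):
            return 1
        if pareto_dominates(fb, fa):
            return -1
        return 0
    if cva == 0:                     # only a is feasible
        return 1
    if cvb == 0:                     # only b is feasible
        return -1
    # both infeasible: the smaller total violation wins
    return 1 if cva < cvb else (-1 if cvb < cva else 0)

print(cdp_better([1.0, 2.0], 0.0, [2.0, 1.0], 0.3))   # feasible a beats infeasible b
```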


Subjects
Algorithms, Biological Evolution, Computational Biology, Computer Simulation
3.
Evol Comput; 25(3): 439-471, 2017.
Article in English | MEDLINE | ID: mdl-27070282

ABSTRACT

Over recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the center of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resulting method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
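One of the building blocks named above, Ursem's hill-valley test, decides whether two solutions share a basin by sampling along the segment between them. A minimal sketch for minimization (the sample count, function names, and toy function are illustrative):

```python
import numpy as np

def same_basin(x1, x2, f, n_samples=5):
    """Ursem's hill-valley test (minimization version).

    Samples interior points on the segment between x1 and x2; if some sample is
    worse than both endpoints, a 'hill' separates them and the two points are
    assumed to lie in different basins of attraction.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    worst = max(f(x1), f(x2))
    for t in np.linspace(0.0, 1.0, n_samples + 2)[1:-1]:   # interior points only
        if f(x1 + t * (x2 - x1)) > worst:
            return False
    return True

# Bimodal 1-D example with minima near x = -1 and x = +1
f = lambda x: (x[0] ** 2 - 1.0) ** 2
print(same_basin([-1.0], [1.0], f))   # False: a hill at x = 0 separates the basins
print(same_basin([0.8], [1.2], f))    # True: both points sit in the basin around x = +1
```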


Subjects
Algorithms, Biological Evolution, Computational Biology/methods, Biological Models, Humans
4.
Evol Comput; 22(3): 439-477, 2014.
Article in English | MEDLINE | ID: mdl-24364674

ABSTRACT

In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by the interaction of the two levels. As a companion to the test-problem construction framework, the paper presents a standard test suite of 12 problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison when evaluating the performance of other bilevel optimization algorithms. The code related to the paper may be accessed at http://bilevel.org.
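The baseline solver mentioned above is a nested bilevel evolutionary algorithm: every upper-level candidate is evaluated only after (approximately) solving the lower-level problem it induces. A minimal sketch of that nested evaluation on a hypothetical toy problem (the objective functions and the use of scipy's local optimizer are illustrative, not the paper's test problems):

```python
import numpy as np
from scipy.optimize import minimize

def upper_objective(xu, xl):
    """Hypothetical upper-level objective F(xu, xl), to be minimized."""
    return np.sum(xu ** 2) + np.sum((xl - 1.0) ** 2)

def lower_objective(xl, xu):
    """Hypothetical lower-level objective f(xu, xl), to be minimized over xl."""
    return np.sum((xl - xu) ** 2)

def evaluate_upper(xu, n_lower_vars=2):
    """Nested evaluation: score an upper-level candidate xu at the
    (approximate) optimum xl*(xu) of the lower-level problem it induces."""
    xu = np.asarray(xu, float)
    res = minimize(lower_objective, x0=np.zeros(n_lower_vars), args=(xu,))
    xl_star = res.x                    # approximate lower-level optimal response
    return upper_objective(xu, xl_star), xl_star

F_value, xl_star = evaluate_upper([0.5, 0.5])
print(F_value, xl_star)                # xl*(xu) ≈ xu for this toy lower level
```

This nesting is exactly what makes bilevel optimization expensive: one lower-level optimization per upper-level evaluation.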


Subjects
Algorithms, Computing Methodologies, Theoretical Models, Problem Solving, Computer Simulation
5.
Evol Comput; 20(1): 27-62, 2012.
Article in English | MEDLINE | ID: mdl-21591888

ABSTRACT

In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user can have better knowledge about different optimal solutions in the search space and, when needed, switch the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, new algorithms have been proposed steadily. Most of these methodologies employ a niching scheme in an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are de-emphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved up to 16-variable test problems having as many as 48 optimal solutions and, for the first time, suggested multimodal constrained test problems that are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and, more importantly, it opens up further avenues for research and application.
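The key idea above is to add an artificial second objective so that every optimum of the original problem becomes weakly Pareto-optimal. The paper studies several such formulations; the sketch below uses one simple illustrative choice (the numerical gradient norm, which vanishes at every optimum) and is not necessarily the formulation used in the paper:

```python
import numpy as np

def gradient_norm(f, x, eps=1e-6):
    """Magnitude of the numerical gradient of f at x (central differences)."""
    x = np.asarray(x, float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2.0 * eps)
    return np.linalg.norm(g)

def bi_objective(f, x):
    """Reformulate a single-objective multimodal problem as two objectives:
    (f(x), ||grad f(x)||). The second objective is zero at every optimum, so
    all optima become (weakly) Pareto-optimal in the reformulated problem."""
    return float(f(np.asarray(x, float))), gradient_norm(f, x)

f = lambda x: np.sin(5 * np.pi * x[0]) ** 2     # minima at x = 0, 0.2, 0.4, ...
print(bi_objective(f, [0.2]))                   # at an optimum: both objectives ≈ 0
print(bi_objective(f, [0.15]))                  # away from the optima: second objective is large
```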


Subjects
Algorithms, Theoretical Models, Search Engine, Software, Biological Evolution, Computer Simulation
6.
IEEE Trans Cybern; PP, 2022 Jun 23.
Article in English | MEDLINE | ID: mdl-35737627

ABSTRACT

Black-box artificial intelligence (AI) induction methods such as deep reinforcement learning (DRL) are increasingly being used to find optimal policies for a given control task. Although policies represented using a black-box AI are capable of efficiently executing the underlying control task and achieving optimal closed-loop performance (controlling the agent from the initial time step until the successful termination of an episode), the developed control rules are often complex and neither interpretable nor explainable. In this article, we use a recently proposed nonlinear decision-tree (NLDT) approach to find a hierarchical set of control rules in an attempt to maximize the open-loop performance for approximating and explaining the pretrained black-box DRL (oracle) agent using a labeled state-action dataset. Recent advances in nonlinear optimization approaches using evolutionary computation facilitate finding a hierarchical set of nonlinear control rules as a function of state variables using a computationally fast bilevel optimization procedure at each node of the proposed NLDT. In addition, we propose a reoptimization procedure for enhancing the closed-loop performance of an already derived NLDT. We evaluate our proposed methodologies (open- and closed-loop NLDTs) on different control problems having multiple discrete actions. In all these problems, our proposed approach is able to find relatively simple and interpretable rules involving one to four nonlinear terms per rule, while simultaneously achieving on-par closed-loop performance compared to a trained black-box DRL agent. A postprocessing approach for simplifying the NLDT is also suggested. The obtained results are inspiring, as they suggest the replacement of complicated black-box DRL policies involving thousands of parameters (making them non-interpretable) with relatively simple interpretable policies. The results are encouraging and motivate further applications of the proposed approach to more complex control tasks.

7.
IEEE Trans Cybern; 51(11): 5573-5584, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33259315

ABSTRACT

For supervised classification problems involving design, control, and other practical purposes, users are not only interested in finding a highly accurate classifier; they also demand that the obtained classifier be easily interpretable. While the definition of interpretability of a classifier can vary from case to case, here we restrict a humanly interpretable classifier to one expressed in simple mathematical terms. As a novel approach, we represent a classifier as an assembly of simple mathematical rules using a nonlinear decision tree (NLDT). Each conditional (nonterminal) node of the tree represents a nonlinear mathematical rule (split-rule) involving features in order to partition the dataset at that node into two nonoverlapping subsets. This partitioning is intended to minimize the impurity of the resulting child nodes. By restricting the structure of the split-rule at each conditional node and the depth of the decision tree, the interpretability of the classifier is ensured. The nonlinear split-rule at a given conditional node is obtained using an evolutionary bilevel optimization algorithm, in which the upper level focuses on arriving at an interpretable structure of the split-rule, while the lower level determines the most appropriate weights (coefficients) of the individual constituents of the rule to minimize the net impurity of the two resulting child nodes. The performance of the proposed algorithm is demonstrated on a number of controlled test problems, existing benchmark problems, and industrial problems. Results on problems with 2 to 500 features are encouraging and open up further scope for applying the proposed approach to more challenging and complex classification tasks.
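The lower-level task described above, scoring a candidate split-rule by the impurity of the two child nodes it produces, can be sketched as follows (the power-law rule structure, Gini impurity, and all names are illustrative; a bilevel search of the kind described would optimize the exponents at the upper level and the weights at the lower level):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_impurity(X, y, weights, exponents, bias):
    """Net impurity of the two child nodes produced by a nonlinear split-rule
    of the form  sum_i weights[i] * prod_j x_j ** exponents[i, j] + bias <= 0.

    The rule structure (exponents) and coefficients (weights, bias) are the
    kinds of quantities a bilevel search would optimize; here they are inputs.
    Exponents are kept integer-valued so the terms stay real for negative features.
    """
    terms = np.array([sum(w * np.prod(x ** e) for w, e in zip(weights, exponents))
                      for x in X]) + bias
    left, right = y[terms <= 0], y[terms > 0]
    return (len(left) * gini(left) + len(right) * gini(right)) / len(y)

# Toy example: rule  1.0 * x0**2 - 0.25 <= 0  (i.e., |x0| <= 0.5) matches the labels exactly
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.abs(X[:, 0]) <= 0.5).astype(int)
print(split_impurity(X, y, weights=[1.0], exponents=np.array([[2.0, 0.0]]), bias=-0.25))
```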

8.
IEEE Trans Pattern Anal Mach Intell; 43(9): 2971-2989, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33465025

ABSTRACT

Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture Transfer (NAT) to overcome this limitation. NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives. To realize this goal, we learn task-specific supernets from which specialized subnets can be sampled without any additional training. The key to our approach is an integrated online transfer learning and many-objective evolutionary search procedure. A pre-trained supernet is iteratively adapted while simultaneously searching for task-specific subnets. We demonstrate the efficacy of NAT on 11 benchmark image classification tasks ranging from large-scale multi-class to small-scale fine-grained datasets. In all cases, including ImageNet, NATNets improve upon the state-of-the-art under mobile settings (≤600M Multiply-Adds). Surprisingly, small-scale fine-grained datasets benefit the most from NAT. At the same time, the architecture search and transfer is orders of magnitude more efficient than existing NAS methods. Overall, the experimental evaluation indicates that, across diverse image classification tasks and computational objectives, NAT is an appreciably more effective alternative to conventional transfer learning, i.e., fine-tuning the weights of an existing network architecture learned on standard datasets. Code is available at https://github.com/human-analysis/neural-architecture-transfer.

9.
Evol Comput; 18(3): 403-449, 2010.
Article in English | MEDLINE | ID: mdl-20560758

ABSTRACT

Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper-level solution must correspond to an optimal solution of a lower-level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, and transportation problems. However, they are commonly converted into a single-level optimization problem by using an approximate solution procedure to replace the lower-level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies consider multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche for evolutionary algorithms in solving such difficult problems of practical importance, compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and we hope it will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.


Subjects
Algorithms, Biological Evolution, Statistical Models, Software
10.
IEEE Trans Cybern; 49(3): 859-869, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29994360

ABSTRACT

Nondominated sorting is a key operation used in multiobjective evolutionary algorithms (MOEAs). Its worst-case time complexity is O(MN^2), where N is the number of solutions and M is the number of objectives. For stochastic algorithms like MOEAs, it is important to devise an algorithm with better average-case performance. In this paper, we propose a new algorithm that makes use of a faster scalar sorting algorithm to perform nondominated sorting. It finds partial orders of each solution from all objectives and uses these orders to skip unnecessary solution comparisons. We also propose a specific order of objectives that reduces objective comparisons. The proposed method introduces a weighted binary search over the fronts when the rank of a solution is determined. It further reduces the total computational effort by a large factor when there is a large number of fronts. We prove that the worst-case complexity can be reduced to Θ(MN C_max log2(F+1)), where F is the number of fronts and C_max is the maximum number of solutions per front; in general, however, our worst-case complexity is still O(MN^2). Our best-case time complexity is O(MN log N). We also achieve the best-case complexity O(MN log N + N^2) when all solutions are in a single front. This method is compared with other state-of-the-art algorithms (efficient nondomination level update, deductive sort, corner sort, efficient nondominated sort, and divide-and-conquer sort) on four different datasets. Experimental results show that our method, named bounded best order sort, is computationally more efficient than all other competing algorithms.
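The core ingredients described above, presorting by a scalar order and a binary search over fronts, can be illustrated with a simplified sequential sort. This is the generic ENS-with-binary-search idea, not the paper's full bounded best order sort, which additionally exploits per-objective orderings to skip comparisons:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_sort_bs(F):
    """Sequential nondominated sorting with a binary search over fronts.

    F : (N, M) array of objective vectors (minimization).
    Returns a list of fronts, each a list of solution indices.
    """
    F = np.asarray(F, float)
    order = np.lexsort(F.T[::-1])          # sort by f1, ties broken by f2, ...
    fronts = []                            # each front stores indices of its members
    for idx in order:
        lo, hi = 0, len(fronts)            # find the first front that does not dominate idx
        while lo < hi:
            mid = (lo + hi) // 2
            if any(dominates(F[j], F[idx]) for j in fronts[mid]):
                lo = mid + 1               # dominated here, so idx belongs to a later front
            else:
                hi = mid
        if lo == len(fronts):
            fronts.append([])
        fronts[lo].append(int(idx))
    return fronts

F = [[1, 4], [2, 3], [3, 2], [2, 5], [4, 4], [5, 1]]
print(nondominated_sort_bs(F))             # [[0, 1, 2, 5], [3, 4]]
```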

11.
IEEE Trans Cybern; 47(9): 2838-2849, 2017 Sep.
Article in English | MEDLINE | ID: mdl-27845681

ABSTRACT

Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in the generational evolution model, where environmental selection is performed after generating a whole population of offspring. However, in a steady-state evolution model, where the population is updated right after the generation of each new candidate, NDS can be extremely time-consuming. This is especially severe when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing NDS from scratch, our method updates the NDLs of only a limited number of solutions by extracting knowledge from the current NDL structure. The NDL update is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared to five other state-of-the-art NDS methods, our proposed method avoids a significant amount of unnecessary comparisons, not only on synthetic datasets but also in some real optimization scenarios. Last but not least, we find that our proposed method is also useful for the generational evolution model.

12.
Bioinspir Biomim; 12(3): 036010, 2017 Apr 19.
Article in English | MEDLINE | ID: mdl-28349896

ABSTRACT

Inspired by the lateral line of aquatic vertebrates, an artificial lateral line (ALL) system can localize and track an underwater moving object by analyzing the ambient flow caused by its motion. There are several studies on object detection, localization, and tracking by ALL systems, but only a few have investigated the optimal design of the ALL system, i.e., the design that on average provides the highest characterization accuracy. Design optimization is particularly important because uncertainties in the employed flow model and in sensor measurements deteriorate the reliability of sensing. This study investigates the optimal design of the ALL system in three-dimensional (3D) space for dipole source characterization. It highlights some challenges specific to the 3D setting and demonstrates the shortcomings of designs in which all sensors and their sensing directions lie in the same plane. As an alternative, it proposes two design concepts, called the 'Offset Strategy' and the 'Angle Strategy', to overcome these shortcomings. It also investigates the potential of a swarm of cooperative ALLs. It performs design optimization in the presence of sensor and model uncertainties and analyzes the trade-off between the number of sensors and characterization accuracy. The obtained solutions are analyzed to reveal their strategies for solving the problem efficiently. The dependency of the optimized solutions on the uncertainties is also demonstrated.


Subjects
Biomimetic Materials, Biomimetics, Equipment Design/standards, Lateral Line System, Water, Animals, Fishes, Hydrodynamics, Immersion, Mechanoreceptors, Reproducibility of Results, Rheology
13.
IEEE Trans Cybern; 45(10): 2076-2088, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25494518

ABSTRACT

Multiobjective evolutionary algorithm based on decomposition (MOEA/D), which bridges traditional optimization techniques and population-based methods, has become an increasingly popular framework for evolutionary multiobjective optimization. It decomposes a multiobjective optimization problem (MOP) into a number of optimization subproblems. Each subproblem is handled by an agent in a collaborative manner. Selection in MOEA/D is a process of agents choosing solutions. In particular, each agent has two requirements on its selected solution: one is convergence toward the efficient front, and the other is distinction from the other agents' choices. This paper suggests addressing these two requirements by defining mutual preferences between subproblems and solutions. A simple yet effective method is then proposed to build an interrelationship between subproblems and solutions based on their mutual preferences. At each generation, this interrelationship is used as a guideline to select the elite solutions that survive as the next parents. By considering the mutual preferences between subproblems and solutions (i.e., the two requirements of each agent), the selection operator is able to balance the convergence and diversity of the search process. Comprehensive experiments are conducted on several MOP test instances with complicated Pareto sets. Empirical results demonstrate the effectiveness and competitiveness of the proposed algorithm.
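One simple way to instantiate the mutual preferences described above (not necessarily the paper's exact definitions): a subproblem prefers solutions with a better aggregation value along its weight vector, while a solution prefers subproblems whose direction it lies closest to:

```python
import numpy as np

def subproblem_pref(f, w, z_ideal):
    """Preference of a subproblem (weight vector w) for solution f:
    weighted Tchebycheff aggregation value; smaller means more preferred
    (better convergence along this subproblem)."""
    f, w, z = (np.asarray(v, float) for v in (f, w, z_ideal))
    return float(np.max(w * np.abs(f - z)))

def solution_pref(f, w, z_ideal):
    """Preference of a solution f for subproblem w: perpendicular distance of
    f - z_ideal from the direction of w; smaller means the solution 'fits'
    this subproblem better (better diversity assignment)."""
    f, w, z = (np.asarray(v, float) for v in (f, w, z_ideal))
    d = f - z
    w_unit = w / np.linalg.norm(w)
    return float(np.linalg.norm(d - np.dot(d, w_unit) * w_unit))

f, z = [0.2, 0.8], [0.0, 0.0]
for w in ([1.0, 0.1], [0.5, 0.5], [0.1, 1.0]):
    print(w, subproblem_pref(f, w, z), solution_pref(f, w, z))
```

An interrelationship can then be built by letting each subproblem and each solution rank the other side with these two preference values, e.g., via a stable-matching-like assignment.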

14.
Biosystems; 72(1-2): 111-129, 2003 Nov.
Article in English | MEDLINE | ID: mdl-14642662

ABSTRACT

In the area of bioinformatics, the identification of gene subsets responsible for classifying available disease samples into two or more of its variants is an important task. Such problems have been solved in the past by means of unsupervised learning methods (hierarchical clustering, self-organizing maps, k-means clustering, etc.) and supervised learning methods (the weighted voting approach, the k-nearest neighbor method, the support vector machine method, etc.). Such problems can also be posed as optimization problems of minimizing gene subset size while achieving reliable and accurate classification. The main difficulties in solving the resulting optimization problem are the availability of only a few samples compared to the number of genes in the samples and the exorbitantly large search space of solutions. Although there exist a few applications of evolutionary algorithms (EAs) for this task, here we treat the problem as a multiobjective optimization problem of minimizing the gene subset size and minimizing the number of misclassified samples. Moreover, for a more reliable classification, we consider multiple training sets in evaluating a classifier. In contrast to past studies, the use of a multiobjective EA (NSGA-II) has enabled us to discover a smaller gene subset size (such as four or five) that correctly classifies 100% or near 100% of the samples for three cancer datasets (Leukemia, Lymphoma, and Colon). We have also extended NSGA-II to obtain multiple non-dominated solutions, discovering as many as 352 different three-gene combinations providing 100% correct classification on the Leukemia data. To have further confidence in the identification task, we have also introduced a prediction strength threshold for determining a sample's membership in one class or the other. All simulation results show consistent gene subset identifications on the three disease datasets and demonstrate the flexibility and efficacy of using a multiobjective EA for the gene subset identification task.
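The bi-objective formulation above (minimize gene subset size and misclassified samples) can be sketched as a fitness evaluation for a binary gene mask; the leave-one-out 1-NN classifier and the single data arrangement below are illustrative stand-ins for the classifiers and multiple training sets used in the paper:

```python
import numpy as np

def misclassifications_1nn(X, y):
    """Leave-one-out 1-nearest-neighbour misclassification count."""
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # exclude the sample itself
        errors += int(y[np.argmin(d)] != y[i])
    return errors

def gene_subset_objectives(mask, X, y):
    """Bi-objective evaluation of a candidate gene subset (binary mask):
    f1 = number of selected genes, f2 = misclassified samples using only
    those genes."""
    mask = np.asarray(mask, bool)
    if not mask.any():
        return 0, len(y)                     # empty subset: every sample counted as misclassified
    return int(mask.sum()), misclassifications_1nn(X[:, mask], y)

# Toy data: 30 samples, 50 "genes", only genes 0 and 1 carry class signal
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 15)
X = rng.normal(size=(30, 50))
X[:, :2] += y[:, None] * 3.0
mask = np.zeros(50, bool)
mask[:2] = True
print(gene_subset_objectives(mask, X, y))    # (2, low misclassification count)
```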


Subjects
Algorithms, Cluster Analysis, Computer-Assisted Diagnosis/methods, Gene Expression Profiling/methods, Genetic Testing/methods, Neoplasms/diagnosis, Neoplasms/genetics, Oligonucleotide Array Sequence Analysis/methods, DNA Sequence Analysis/methods, Colonic Neoplasms/classification, Colonic Neoplasms/diagnosis, Colonic Neoplasms/genetics, Neoplastic Gene Expression Regulation/genetics, Humans, Leukemia/classification, Leukemia/diagnosis, Leukemia/genetics, Lymphoma/classification, Lymphoma/diagnosis, Lymphoma/genetics, Neoplasms/classification, Automated Pattern Recognition, Reproducibility of Results, Sensitivity and Specificity, Sequence Alignment/methods
15.
Evol Comput; 14(4): 463-494, 2006.
Article in English | MEDLINE | ID: mdl-17109607

ABSTRACT

In optimization studies, including multi-objective optimization, the main focus is placed on finding the global optimum or global Pareto-optimal solutions, representing the best possible objective values. However, in practice, users may not always be interested in finding the so-called global best solutions, particularly when these solutions are quite sensitive to variable perturbations that cannot be avoided. In such cases, practitioners are interested in finding robust solutions that are less sensitive to small perturbations in variables. Although robust optimization has been dealt with in detail in single-objective evolutionary optimization studies, in this paper we present two different robust multi-objective optimization procedures, where the emphasis is on finding a robust frontier, instead of the global Pareto-optimal frontier, in a problem. The first procedure is a straightforward extension of a technique used for single-objective optimization, and the second procedure is a more practical approach enabling a user to set the extent of robustness desired in a problem. To demonstrate the differences between global and robust multi-objective optimization principles and the differences between the two robust optimization procedures suggested here, we develop a number of constrained and unconstrained test problems having two and three objectives and show simulation results using an evolutionary multi-objective optimization (EMO) algorithm. Finally, we also apply both robust optimization methodologies to an engineering design problem.
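The first of the two procedures mentioned above extends a standard single-objective robustness idea: replace each objective by its average over a neighborhood of the decision vector. A minimal sketch (the function names, perturbation model, and toy problem are illustrative, and the second, user-controlled procedure is not shown):

```python
import numpy as np

def mean_effective_objectives(f, x, delta=0.05, n_samples=50, rng=None):
    """Mean-effective (type-I robust) objectives: average the objective vector
    f over random perturbations of x within a +/- delta neighbourhood. A robust
    frontier is obtained by optimizing these averaged objectives instead of the
    original ones."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, float)
    samples = x + rng.uniform(-delta, delta, size=(n_samples, len(x)))
    return np.mean([f(xi) for xi in samples], axis=0)

# Toy bi-objective function: f2 has a narrow, perturbation-sensitive dip at x = 0.2
def f(x):
    f1 = x[0]
    f2 = 1.0 - x[0] - 2.0 * np.exp(-((x[0] - 0.2) / 0.01) ** 2)
    return np.array([f1, f2])

print(f(np.array([0.2])))                             # nominal objectives: deep dip in f2
print(mean_effective_objectives(f, np.array([0.2])))  # averaged objectives: the dip mostly vanishes
```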


Subjects
Algorithms, Biological Evolution, Computational Biology, Genetic Models
16.
Evol Comput; 13(4): 501-525, 2005.
Article in English | MEDLINE | ID: mdl-16297281

ABSTRACT

Since the suggestion of a computing procedure for multiple Pareto-optimal solutions in multi-objective optimization problems in the early nineties, researchers have been on the lookout for a procedure that is computationally fast and simultaneously capable of finding a well-converged and well-distributed set of solutions. Most multi-objective evolutionary algorithms (MOEAs) developed in the past decade are either good at achieving a well-distributed set of solutions at the expense of a large computational effort or computationally fast at the expense of achieving a not-so-good distribution of solutions. For example, although the Strength Pareto Evolutionary Algorithm, or SPEA (Zitzler and Thiele, 1999), produces a much better distribution compared to the elitist non-dominated sorting GA, or NSGA-II (Deb et al., 2002a), the computational time needed to run SPEA is much greater. In this paper, we evaluate a recently proposed steady-state MOEA (Deb et al., 2003) which was developed based on the epsilon-dominance concept introduced earlier (Laumanns et al., 2002) and which uses efficient parent and archive update strategies for achieving a well-distributed and well-converged set of solutions quickly. Based on an extensive comparative study with four other state-of-the-art MOEAs on a number of two-, three-, and four-objective test problems, it is observed that the steady-state MOEA is a good compromise in terms of convergence near the Pareto-optimal front, diversity of solutions, and computational time. Moreover, the epsilon-MOEA is a step closer towards making MOEAs pragmatic, particularly by allowing a decision-maker to control the achievable accuracy in the obtained Pareto-optimal solutions.


Subjects
Algorithms, Biological Evolution, Computer Simulation, Theoretical Models, Time Factors
17.
Evol Comput; 10(4): 371-395, 2002.
Article in English | MEDLINE | ID: mdl-12450456

ABSTRACT

Due to increasing interest in solving real-world optimization problems using evolutionary algorithms (EAs), researchers have recently developed a number of real-parameter genetic algorithms (GAs). In these studies, the main research effort is spent on developing an efficient recombination operator. Such recombination operators use probability distributions around the parent solutions to create an offspring. Some operators emphasize solutions at the center of mass of parents and some around the parents. In this paper, we propose a generic parent-centric recombination operator (PCX) and a steady-state, elite-preserving, scalable, and computationally fast population-alteration model (we call the G3 model). The performance of the G3 model with the PCX operator is investigated on three commonly used test problems and is compared with a number of evolutionary and classical optimization algorithms including other real-parameter GAs with the unimodal normal distribution crossover (UNDX) and the simplex crossover (SPX) operators, the correlated self-adaptive evolution strategy, the covariance matrix adaptation evolution strategy (CMA-ES), the differential evolution technique, and the quasi-Newton method. The proposed approach is found to consistently and reliably perform better than all other methods used in the study. A scale-up study with problem sizes up to 500 variables shows a polynomial computational complexity of the proposed approach. This extensive study clearly demonstrates the power of the proposed technique in tackling real-parameter optimization problems.
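A simplified sketch of the parent-centric recombination (PCX) step described above; parameter values are illustrative, and the sketch perturbs the full perpendicular subspace rather than only a subset of orthonormal directions, which is a simplification of the operator:

```python
import numpy as np

def pcx(parents, p=0, sigma_zeta=0.1, sigma_eta=0.1, rng=None):
    """Simplified parent-centric recombination (PCX).

    parents    : (mu, n) array of parent vectors
    p          : index of the parent the offspring is centred on
    sigma_zeta : std. dev. of the perturbation along d = x_p - g
    sigma_eta  : std. dev. of the perturbation orthogonal to d
    """
    rng = np.random.default_rng() if rng is None else rng
    parents = np.asarray(parents, float)
    mu, n = parents.shape
    g = parents.mean(axis=0)                    # centroid of the chosen parents
    d = parents[p] - g                          # direction from centroid to the index parent
    d_norm = np.linalg.norm(d)
    if d_norm < 1e-12:                          # degenerate case: plain Gaussian jitter
        return parents[p] + rng.normal(0.0, sigma_eta, n)
    d_unit = d / d_norm
    # Average perpendicular distance of the other parents from the d direction
    others = np.delete(parents, p, axis=0) - g
    perp = others - np.outer(others @ d_unit, d_unit)
    D_bar = np.mean(np.linalg.norm(perp, axis=1))
    # Orthonormal basis of the subspace perpendicular to d (via a QR factorization)
    basis = np.linalg.qr(np.column_stack([d_unit, np.eye(n)]))[0][:, 1:]
    y = parents[p] + rng.normal(0.0, sigma_zeta) * d
    y += basis @ rng.normal(0.0, sigma_eta * D_bar, basis.shape[1])
    return y

parents = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0]])
print(pcx(parents, p=0, rng=np.random.default_rng(0)))
```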


Subjects
Algorithms, Biological Evolution, Computing Methodologies, Theoretical Models, Reproducibility of Results
18.
Evol Comput; 10(3): 263-282, 2002.
Article in English | MEDLINE | ID: mdl-12227996

ABSTRACT

Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. Many studies have depicted different ways evolutionary algorithms can progress towards the Pareto-optimal set with a widely spread distribution of solutions. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper, we discuss why a number of earlier MOEAs do not have such properties. Based on the concept of epsilon-dominance, new archiving strategies are proposed that overcome this fundamental problem and provably lead to MOEAs that have both the desired convergence and distribution properties. A number of modifications to the baseline algorithm are also suggested. The concept of epsilon-dominance introduced in this paper is practical and should make the proposed algorithms useful to researchers and practitioners alike.
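A minimal sketch of an epsilon-dominance (epsilon-box) archiving strategy of the kind analyzed above, for minimization; the box-level tie-breaking shown here is one common variant, and the names are illustrative:

```python
import numpy as np

def box_index(f, eps):
    """Epsilon-box identification array of an objective vector (minimization)."""
    return tuple(np.floor(np.asarray(f, float) / eps).astype(int).tolist())

def dominates(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, f, eps):
    """Epsilon-dominance archive update (box variant, minimization).

    archive : dict mapping box index -> objective vector retained for that box
    f       : objective vector of the new candidate
    At most one solution is kept per epsilon-box, and candidates whose box is
    dominated are rejected, which bounds the archive size."""
    f = np.asarray(f, float)
    b = box_index(f, eps)
    for bb in list(archive):
        if dominates(bb, b):                  # candidate's box is dominated: reject
            return False
        if dominates(b, bb):                  # candidate's box dominates: evict the old entry
            del archive[bb]
    if b in archive:                          # same box: keep the entry closer to the box corner
        corner = np.array(b) * eps
        if dominates(archive[b], f) or np.linalg.norm(archive[b] - corner) <= np.linalg.norm(f - corner):
            return False
    archive[b] = f
    return True

archive = {}
for f in ([1.00, 4.00], [1.02, 3.98], [3.0, 3.0], [4.0, 1.0]):
    update_archive(archive, f, eps=0.1)
print(archive)                                # one retained solution per nondominated epsilon-box
```

Because at most one solution is kept per epsilon-box and dominated boxes are discarded, the archive size stays bounded while convergence and spread are preserved, which is the kind of property the paper establishes formally.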


Subjects
Algorithms, Biological Evolution, Biological Models, Computer Simulation, Genetic Variation