ABSTRACT
The highly multimodal Rastrigin test function is analyzed by deriving a new aggregated progress rate measure. It is derived as a function of the residual distance to the optimizer by assuming normally distributed positional coordinates around the global optimizer. This assumption is justified for successful ES runs operating with sufficiently slow step-size adaptation. The measure enables the investigation of further convergence properties. For moderately large mutation strengths, a characteristic distance-dependent Rastrigin noise floor is derived. For small mutation strengths, local attraction is analyzed and an escape condition is established. Combined, both mutation strength regimes pose a major challenge when optimizing the Rastrigin function, which can be counteracted by increasing the population size. Hence, a population scaling relation for achieving high global convergence rates is derived, which shows good agreement with experimental data.
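To make the setting concrete, the following minimal Python sketch evaluates the standard Rastrigin function (with the usual constant A = 10 and the global optimizer at the origin, both assumptions about the exact formulation used in the paper) for parental positions whose coordinates are drawn i.i.d. from N(0, R^2/N), so that the expected squared residual distance equals R^2, mirroring the normality assumption stated above.

import numpy as np

A = 10.0  # assumed Rastrigin constant

def rastrigin(x):
    # standard Rastrigin: f(x) = A*N + sum_i (x_i^2 - A*cos(2*pi*x_i)), optimum at the origin
    return A * x.size + np.sum(x**2 - A * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(1)
N, R, samples = 100, 5.0, 10_000
X = rng.normal(0.0, R / np.sqrt(N), size=(samples, N))   # coordinates ~ N(0, R^2/N)
print("mean squared distance :", np.mean(np.sum(X**2, axis=1)), "(target:", R**2, ")")
print("mean Rastrigin value  :", np.mean([rastrigin(x) for x in X]))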
ABSTRACT
A model is presented that allows for the calculation of the success probability by which a vanilla Evolution Strategy converges to the global optimizer of the Rastrigin test function. As a result, a population size scaling formula is derived that allows the population size needed to ensure a high convergence reliability to be estimated as a function of the search space dimensionality.
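As a rough illustration of how such a success probability can be measured empirically, the sketch below runs a toy (µ/µI, λ)-ES on the Rastrigin function and records the fraction of runs whose final centroid lands in the global attractor basin. The geometric step-size schedule, the truncation ratio µ = λ/4, the success criterion |y_i| < 0.5, and all numeric settings are illustrative assumptions, not the strategy or the scaling formula analyzed in the paper.

import numpy as np

def rastrigin(X, A=10.0):
    return A * X.shape[-1] + np.sum(X**2 - A * np.cos(2.0 * np.pi * X), axis=-1)

def run_es(lam, N=20, gens=250, sigma0=2.0, decay=0.98, rng=None):
    rng = rng or np.random.default_rng()
    mu = max(1, lam // 4)
    y = rng.uniform(-4.0, 4.0, size=N)          # random initial parent
    sigma = sigma0
    for _ in range(gens):
        Z = rng.normal(size=(lam, N))
        offspring = y + sigma * Z
        idx = np.argsort(rastrigin(offspring))[:mu]
        y = offspring[idx].mean(axis=0)          # intermediate recombination
        sigma *= decay                           # assumed geometric step-size schedule
    return np.all(np.abs(y) < 0.5)               # ended inside the global basin?

rng = np.random.default_rng(2)
for lam in (10, 50, 200):
    p = np.mean([run_es(lam, rng=rng) for _ in range(20)])
    print(f"lambda = {lam:3d}   empirical success probability ~ {p:.2f}")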
ABSTRACT
A first and second order progress rate analysis was conducted for the intermediate multi-recombinative Evolution Strategy (µ/µI, λ)-ES with isotropic scale-invariant mutations on the highly multimodal Rastrigin test function. Closed-form analytic solutions for the progress rates are obtained in the limit of large dimensionality and large populations. The first order results are able to model the one-generation progress including local attraction phenomena. Furthermore, a second order progress rate is derived, yielding additional correction terms and further improving the progress model. The obtained results are compared to simulations and show good agreement, even for moderately large populations and dimensionality. The progress rates are applied within a dynamical systems approach, which models the evolution using difference equations. The obtained dynamics are compared to real averaged optimization runs and yield good agreement. The results improve further when dimensionality and population size are increased. Local and global convergence are investigated within the given model, showing that large mutation strengths are needed to maximize the probability of global convergence, which comes at the expense of efficiency. An outlook regarding future research goals is provided.
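The dynamical systems approach mentioned above iterates the residual distance via a one-generation difference equation of the form R^(g+1) = R^(g) * (1 - phi*/N). The sketch below illustrates this iteration scheme; for simplicity it plugs in the well-known normalized sphere progress rate phi* = c_{µ/µ,λ} * sigma* - sigma*^2/(2µ) as a stand-in (the paper derives Rastrigin-specific first and second order rates instead), with the progress coefficient c_{µ/µ,λ} estimated by Monte Carlo.

import numpy as np

def c_mu_lam(mu, lam, rng, samples=20_000):
    """Monte Carlo estimate of the progress coefficient c_{mu/mu,lambda}."""
    z = np.sort(rng.normal(size=(samples, lam)), axis=1)[:, lam - mu:]
    return z.mean()

rng = np.random.default_rng(3)
N, mu, lam = 100, 30, 120
c = c_mu_lam(mu, lam, rng)
sigma_star = mu * c                                       # (near-)optimal normalized mutation strength on the sphere
phi_star = c * sigma_star - sigma_star**2 / (2.0 * mu)    # normalized one-generation progress
R = 10.0
for g in range(6):
    print(f"g = {g}:  R = {R:.4e}")
    R *= (1.0 - phi_star / N)                             # mean-value difference equation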
ABSTRACT
A first order progress rate is derived for the intermediate multi-recombinative Evolution Strategy (µ/µI, λ)-ES on the highly multimodal Rastrigin test function. The progress is derived within a linearized model applying the method of so-called noisy order statistics. To this end, the mutation-induced variance of the Rastrigin function is determined. The obtained progress approximation is compared to simulations, revealing its strengths and limitations depending on the mutation strength and the distance to the optimizer. Furthermore, the progress is iterated using the dynamical systems approach and compared to averaged optimization runs. The property of global convergence within the given approximation is discussed. As an outlook, the need for an improved first order progress rate, as well as for an extension to higher order progress including positional fluctuations, is explained.
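A quick way to cross-check the mutation-induced variance mentioned above is plain Monte Carlo: fix a parent y at residual distance R and estimate Var_z[f(y + sigma*z)] with z ~ N(0, I). The parent position, dimensionality, and mutation strengths below are illustrative assumptions; the paper determines this variance analytically.

import numpy as np

def rastrigin(X, A=10.0):
    return A * X.shape[-1] + np.sum(X**2 - A * np.cos(2.0 * np.pi * X), axis=-1)

rng = np.random.default_rng(4)
N, R, samples = 50, 3.0, 100_000
y = np.full(N, R / np.sqrt(N))        # parent with ||y|| = R (arbitrary illustrative choice)
for sigma in (0.01, 0.1, 0.3, 1.0):
    f = rastrigin(y + sigma * rng.normal(size=(samples, N)))
    print(f"sigma = {sigma:4.2f}   mutation-induced variance ~ {f.var():.3e}")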
ABSTRACT
Theoretical analyses of evolution strategies are indispensable for gaining a deep understanding of their inner workings. For constrained problems, current research focuses on rather simple problem formulations. This work presents a theoretical analysis of a multi-recombinative evolution strategy with cumulative step size adaptation applied to a conically constrained linear optimization problem. The state of the strategy is modeled by random variables and a stochastic iterative mapping is introduced. For the analytical treatment, fluctuations are neglected and the mean value iterative system is considered. Nonlinear difference equations are derived based on one-generation progress rates. Based on these, expressions for the steady state of the mean value iterative system are derived. By comparison with real algorithm runs, it is shown that, under the considered assumptions, the theoretical derivations are able to predict the dynamics and the steady state values of the real runs.
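For orientation, a conically constrained linear problem of the kind referred to above can be written as minimizing f(x) = x_1 subject to a cone constraint around the x_1-axis whose apex (the origin) is the optimizer. The concrete formulation and the opening parameter xi in the sketch below are assumptions made for illustration and need not match the paper's exact definitions.

import numpy as np

xi = 1.0  # assumed cone opening parameter

def f(x):
    return x[0]                                            # linear objective

def feasible(x):
    # assumed cone constraint: x_1 >= 0 and x_1^2 >= xi * sum_{i>=2} x_i^2
    return x[0] >= 0.0 and x[0]**2 >= xi * np.sum(x[1:]**2)

x = np.array([2.0, 0.5, -0.5, 1.0])
print(f(x), feasible(x))    # 2.0 True, since 2^2 = 4 >= 1.5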
Subjects
Algorithms, Biological Evolution, Computational Biology, Computer Simulation, Linear Models, Stochastic Processes
ABSTRACT
The behavior of the (µ/µI, λ)-Evolution Strategy (ES) with cumulative step size adaptation (CSA) on the ellipsoid model is investigated using dynamic systems analysis. At first, a nonlinear system of difference equations is derived that describes the mean value evolution of the ES. This system is successively simplified to finally allow for deriving closed-form solutions of the steady state behavior in the asymptotic limit case of large search space dimensions. It is shown that the system exhibits linear convergence order. The steady state mutation strength is calculated, and it is shown that, compared to standard settings in σ-self-adaptive ESs, the CSA control rule allows for an approximately µ-fold larger mutation strength. This explains the superior performance of the CSA in non-noisy environments. The results are used to derive a formula for the expected running time. Conclusions regarding the choice of the cumulation parameter c and the damping constant D are drawn.
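For readers unfamiliar with CSA, the following sketch shows one common (µ/µI, λ)-CSA-ES variant applied to an ellipsoid instance f(x) = sum_i i * x_i^2. The cumulation parameter c = 1/sqrt(N), the damping D = sqrt(N), and the ellipsoid coefficients are standard-style, illustrative assumptions and not necessarily the settings analyzed in the article.

import numpy as np

def ellipsoid(X):
    N = X.shape[-1]
    return np.sum(np.arange(1, N + 1) * X**2, axis=-1)

rng = np.random.default_rng(5)
N, mu, lam = 30, 3, 12
c, D = 1.0 / np.sqrt(N), np.sqrt(N)
chi_N = np.sqrt(N) * (1.0 - 1.0 / (4 * N) + 1.0 / (21 * N**2))   # approx. E||N(0, I)||

y, sigma, s = rng.normal(size=N), 1.0, np.zeros(N)
for g in range(2000):
    Z = rng.normal(size=(lam, N))
    idx = np.argsort(ellipsoid(y + sigma * Z))[:mu]
    z_mean = Z[idx].mean(axis=0)                                  # recombined mutation vector
    y = y + sigma * z_mean                                        # intermediate recombination
    s = (1.0 - c) * s + np.sqrt(mu * c * (2.0 - c)) * z_mean      # cumulative search path
    sigma *= np.exp((np.linalg.norm(s) - chi_N) / (D * chi_N))    # CSA step-size update
print("f(y) after 2000 generations:", ellipsoid(y))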
Subjects
Algorithms, Biological Evolution, Statistical Models, Computational Biology, Computer Simulation, Linear Models, Genetic Models, Mutation, Nonlinear Dynamics, Stochastic Processes
ABSTRACT
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy for the optimization dynamics are investigated by considering the optimization performance on the class of positive definite quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by means of utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that, in the asymptotic limit and up to a scalar factor, equals the inverse of the Hessian of the objective function considered.
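The role of rank-based utility functions can be illustrated with a minimal natural-gradient update of the mean of an isotropic Gaussian search distribution on a simple quadratic. The weights, learning rate, and fixed mutation strength below are illustrative assumptions, not a complete NES/IGO implementation (in particular, no covariance adaptation is shown).

import numpy as np

def sphere(X):
    return np.sum(X**2, axis=-1)

rng = np.random.default_rng(6)
N, lam, sigma, eta = 10, 50, 0.3, 1.0
# rank-based utility weights (a common, assumed choice; best sample gets the largest weight)
w = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(np.arange(1, lam + 1)))
w = w / w.sum() - 1.0 / lam

m = np.full(N, 2.0)
for _ in range(200):
    X = m + sigma * rng.normal(size=(lam, N))     # sample the search distribution
    order = np.argsort(sphere(X))                 # rank samples, best first
    m = m + eta * (w @ (X[order] - m))            # natural-gradient step for the mean
print("f(m) after 200 iterations:", sphere(m))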
Subjects
Algorithms, Computing Methodologies, Information Theory, Theoretical Models, Mathematics
ABSTRACT
To theoretically compare the behavior of different algorithms, compatible performance measures are necessary. Thus, in the first part, an analysis approach developed for evolution strategies was applied to simultaneous perturbation stochastic approximation (SPSA) on the noisy sphere model. A considerable advantage of this approach is that convergence results for non-noisy and noisy optimization can be obtained simultaneously. In addition to the convergence rates, optimal step sizes and convergence criteria were derived for three different noise models. These results were validated by simulation experiments. Afterward, the results were used for a comparison with evolution strategies on the sphere model in combination with the three noise models. It was shown that both strategies perform similarly, with a slight advantage for SPSA if optimal settings are used and the noise strength is not too large.
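For reference, the following sketch shows a plain SPSA iteration on a noisy sphere; the gain sequences a_k and c_k are textbook-style choices and, together with the noise model, are assumptions rather than the optimal settings derived in the study.

import numpy as np

def noisy_sphere(x, eps, rng):
    return np.sum(x**2) + eps * rng.normal()            # sphere plus additive Gaussian noise

rng = np.random.default_rng(7)
N, eps = 20, 0.1
theta = np.full(N, 1.0)
for k in range(1, 2001):
    a_k = 0.5 / (k + 10.0)                              # assumed step-size (gain) sequence
    c_k = 0.1 / k**0.25                                 # assumed perturbation-size sequence
    delta = rng.choice([-1.0, 1.0], size=N)             # Rademacher perturbation directions
    y_plus = noisy_sphere(theta + c_k * delta, eps, rng)
    y_minus = noisy_sphere(theta - c_k * delta, eps, rng)
    g_hat = (y_plus - y_minus) / (2.0 * c_k * delta)    # simultaneous-perturbation gradient estimate
    theta = theta - a_k * g_hat
print("||theta|| after 2000 iterations:", np.linalg.norm(theta))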
ABSTRACT
This paper studies the performance of multi-recombinative evolution strategies using isotropically distributed mutations with cumulative step length adaptation when applied to optimising cigar functions. Cigar functions are convex-quadratic objective functions characterised by the presence of only two distinct eigenvalues of their Hessian, the smaller of which occurs with multiplicity one. A simplified model of the strategy's behaviour is developed. Using it, expressions are derived that approximately describe the stationary state attained when the mutation strength is adapted. The performance achieved by cumulative step length adaptation is compared with that obtained when using optimally adapted step lengths.
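As a point of reference, a cigar function in the sense described can be written as below; the axis ratio xi is an illustrative assumption.

import numpy as np

def cigar(x, xi=100.0):
    # f(x) = x_1^2 + xi * sum_{i>=2} x_i^2: Hessian eigenvalues 2 (multiplicity 1) and 2*xi
    return x[0]**2 + xi * np.sum(x[1:]**2)

x = np.ones(10)
print(cigar(x))   # 1 + 100 * 9 = 901.0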
Subjects
Algorithms, Genetic Models, Search Engine, Artificial Intelligence, Computer Simulation
ABSTRACT
Evolutionary algorithms are frequently applied to dynamic optimization problems in which the objective varies with time. It is desirable to gain an improved understanding of the influence of different genetic operators and of the parameters of a strategy on its tracking performance. An approach that has proven useful in the past is to mathematically analyze the strategy's behavior in simple, idealized environments. The present paper investigates the performance of a multiparent evolution strategy that employs cumulative step length adaptation for an optimization task in which the target moves linearly with uniform speed. Scaling laws that quite accurately describe the behavior of the strategy and that greatly contribute to its understanding are derived. It is shown that in contrast to previously obtained results for a randomly moving target, cumulative step length adaptation fails to achieve optimal step lengths if the target moves in a linear fashion. Implications for the choice of population size parameters are discussed.
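The dynamic environment described above can be modeled, for instance, as a quadratic sphere whose optimizer drifts with constant velocity; the direction of motion, the speed, and the quadratic distance measure in the sketch below are illustrative assumptions.

import numpy as np

N, v = 10, 0.05                          # dimension and assumed target speed
e = np.zeros(N); e[0] = 1.0              # assumed (fixed) direction of motion

def target(t):
    return v * t * e                     # target position after t generations

def f(x, t):
    return np.sum((x - target(t))**2)    # squared distance to the moving target

x = np.zeros(N)
print([round(float(f(x, t)), 3) for t in (0, 10, 20)])   # [0.0, 0.25, 1.0] for a stationary candidate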
Subjects
Algorithms, Genetic Models, Biological Adaptation, Biological Evolution, Population Genetics, Mutation
ABSTRACT
It is known that, in the absence of noise, no improvement in local performance can be gained from retaining candidate solutions other than the best one. Yet, it has been shown experimentally that, in the presence of noise, operating with a non-singular population of candidate solutions can have a marked and positive effect on the local performance of evolution strategies. So as to determine the reasons for the improved performance, we have studied the evolutionary dynamics of the (µ, λ)-ES in the presence of noise. Considering a simple, idealized environment, we have developed a moment-based approach that uses recent results involving concomitants of selected order statistics. This approach yields an intuitive explanation for the performance advantage of multi-parent strategies in the presence of noise. It is then shown that the idealized dynamic process considered does bear relevance to optimization problems in high-dimensional search spaces.
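The role of concomitants of order statistics can be made tangible with a small Monte Carlo experiment: candidates are ranked by a noise-perturbed fitness value, but the realized gain is the average true value of those selected. The Gaussian fitness/noise model and all parameters below are illustrative assumptions, not the idealized environment analyzed in the paper.

import numpy as np

rng = np.random.default_rng(8)
lam, mu, trials = 40, 10, 20_000
for eps in (0.0, 1.0, 4.0):                           # noise strength
    gains = np.empty(trials)
    for t in range(trials):
        true = rng.normal(size=lam)                   # true fitness advantages
        noisy = true + eps * rng.normal(size=lam)     # noisy observations used for ranking
        sel = np.argsort(noisy)[-mu:]                 # select the mu noisy-best candidates
        gains[t] = true[sel].mean()                   # mean of the concomitants (realized gain)
    print(f"noise eps = {eps:3.1f}   mean realized gain ~ {gains.mean():.3f}")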
Subjects
Algorithms, Biological Evolution, Computer Simulation, Genetic Models, Statistical Models
ABSTRACT
Cumulative step-size adaptation (CSA) based on path length control is regarded as a robust alternative to the standard mutative self-adaptation technique in evolution strategies (ES), guaranteeing an almost optimal control of the mutation operator. This paper shows that the underlying basic assumption in CSA, namely the perpendicularity of expected consecutive steps, does not necessarily guarantee optimal progress performance for the (µ/µI, λ) intermediate recombinative ES.