Ph.D.
Group: Learning and Optimization
Analyse Markovienne des Stratégies d'Évolution (Markov Chain Analysis of Evolution Strategies)
Started on 01/10/2011
Advisors: HANSEN, Nikolaus; AUGER, Anne
Funding:
Affiliation: Université Paris-Saclay
Laboratory: LRI AO
Defended on 24/09/2015, committee:
Thesis advisors:
Mr Nikolaus Hansen, research director, Inria, Université Paris-Sud
Ms Anne Auger, research scientist, Inria, Université Paris-Sud
Reviewers:
Mr Dirk Arnold, professor, Faculty of Computer Science, Dalhousie University
Mr Tobias Glasmachers, junior professor, Institut für Neuroinformatik, Ruhr-Universität Bochum
Examiners:
Ms Gersende Fort, research director, CNRS
Mr François Yvon, professor, LIMSI, Université Paris-Sud
Research activities:
Abstract:
In this dissertation, an analysis of Evolution Strategies (ESs) is conducted using the theory of Markov chains. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed.
ESs are so-called "black-box" stochastic optimization algorithms, i.e. the information available on the function to be optimized is limited to the values it associates with points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without a constraint, are essential components of the proofs of convergence of ESs on wide classes of functions.
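To make the black-box setting concrete, here is a minimal Python sketch of a plain (1,λ)-ES with constant step-size: only objective-function values are used to rank candidates, and no gradients are computed. The linear objective f(x) = x[0], the dimension, the population size and the constant step-size are illustrative assumptions; this is not one of the adaptive algorithms analyzed in the thesis.

    import numpy as np

    def one_comma_lambda_es(f, x0, sigma=1.0, lam=10, iterations=100, seed=0):
        """Minimal (1,lambda)-ES with constant step-size (illustrative sketch).
        Only the values f(x) are used: this is the black-box setting."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(iterations):
            # Sample lambda offspring from an isotropic Gaussian around the parent.
            offspring = x + sigma * rng.standard_normal((lam, x.size))
            # Select the best offspring by objective value only (no gradients).
            x = offspring[np.argmin([f(y) for y in offspring])]
        return x

    # On a linear function such as f(x) = x[0], the parent drifts towards -infinity.
    print(one_comma_lambda_es(lambda x: x[0], x0=np.zeros(5)))

On such a linear function the parent keeps drifting in the direction of improvement, which illustrates why divergence, rather than convergence, is the desired behaviour of an ES on linear functions.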
This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents previously established links between ESs and Markov chains.
The contributions of this thesis are then presented:
o General mathematical tools, applicable to a wider range of problems, are developed. These tools make it possible to easily prove specific properties (irreducibility, aperiodicity, and the fact that compact sets are small sets) for the Markov chains studied; the definitions of these properties are recalled after this list. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult.
o Then different ESs are analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle infeasible solutions (a simplified sketch of this setting is given after this list). We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on the parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally, we study an ES with constant step-size and a sampling distribution that can be non-Gaussian on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint.
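For reference, the Markov chain properties mentioned in the first item above are the standard notions of Meyn and Tweedie's theory. They are stated here informally, for a chain with transition kernel P on a state space Z; the specific normalized chains constructed in the thesis are not reproduced here.

    % Standard notions from Markov chain theory (Meyn & Tweedie), stated informally
    % for a chain with transition kernel $P$ on a state space $Z$.
    \begin{itemize}
      \item $\varphi$-irreducibility: there exists a non-trivial measure $\varphi$ on $Z$ such that
            $\varphi(A) > 0$ implies $\sum_{t \ge 1} P^t(z, A) > 0$ for every starting point $z \in Z$.
      \item Small set: a set $C$ is small if there exist $m \ge 1$ and a non-trivial measure $\nu_m$ with
            $P^m(z, A) \ge \nu_m(A)$ for all $z \in C$ and all measurable $A \subseteq Z$.
      \item Aperiodicity: there is no partition of (essentially all of) $Z$ into $d \ge 2$ disjoint sets
            $D_1, \dots, D_d$ visited cyclically, i.e.\ $P(z, D_{i+1}) = 1$ for all $z \in D_i$ (indices modulo $d$).
    \end{itemize}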
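The sketch announced in the second item above follows: a simplified (1,λ)-ES with cumulative step-size adaptation (CSA) on a linear function with a linear constraint, where infeasible offspring are resampled. The objective f(x) = x[0], the constraint x[1] <= 0, and the CSA parameter values are assumptions made for illustration; they do not reproduce the exact constraint angle or parametrization studied in the thesis.

    import numpy as np

    def csa_es_with_resampling(n=10, lam=10, iterations=300, sigma0=1.0, seed=0):
        """(1,lambda)-ES with cumulative step-size adaptation (CSA); infeasible
        offspring are resampled. Illustrative sketch, not the thesis' exact setting."""
        rng = np.random.default_rng(seed)
        f = lambda x: x[0]                        # linear objective, to be minimized
        feasible = lambda x: x[1] <= 0.0          # linear constraint, handled by resampling
        c, d = 4.0 / (n + 4.0), 1.0               # cumulation and damping parameters (illustrative)
        chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))   # approximation of E||N(0, I_n)||
        x, sigma, path = np.zeros(n), float(sigma0), np.zeros(n)
        for _ in range(iterations):
            steps = []
            for _ in range(lam):
                z = rng.standard_normal(n)
                while not feasible(x + sigma * z):    # resample until the offspring is feasible
                    z = rng.standard_normal(n)
                steps.append(z)
            z_best = min(steps, key=lambda z: f(x + sigma * z))     # select the best offspring
            x = x + sigma * z_best
            path = (1 - c) * path + np.sqrt(c * (2 - c)) * z_best   # cumulative evolution path
            sigma *= np.exp((c / d) * (np.linalg.norm(path) / chi_n - 1.0))   # CSA update
        return x, sigma

    x, sigma = csa_es_with_resampling()
    print("log of the final step-size:", np.log(sigma))

Whether the logarithm of the step-size increases or decreases over such a run depends on the constraint and on the CSA parameters, mirroring the convergence/divergence dichotomy described above.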
Finally, these results are summed up and discussed, and perspectives for future work are explored.