GSoC 2013 Constraints
Evolutionary optimization techniques have been successfully applied to a wide range of real-life optimization problems, and a variety of them have already been integrated into the PaGMO/PyGMO collection of heuristic optimization methods. However, they are still limited to solving unconstrained optimization problems. Several constraint handling approaches for evolutionary algorithms have been developed in recent years; a survey of techniques [1] divides them into five main classes: Penalty Methods, Special Representations and Operators, Repair Algorithms, Separation of Objectives and Constraints, and Hybrid Methods. Among them, a few approaches have been identified as promising techniques to be integrated into the existing PaGMO/PyGMO infrastructure. In particular:
- Four Penalty Methods: Death Penalty (infeasible individuals are rejected in the selection procedure regardless of their level of infeasibility [2]); Self Adaptive Fitness Formulation (the fitness of infeasible individuals is increased so as to favor solutions that are nearly feasible and also have a good objective function value [3]); Stochastic Ranking (balancing the influence of the objective function and the penalty function when assigning individual fitness during the ranking of the population [4]); and Co-evolutionary Penalty (collaborative evolution of two populations, the second one encoding the penalty factors to be used in the fitness evaluation of the individuals of the first population [5]).
- Separation of Objective and Constraints: Multi-objective Optimization (the constrained optimization problem is transformed into a multiobjective optimization problem where each constraint violation represents an additional objective [6]).
- Hybrid Methods: Immune System (part of the feasible population is turned into an antigen population; the infeasible solutions generate antibodies that evolve, with particular immune operators, until they become sufficiently similar to the antigen population, and are then recombined to continue the evolution [7]); Local Repair Techniques (infeasible solutions are locally repaired with gradient-based unconstrained optimization techniques minimizing the sum of the constraint violations [8]).
- Implement the selected constraints handling techniques listed above into the PaGMO/PyGMO framework.
- Implement the 24 constrained benchmark problems presented in [9].
- Test the Evolutionary Algorithms of PaGMO/PyGMO (Differential Evolution, Particle Swarm Optimization, Genetic Algorithm, Artificial Bee Colony and Covariance Matrix Adaptation-ES) against all the constraints handling techniques.
In other words, the project consists of implementing a set of benchmark problems for constrained optimization and several constraint handling techniques, and finally running all the evolutionary algorithms in PyGMO/PaGMO with all the constraint handling techniques (implemented during the project) on the set of problems. As a result, a full picture of the performance and best practices of constraint handling for all the evolutionary algorithms available in PyGMO/PaGMO (Differential Evolution, Particle Swarm Optimization, Genetic Algorithm, Artificial Bee Colony, Covariance Matrix Adaptation-ES, ...) will be drawn, and a stable, automated benchmark of constraint handling techniques will be set up for people interested in further studies. Once the performance of the algorithms has been assessed, and most likely after the coding time given by GSoC, typical trajectory optimization problems will be set up to effectively test the performance of the constraint handling techniques on real mission analysis problems.
Implementation Details: Each approach has a different implementation strategy:
- Death Penalty and Multi-objective Optimization redefine the optimization problem statically (i.e., the new optimization problem stays the same during the whole evolution), so two meta-problems need to be created for these techniques. Each meta-problem takes the original constrained problem as input and generates a new unconstrained problem (single-objective and multi-objective, respectively).
- Self Adaptive Fitness Formulation and Stochastic Ranking redefine the optimization problem dynamically (information is extracted from the current population to assign the new fitness function), so a meta-algorithm that internally invokes the evolution of the population on a modified problem needs to be implemented. The meta-algorithm is constructed from a heuristic algorithm and the constrained problem.
- Co-Evolutionary Penalty and Immune System are meta-algorithms as well. They are constructed, as before, from a heuristic algorithm and a constrained problem. Internally, the method preprocesses the population and defines a modified problem suitable for the evolution.
- Local Repair Technique is a meta-algorithm that takes two algorithms as arguments (a heuristic and a local one); it instantiates a new population (the infeasible solutions) with an unconstrained problem defined by the sum of the constraint violations, and invokes the evolve method of the local technique to solve it.
References
[1] Coello Coello C.A. Theoretical and Numerical Constraint-Handling Techniques Used with Evolutionary Algorithms: A Survey of the State of the Art, Comput. Methods Appl. Mech. Engrg., 191, pp. 1245-1287, 2002.
[2] Bäck, T., Hoffmeister, F. and Schwefel, H.-P. A Survey of Evolution Strategies, Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann, pp. 2-9, 1991.
[3] Farmani, R. and Wright, J. Self-adaptive fitness formulation for constrained optimization, IEEE Transactions on Evolutionary Computation, 7(5), pp. 445-455, 2003.
[4] Runarsson T.P. and Xin Yao, Stochastic Ranking for Constrained Evolutionary Optimization, IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, pp. 284-294, 2000.
[5] Coello Coello C.A. Use of a Self-Adaptive Penalty Approach for Engineering Optimization Problems, Comput. Ind., 41(2), pp. 113-127, 2000.
[6] Coello Coello C.A. Treating Constraints as Objectives for Single-Objective Evolutionary Optimization, Engrg. Optim., 32(3), pp. 275-308, 2000.
[7] Hajela P. and Lee J. Constrained Genetic Search via Schema Adaptation. An Immune Network Solution, Proceedings of the First World Congress of Structural and Multidisciplinary Optimization, pp. 915-920, 1995.
[8] Belur S.V. CORE: Constrained Optimization by Random Evolution, Late Breaking Papers at the Genetic Programming Conference, pp. 280-286, 1997.
[9] J. J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. N. Suganthan, C. A. Coello Coello, K. Deb, Problem Definitions and Evaluation Criteria for the CEC 2006, Special Session on Constrained Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2006.
Skills Required: Good knowledge in C++ and Python
Skills that help: Not being afraid of going beyond limits
Mentors: Annalisa Riccardi and Marcus Märtens
Student: Jérémie Labroquère
April 20 – May 26 (Before the official coding time):
- Get familiar with the code specificities and already implemented algorithms/examples.
- Read the papers provided on the constraints handling techniques.
- Set up the meta-problem structure
- Implement the test cases g04 and g08 (quadratic and multimodal objective functions with nonlinear inequality constraints; the death penalty approach will most likely not be able to handle equality constraints) + documentation.
- Implement the death penalty constraint handling to get used to the code and to check that everything is understood. Add documentation for this first constraint handling technique.
- Validate the death penalty constraint handling on the g04 and g08 problems with all the EA algorithms.
- Link the problem and the algorithm to PyGMO. Might get deeper into boost::python and into already linked problems to understand and apply the logic used.
May 27 (selection of applicants and slots is determined) – June 16:
- Set up the future goals with the mentors, discuss the first optimization performed under constraints (remarks on the code, ...) and choose the references to be used for the algorithm implementations.
- Implementation of all the test cases/problems, simple validation against the best known values //done, to be checked
- Set up a unit test checking all the constraint handling techniques against all the test cases, producing a comprehensive result array of the best algorithms //first version done, need to add a method to retrieve the number of equality and inequality constraints in the base class
- Best solutions of algorithms to be moved in the base class //done, waiting for last changes in best known vectors
- Expose initialize_best() to python //done, waiting for last changes in best known vectors
- Extend the tutorial 1 by explaining how to implement the initialization of the best known solution and the compare_fitness_impl function on a maximization problem //waiting for last changes in best known vectors
- Address the ticket 21 //done, to be checked - cec2006 problems updated
- Validation
- Implementation of Self Adaptive Fitness Formulation
- Implementation of Co-Evolutionary Penalty
- Validation
- Mid term corrections on the implemented strategies
- Implementation of Immune system
- Validation
- Modify the population class to add a repair capability
- Prepare meta-algorithm for local repairing techniques
- Implementation of Local Repair Technique
- Validation
- Validate the problems on all the test cases
- Analyze the validation results (possibly write a report)
- Write a tutorial on constraints handling implementation
- Set up a benchmark to test all the algorithms with all the constraint handling techniques against more complicated problems
- Functionality improvements and bug fixes following the benchmarking results.
- Set up unit tests for any constraint handling techniques still missing them
- Final documentation correction
- One week is kept in case of delay.
- Understand and set up trajectory optimization test cases (if multiple)
- Run the most promising constraint handling techniques/all the algorithms with all the constraint handling techniques against the test cases. Whether to choose a certain constraint handling technique or all of them will depend on the cost of the problem's objective function.
- Possibly write a report/paper
This section contains the different algorithms that will be implemented in PaGMO/PyGMO.
'''Description'''
The algorithm splits the initial constrained problem into m+1 problems, where m is the number of constraints. The initial size of each sub-population is fixed (e.g., 10 individuals for 23 constraints, 20 for 8 constraints, 40 for 4 constraints).
The original algorithm is as follows:
- The population of size N is generated.
- The population of size N is randomly divided into m+1 equal sized sub-populations Pi.
- Each solution in sub-population Pi is assigned a fitness value as follows:
- The first subset is guided by the objective function as fitness function.
- For the other ones, we use:
if gj(X) > 0 then fitness = -gj(X)
else if v ≠ 0 then fitness = -v
else fitness = f
where gj(X) is the constraint corresponding to sub-population j+1 and v is the number of violated constraints.
- Each of the sub-populations is evolved with the selected multi-objective algorithm.
- The first sub-population keeps the objective function as fitness.
- For all the other ones:
if gj(X) > 0 then fitness = -gj(X)
else if v ≠ 0 then fitness = -v
else fitness = f
A question remains: for minimization problems, if the constraint gj(X) is not satisfied, then gj(X) > 0. So at first we would like to minimize gj(X) until it is satisfied; once the constraint is satisfied, the aim is to reduce the number of unsatisfied constraints. What happens if, in one population, some individuals have the current gj(X) satisfied AND some others do not? Which one wins? The same question holds for the fitness f. For a minimization problem, I suppose the fitness for the constraint objectives should then be redefined as:
if gj(X) > 0 then fitness = gj(X)
else if v ≠ 0 then fitness = v
else fitness = f
Reference: Coello Coello C.A. Treating Constraints as Objectives for Single-Objective Evolutionary Optimization, Engrg. Optim., 32(3), pp. 275-308, 2000.
Ghosh, A., & Dehuri, S. (2005). Evolutionary algorithms for multi-criteria optimization: A Survey. International journal.
Implementation
NOT FINISHED
The implementation is a meta-problem (not sure whether this is entirely possible...).
Name of the class: problems/constrained_multi_objective // no good name found yet, avoiding the word "constrained"...! The syntax below is not correct; it only sketches the idea of the implementation.
Constructor: IN: base problem OUT: new problem with a fitness vector f of size number_of_constraints + 1
Documentation: the population size should be a multiple of the number of constraints + 1
problems/constrained_multi_objective.cpp
constrained_multi_objective::constrained_multi_objective(problem &initial_problem, int initial_population_size)
    : base(/* fitness size = */ number_of_constraints + 1)
{
    // the population must split evenly into number_of_constraints + 1 sub-populations
    check(initial_population_size % (number_of_constraints + 1) == 0);
    m_sub_population_size = initial_population_size / (number_of_constraints + 1);
}

void constrained_multi_objective::objfun_impl(fitness_vector &f, const decision_vector &x) const
{
    // gives the decision vector a position in a sub-population
    // (the position is stored in a set)
    int subset_position = this->subset(x);

    switch (subset_position) {
        case 0:
            // what value do we give for the other components? feels like we
            // should split the problems before!
            f[0] = m_original_problem->objfun(x);
            break;
        default:
            // sub-population j+1 is guided by constraint j, following:
            //   if gj(X) > 0 then fitness = gj(X)
            //   else if v ≠ 0 then fitness = v
            //   else fitness = f
            for (int i = 0; i < number_of_constraints; i++) {
                if (!test_constraint(constraints_vector, i)) {
                    f[i] = m_original_problem->objfun(x);
                }
            }
    }
}

int constrained_multi_objective::subset(const decision_vector &x) const
{
    // returns the subset x lives in; if the decision vector does not live in
    // a subset yet, it is randomly added to one of those with the minimum
    // number of elements
}
Constraint violations are combined into a single infeasibility measure, which is used to form a two-stage penalty applied to infeasible solutions.
If we have the definition of the constraints as:
<math>c_j(\vec{X}) = \max(0, |h_j(\vec{X})| - \delta), \quad \text{if} \quad 1 \leq j \leq q</math>
<math>c_j(\vec{X}) = \max(0, g_j(\vec{X})), \quad \text{if} \quad q+1 \leq j \leq m</math>
Then the infeasibility measure is defined as:
<math>i(\vec{X}) = \left( \sum_{j=1}^{m} \frac{c_{j}(\vec{X})}{c_{max,j}} \right) / m</math>
where <math>c_{max,j}</math> is the maximum value of the jth constraint violation in the current population.
The algorithm is as follows:
if (p contains only feasible solutions) {
    apply_penalty = false; // might just skip all the computations, as the penalty technique is not needed
}
if (p has at least one feasible solution) {
    x_hat_down = feasible individual with the lowest objective value in p;
    // determination of x_hat_up
    if (one or more infeasible solutions in p have an objective function <= x_hat_down) {
        if (number of infeasible individuals having the highest i(X) == 1) {
            x_hat_up = highest i(X) in p with objective function value < x_hat_down;
        } else {
            x_hat_up = solution with the maximum infeasibility value and the highest objective function value;
        }
    } else if (all infeasible solutions have an objective function > x_hat_down) {
        if (number of infeasible individuals having the highest i(X) == 1) {
            x_hat_up = highest i(X) in p;
        } else {
            x_hat_up = solution with the maximum infeasibility value and the highest objective function value;
        }
        apply_penalty = false;
    }
} else {
    x_hat_down = infeasible individual with the lowest i(X) in p; // here I suppose we should do the same and test if lowest i(X) == 1
    x_hat_up = infeasible individual with the highest i(X) in p; // same comment
}
// evaluate the scaling factor gamma = equation (9)
for (each infeasible solution) {
    // is the second penalty applied to infeasible solutions only? I suppose so, but it is not defined in the paper...
    fitness_penalty_two(X) = equation (8);
}
// back to fitness
for (each infeasible solution) {
    // is this only for infeasible solutions? Not defined in the paper either...
    fitness(X) = maximum penalized value in p - fitness_penalty_two(X);
    // need to store the maximum penalized value; should be done in the previous loop?
}
Reference: Farmani, R. and Wright, J.A. (2003). Self-adaptive fitness formulation for constrained optimization. IEEE Transactions on Evolutionary Computation, 7(5), pp. 445-455.