From 36c109c8d3d371c346e2dd9f1d3e965f15494b51 Mon Sep 17 00:00:00 2001
From: jendrikseipp
Date: Fri, 10 May 2024 08:19:56 +0000
Subject: [PATCH] deploy: 67f4d943a784fb4e19a92f1a57eb96978d61517d

---
 SearchAlgorithm/index.html | 11 +----------
 search/search_index.json   |  2 +-
 sitemap.xml.gz             | Bin 127 -> 127 bytes
 3 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/SearchAlgorithm/index.html b/SearchAlgorithm/index.html
index f6cb9d368..2eb242841 100644
--- a/SearchAlgorithm/index.html
+++ b/SearchAlgorithm/index.html
@@ -1009,17 +1009,8 @@

Dump the reachable state space.

-dump_reachable_search_space(verbosity=normal)
+dump_reachable_search_space()
 
-  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
-    • silent: only the most basic output
-    • normal: relevant information to monitor progress
-    • verbose: full output
-    • debug: like verbose with additional debug output
eager(open, reopen_closed=false, f_eval=<none>, preferred=[], pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)
 
diff --git a/search/search_index.json b/search/search_index.json
index 66627c20b..a17fd78ea 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

Choose a plugin type on the left to see its documentation.

"},{"location":"AbstractTask/","title":"AbstractTask","text":""},{"location":"AbstractTask/#cost-adapted_task","title":"Cost-adapted task","text":"

A cost-adapting transformation of the root task.

adapt_costs(cost_type=normal)\n
  • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
    • normal: all actions are accounted for with their real cost
    • one: all actions are accounted for as unit cost
    • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
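Example: a cost-adapted task is typically passed to a heuristic via its transform option, for instance with the FF heuristic documented on the Evaluator page:

    ff(transform=adapt_costs(cost_type=one))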
"},{"location":"AbstractTask/#no_transform","title":"no_transform","text":"
no_transform()\n
"},{"location":"AbstractionGenerator/","title":"AbstractionGenerator","text":"

Create abstractions for cost partitioning heuristics.

"},{"location":"AbstractionGenerator/#cartesian_abstraction_generator","title":"Cartesian abstraction generator","text":"
cartesian(subtasks=[landmarks(order=random), goals(order=random)], max_states=infinity, max_transitions=1M, max_time=infinity, pick_flawed_abstract_state=batch_min_h, pick_split=max_cover, tiebreak_split=max_refined, memory_padding=500, dot_graph_verbosity=silent, random_seed=-1, max_concrete_states_per_abstract_state=infinity, max_state_expansions=1M, verbosity=normal)\n
  • subtasks (list of SubtaskGenerator): subtask generators
  • max_states (int [1, infinity]): maximum sum of abstract states over all abstractions
  • max_transitions (int [0, infinity]): maximum sum of state-changing transitions (excluding self-loops) over all abstractions
  • max_time (double [0.0, infinity]): maximum time in seconds for building abstractions
  • pick_flawed_abstract_state ({first, first_on_shortest_path, random, min_h, max_h, batch_min_h}): flaw-selection strategy
    • first: Consider first encountered flawed abstract state and a random concrete state.
    • first_on_shortest_path: Follow the arbitrary solution in the shortest path tree (no flaw search). Consider first encountered flawed abstract state and a random concrete state.
    • random: Collect all flawed abstract states and then consider a random abstract state and a random concrete state.
    • min_h: Collect all flawed abstract states and then consider a random abstract state with minimum h value and a random concrete state.
    • max_h: Collect all flawed abstract states and then consider a random abstract state with maximum h value and a random concrete state.
    • batch_min_h: Collect all flawed abstract states and iteratively refine them (by increasing h value). Only start a new flaw search once all remaining flawed abstract states are refined. For each abstract state consider all concrete states.
  • pick_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • tiebreak_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy for breaking ties
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • memory_padding (int [0, infinity]): amount of extra memory in MB to reserve for recovering from out-of-memory situations gracefully. When the memory runs out, we stop refining and start the search. Due to memory fragmentation, the memory used for building the abstraction (states, transitions, etc.) often can't be reused for things that require big contiguous blocks of memory. It is for this reason that we require a rather large amount of memory padding by default.
  • dot_graph_verbosity ({silent, write_to_console, write_to_file}): verbosity of printing/writing dot graphs
    • silent:
    • write_to_console:
    • write_to_file:
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • max_concrete_states_per_abstract_state (int [1, infinity]): maximum number of flawed concrete states stored per abstract state
  • max_state_expansions (int [1, infinity]): maximum number of state expansions per flaw search
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
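Example: Cartesian abstraction generators are passed to a cost partitioning heuristic via its abstractions option; a minimal sketch using the gzocp heuristic from the Evaluator page:

    gzocp(abstractions=[cartesian()])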
"},{"location":"AbstractionGenerator/#projections","title":"projections","text":"

Projection generator

projections(patterns=<none>, dominance_pruning=false, combine_labels=true, create_complete_transition_system=false, verbosity=normal)\n
  • patterns (PatternCollectionGenerator): pattern generation method
  • dominance_pruning (bool): prune dominated patterns
  • combine_labels (bool): group labels that only induce parallel transitions
  • create_complete_transition_system (bool): create explicit transition system
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
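Example: a sketch that feeds systematic patterns of size 2 into the maximize heuristic from the Evaluator page:

    maximize(abstractions=[projections(patterns=systematic(2))])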
"},{"location":"ConstraintGenerator/","title":"ConstraintGenerator","text":""},{"location":"ConstraintGenerator/#delete_relaxation_constraints","title":"Delete relaxation constraints","text":"

Operator-counting constraints based on the delete relaxation. By default the constraints encode an easy-to-compute relaxation of h^+^. With the right settings, these constraints can be used to compute the optimal delete-relaxation heuristic h^+^ (see example below). For details, see

  • Tatsuya Imai and Alex Fukunaga. On a practical, integer-linear programming model for delete-free tasks and its use as a heuristic for cost-optimal planning. Journal of Artificial Intelligence Research 54:631-677. 2015.

    delete_relaxation_constraints(use_time_vars=false, use_integer_vars=false)

  • use_time_vars (bool): use variables for time steps. With these additional variables the constraints enforce an order between the selected operators. Leaving this off (default) corresponds to the time relaxation by Imai and Fukunaga. Switching it on can increase the heuristic value, but it also increases the size of the constraints, which has a strong impact on runtime. Constraints involving time variables use a big-M encoding, so they are more useful if used with integer variables.

  • use_integer_vars (bool): restrict auxiliary variables to integer values. These variables encode whether operators are used, facts are reached, which operator first achieves which fact, and in which order the operators are used. Restricting them to integers generally improves the heuristic value at the cost of increased runtime.

Example: To compute the optimal delete-relaxation heuristic h^+^, use

operatorcounting([delete_relaxation_constraints(use_time_vars=true, use_integer_vars=true)], use_integer_operator_counts=true)\n
"},{"location":"ConstraintGenerator/#lm-cut_landmark_constraints","title":"LM-cut landmark constraints","text":"

Computes a set of landmarks in each state using the LM-cut method. For each landmark L the constraint sum_{o in L} Count_o >= 1 is added to the operator-counting LP temporarily. After the heuristic value for the state is computed, all temporary constraints are removed again. For details, see

  • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

  • Blai Bonet. An admissible heuristic for SAS+ planning obtained from the state equation. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2268-2274. AAAI Press, 2013.

    lmcut_constraints()

"},{"location":"ConstraintGenerator/#saturated_posthoc_optimization_constraints_for_abstractions","title":"(Saturated) posthoc optimization constraints for abstractions","text":"
pho_abstraction_constraints(abstractions=<none>, saturated=true)\n
  • abstractions (list of AbstractionGenerator): abstraction generation methods
  • saturated (bool): use saturated instead of full operator costs in constraints
"},{"location":"ConstraintGenerator/#posthoc_optimization_constraints","title":"Posthoc optimization constraints","text":"

The generator will compute a PDB for each pattern and add the constraint h(s) <= sum_{o in relevant(h)} Count_o. For details, see

  • Florian Pommerening, Gabriele Roeger and Malte Helmert. Getting the Most Out of Pattern Databases for Classical Planning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2357-2364. AAAI Press, 2013.

    pho_constraints(patterns=systematic(2))

  • patterns (PatternCollectionGenerator): pattern generation method

"},{"location":"ConstraintGenerator/#state_equation_constraints","title":"State equation constraints","text":"

For each fact, a permanent constraint is added that considers the net change of the fact, i.e., the total number of times the fact is added minus the total number of times it is removed. The bounds of each constraint depend on the current state and the goal state and are updated in each state. For details, see

  • Menkes van den Briel, J. Benton, Subbarao Kambhampati and Thomas Vossen. An LP-based heuristic for optimal planning. In Proceedings of the Thirteenth International Conference on Principles and Practice of Constraint Programming (CP 2007), pp. 651-665. Springer-Verlag, 2007.

  • Blai Bonet. An admissible heuristic for SAS+ planning obtained from the state equation. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2268-2274. AAAI Press, 2013.

  • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

    state_equation_constraints(verbosity=normal)

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
"},{"location":"Evaluator/","title":"Evaluator","text":"

An evaluator specification is either a newly created evaluator instance or an evaluator that has been defined previously. This page describes how one can specify a new evaluator instance. For re-using evaluators, see OptionSyntax#Evaluator_Predefinitions.

If the evaluator is a heuristic, the following definitions are used for the properties listed in the descriptions below:

  • admissible: h(s) <= h*(s) for all states s
  • consistent: h(s) <= c(s, s') + h(s') for all states s connected to states s' by an action with cost c(s, s')
  • safe: h(s) = infinity is only true for states with h*(s) = infinity
  • preferred operators: this heuristic identifies preferred operators

This feature type can be bound to variables using let(variable_name, variable_definition, expression) where expression can use variable_name. Predefinitions using --evaluator, --heuristic, and --landmarks are automatically transformed into let-expressions but are deprecated.
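For example, a let-binding allows one evaluator instance to be reused both for ranking states and for preferred operators (a sketch assuming the eager_greedy search algorithm):

    let(hff, ff(), eager_greedy([hff], preferred=[hff]))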

"},{"location":"Evaluator/#additive_heuristic","title":"Additive heuristic","text":"
add(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: yes for tasks without axioms
  • preferred operators: yes
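Example: a typical satisficing sketch, assuming the lazy_greedy search algorithm:

    lazy_greedy([add()], preferred=[add()])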
"},{"location":"Evaluator/#blind_heuristic","title":"Blind heuristic","text":"

Returns the cost of the cheapest action for non-goal states and 0 for goal states.

blind(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
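Example: the simplest optimal baseline, assuming the standard astar search algorithm:

    astar(blind())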
"},{"location":"Evaluator/#context-enhanced_additive_heuristic","title":"Context-enhanced additive heuristic","text":"
cea(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: no
  • preferred operators: yes
"},{"location":"Evaluator/#additive_cartesian_cegar_heuristic","title":"Additive Cartesian CEGAR heuristic","text":"

See the paper introducing counterexample-guided Cartesian abstraction refinement (CEGAR) for classical planning:

  • Jendrik Seipp and Malte Helmert. Counterexample-guided Cartesian Abstraction Refinement. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 347-351. AAAI Press, 2013.

and the paper showing how to make the abstractions additive:

  • Jendrik Seipp and Malte Helmert. Diverse and Additive Cartesian Abstraction Heuristics. In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 289-297. AAAI Press, 2014.

For more details on Cartesian CEGAR and saturated cost partitioning, see the journal paper

  • Jendrik Seipp and Malte Helmert. Counterexample-Guided Cartesian Abstraction Refinement for Classical Planning. Journal of Artificial Intelligence Research 62:535-577. 2018.

For a description of the incremental search, see the paper

  • Jendrik Seipp, Samuel von Allmen and Malte Helmert. Incremental Search for Counterexample-Guided Cartesian Abstraction Refinement. In Proceedings of the 30th International Conference on Automated Planning and Scheduling (ICAPS 2020), pp. 244-248. AAAI Press, 2020.

Finally, we describe advanced flaw selection strategies here:

  • David Speck and Jendrik Seipp. New Refinement Strategies for Cartesian Abstractions. In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. to appear. AAAI Press, 2022.

    cegar(subtasks=[landmarks(order=random), goals(order=random)], max_states=infinity, max_transitions=1M, max_time=infinity, pick_flawed_abstract_state=batch_min_h, pick_split=max_cover, tiebreak_split=max_refined, memory_padding=500, dot_graph_verbosity=silent, random_seed=-1, max_concrete_states_per_abstract_state=infinity, max_state_expansions=1M, use_general_costs=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • subtasks (list of SubtaskGenerator): subtask generators

  • max_states (int [1, infinity]): maximum sum of abstract states over all abstractions
  • max_transitions (int [0, infinity]): maximum sum of state-changing transitions (excluding self-loops) over all abstractions
  • max_time (double [0.0, infinity]): maximum time in seconds for building abstractions
  • pick_flawed_abstract_state ({first, first_on_shortest_path, random, min_h, max_h, batch_min_h}): flaw-selection strategy
    • first: Consider first encountered flawed abstract state and a random concrete state.
    • first_on_shortest_path: Follow the arbitrary solution in the shortest path tree (no flaw search). Consider first encountered flawed abstract state and a random concrete state.
    • random: Collect all flawed abstract states and then consider a random abstract state and a random concrete state.
    • min_h: Collect all flawed abstract states and then consider a random abstract state with minimum h value and a random concrete state.
    • max_h: Collect all flawed abstract states and then consider a random abstract state with maximum h value and a random concrete state.
    • batch_min_h: Collect all flawed abstract states and iteratively refine them (by increasing h value). Only start a new flaw search once all remaining flawed abstract states are refined. For each abstract state consider all concrete states.
  • pick_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • tiebreak_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy for breaking ties
    • random: select a random variable (among all eligible variables)
    • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
    • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
    • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
    • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
    • min_cg: order by increasing position in partial ordering of causal graph
    • max_cg: order by decreasing position in partial ordering of causal graph
    • max_cover: compute split that covers the maximum number of flaws for several concrete states.
  • memory_padding (int [0, infinity]): amount of extra memory in MB to reserve for recovering from out-of-memory situations gracefully. When the memory runs out, we stop refining and start the search. Due to memory fragmentation, the memory used for building the abstraction (states, transitions, etc.) often can't be reused for things that require big contiguous blocks of memory. It is for this reason that we require a rather large amount of memory padding by default.
  • dot_graph_verbosity ({silent, write_to_console, write_to_file}): verbosity of printing/writing dot graphs
    • silent:
    • write_to_console:
    • write_to_file:
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
  • max_concrete_states_per_abstract_state (int [1, infinity]): maximum number of flawed concrete states stored per abstract state
  • max_state_expansions (int [1, infinity]): maximum number of state expansions per flaw search
  • use_general_costs (bool): allow negative costs in cost partitioning
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: yes
  • safe: yes
  • preferred operators: no
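Example: a sketch of optimal search with a time limit on abstraction refinement, assuming the standard astar search algorithm:

    astar(cegar(max_time=900))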
"},{"location":"Evaluator/#causal_graph_heuristic","title":"Causal graph heuristic","text":"
cg(max_cache_size=1000000, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • max_cache_size (int [0, infinity]): maximum number of cached entries per variable (set to 0 to disable cache)
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: no
  • preferred operators: yes
"},{"location":"Evaluator/#ff_heuristic","title":"FF heuristic","text":"
ff(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no
  • consistent: no
  • safe: yes for tasks without axioms
  • preferred operators: yes
"},{"location":"Evaluator/#goal_count_heuristic","title":"Goal count heuristic","text":"
goalcount(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: ignored by design
  • conditional effects: supported
  • axioms: supported

Properties:

  • admissible: no
  • consistent: no
  • safe: yes
  • preferred operators: no
"},{"location":"Evaluator/#hm_heuristic","title":"h^m heuristic","text":"
hm(m=2, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • m (int [1, infinity]): subset size
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: ignored
  • axioms: ignored

Properties:

  • admissible: yes for tasks without conditional effects or axioms
  • consistent: yes for tasks without conditional effects or axioms
  • safe: yes for tasks without conditional effects or axioms
  • preferred operators: no
"},{"location":"Evaluator/#max_heuristic","title":"Max heuristic","text":"
hmax(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: supported
  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: yes for tasks without axioms
  • consistent: yes for tasks without axioms
  • safe: yes for tasks without axioms
  • preferred operators: no
"},{"location":"Evaluator/#landmark_cost_partitioning_heuristic","title":"Landmark cost partitioning heuristic","text":"

Landmark progression is implemented according to the following paper:

  • Clemens B\u00fcchner, Thomas Keller, Salom\u00e9 Eriksson and Malte Helmert. Landmarks Progression in Heuristic Search. In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

    landmark_cost_partitioning(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true, cost_partitioning=uniform, scoring_function=max_heuristic_per_stolen_costs, alm=true, lpsolver=cplex, random_seed=-1)

  • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

  • pref (bool): enable preferred operators (see note below)
  • prog_goal (bool): Use goal progression.
  • prog_gn (bool): Use greedy-necessary ordering progression.
  • prog_r (bool): Use reasonable ordering progression.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • cost_partitioning ({optimal, uniform, opportunistic_uniform, greedy_zero_one, saturated, canonical, pho}): strategy for partitioning operator costs among landmarks
    • optimal: use optimal (LP-based) cost partitioning
    • uniform: partition operator costs uniformly among all landmarks achieved by that operator
    • opportunistic_uniform: like uniform, but order landmarks and reuse costs not consumed by earlier landmarks
    • greedy_zero_one: order landmarks and give each landmark the costs of all the operators it contains
    • saturated: like greedy_zero_one, but reuse costs not consumed by earlier landmarks
    • canonical: canonical heuristic over landmarks
    • pho: post-hoc optimization over landmarks
  • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
    • max_heuristic: order by decreasing heuristic value for the given state
    • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
    • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
  • alm (bool): use action landmarks
  • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
    • cplex: commercial solver by IBM
    • soplex: open source solver by ZIB
  • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See build instructions.

Usage with A*: We recommend adding this heuristic as lazy_evaluator when using it in the A* algorithm. This way, the heuristic is recomputed before a state is expanded, leading to improved estimates that incorporate all knowledge gained from paths that were found after the state was inserted into the open list.

Consistency: The heuristic is consistent along single paths if it is set as lazy_evaluator; i.e., when expanding s, we have h(s) <= h(s') + cost(a) for all successors s' of s reached via an action a. However, newly found paths to s can increase h(s), at which point the above inequality might no longer hold.

Optimal Cost Partitioning: To use cost_partitioning=optimal, you must build the planner with LP support. See build instructions.

Preferred operators: Preferred operators should not be used for optimal planning. See Landmark sum heuristic for more information on using preferred operators; the comments there also apply to this heuristic.
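Example: following the usage note above, a sketch that reuses the heuristic as lazy_evaluator via a let-binding; the landmark factory lm_merged([lm_rhw(), lm_hm(m=1)]) and the astar algorithm are assumed here, and any LandmarkFactory can be substituted:

    let(lmc, landmark_cost_partitioning(lm_merged([lm_rhw(), lm_hm(m=1)])), astar(lmc, lazy_evaluator=lmc))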

Supported language features:

  • action costs: supported
  • conditional_effects: supported if the LandmarkFactory supports them; otherwise not supported
  • axioms: not allowed

Properties:

  • preferred operators: yes (if enabled; see pref option)
  • admissible: yes
  • consistent: no; see document note about consistency
  • safe: yes
"},{"location":"Evaluator/#landmark_sum_heuristic","title":"Landmark sum heuristic","text":"

Landmark progression is implemented according to the following paper:

  • Clemens B\u00fcchner, Thomas Keller, Salom\u00e9 Eriksson and Malte Helmert. Landmarks Progression in Heuristic Search. In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

    landmark_sum(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

  • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

  • pref (bool): enable preferred operators (see note below)
  • prog_goal (bool): Use goal progression.
  • prog_gn (bool): Use greedy-necessary ordering progression.
  • prog_r (bool): Use reasonable ordering progression.
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Note on performance for satisficing planning: The cost of a landmark is based on the cost of the operators that achieve it. For satisficing search this can be counterproductive since it is often better to focus on distance from goal (i.e. length of the plan) rather than cost. In experiments we achieved the best performance using the option 'transform=adapt_costs(one)' to enforce unit costs.

Preferred operators: Computing preferred operators is only enabled when setting pref=true because it has a nontrivial runtime cost. Using the heuristic for preferred operators without setting pref=true has no effect. Our implementation to compute preferred operators based on landmarks differs from the description in the literature (see reference above). The original implementation computes two kinds of preferred operators:

  1. If there is an applicable operator that reaches a landmark, all such operators are preferred.
  2. If no such operators exist, perform an FF-style relaxed exploration towards the nearest landmarks (according to the landmark orderings) and use the preferred operators of this exploration.

Our implementation only considers preferred operators of the first type and does not include the second type. The rationale for this change is that it reduces code complexity and helps more cleanly separate landmark-based and FF-based computations in LAMA-like planner configurations. In our experiments, only considering preferred operators of the first type reduces performance when using the heuristic and its preferred operators in isolation but improves performance when using this heuristic in conjunction with the FF heuristic, as in LAMA-like planner configurations.

Supported language features:

  • action costs: supported
  • conditional_effects: supported if the LandmarkFactory supports them; otherwise ignored
  • axioms: ignored

Properties:

  • preferred operators: yes (if enabled; see pref option)
  • admissible: no
  • consistent: no
  • safe: yes except on tasks with axioms or on tasks with conditional effects when using a LandmarkFactory not supporting them
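Example: a LAMA-style satisficing sketch that combines the unit-cost transformation recommended above with preferred operators; lazy_greedy and the lm_rhw landmark factory are assumed here:

    let(hlm, landmark_sum(lm_rhw(), transform=adapt_costs(cost_type=one), pref=true), lazy_greedy([hlm], preferred=[hlm]))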
"},{"location":"Evaluator/#landmark-cut_heuristic","title":"Landmark-cut heuristic","text":"
lmcut(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates

Supported language features:

  • action costs: supported
  • conditional effects: not supported
  • axioms: not supported

Properties:

  • admissible: yes
  • consistent: no
  • safe: yes
  • preferred operators: no
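Example: the classic optimal configuration, assuming the standard astar search algorithm:

    astar(lmcut())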
"},{"location":"Evaluator/#merge-and-shrink_heuristic","title":"Merge-and-shrink heuristic","text":"

This heuristic implements the algorithm described in the following paper:

  • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

For a more exhaustive description of merge-and-shrink, see the journal paper

  • Silvan Sievers and Malte Helmert. Merge-and-Shrink: A Compositional Theory of Transformations of Factored Transition Systems. Journal of Artificial Intelligence Research 71:781-883. 2021.

The following paper describes how to improve the DFP merge strategy with tie-breaking, and presents two new merge strategies (dyn-MIASM and SCC-DFP):

  • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

Details of the algorithms and the implementation are described in the paper

  • Silvan Sievers. Merge-and-Shrink Heuristics for Classical Planning: Efficient Implementation and Partial Abstractions. In Proceedings of the 11th Annual Symposium on Combinatorial Search (SoCS 2018), pp. 90-98. AAAI Press, 2018.

    merge_and_shrink(verbosity=normal, transform=no_transform(), cache_estimates=true, merge_strategy, shrink_strategy, label_reduction=<none>, prune_unreachable_states=true, prune_irrelevant_states=true, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, main_loop_max_time=infinity)

  • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

    • silent: only the most basic output
    • normal: relevant information to monitor progress
    • verbose: full output
    • debug: like verbose with additional debug output
  • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
  • cache_estimates (bool): cache heuristic estimates
  • merge_strategy (MergeStrategy): See detailed documentation for merge strategies. We currently recommend SCC-DFP, which can be achieved using merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order]))
  • shrink_strategy (ShrinkStrategy): See detailed documentation for shrink strategies. We currently recommend non-greedy shrink_bisimulation, which can be achieved using shrink_strategy=shrink_bisimulation(greedy=false)
  • label_reduction (LabelReduction): See detailed documentation for labels. There is currently only one 'option' to use label_reduction, which is label_reduction=exact. Also note the interaction with shrink strategies.
  • prune_unreachable_states (bool): If true, prune abstract states unreachable from the initial state.
  • prune_irrelevant_states (bool): If true, prune abstract states from which no goal state can be reached.
  • max_states (int [-1, infinity]): maximum transition system size allowed at any time point.
  • max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.
  • threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.
  • main_loop_max_time (double [0.0, infinity]): A limit in seconds on the runtime of the main loop of the algorithm. If the limit is exceeded, the algorithm terminates, potentially returning a factored transition system with several factors. Also note that the time limit is only checked between transformations of the main loop, but not during, so it can be exceeded if a transformation is runtime-intense.
  • Note: Conditional effects are supported directly. Note, however, that for tasks that are not factored (in the sense of the JACM 2014 merge-and-shrink paper), the atomic transition systems on which merge-and-shrink heuristics are based are nondeterministic, which can lead to poor heuristics even when only perfect shrinking is performed.

    Note: When pruning unreachable states, admissibility and consistency are only guaranteed for reachable states and transitions between reachable states. While this does not impact regular A* search, which will never encounter any unreachable state, it impacts techniques like symmetry-based pruning: a reachable state which is mapped to an unreachable symmetric state (which hence is pruned) would falsely be considered a dead-end and also be pruned, thus violating optimality of the search.

    Note: When using a time limit on the main loop of the merge-and-shrink algorithm, the heuristic will compute the maximum over all heuristics induced by the remaining factors if terminating the merge-and-shrink algorithm early. Exception: if there is an unsolvable factor, it will be used as the exclusive heuristic since the problem is unsolvable.

    Note: A currently recommended good configuration uses bisimulation based shrinking, the merge strategy SCC-DFP, and the appropriate label reduction setting (max_states has been altered to be between 10k and 200k in the literature). As merge-and-shrink heuristics can be expensive to compute, we also recommend limiting time by setting main_loop_max_time to a finite value. A sensible value would be half of the time allocated for the planner.

    merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false),merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance(),dfp(),total_order()])),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50k,threshold_before_merge=1)\n

    Supported language features:

    • action costs: supported
    • conditional effects: supported (but see note)
    • axioms: not supported

    Properties:

    • admissible: yes (but see note)
    • consistent: yes (but see note)
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#operator-counting_heuristic","title":"Operator-counting heuristic","text":"

    An operator-counting heuristic computes a linear program (LP) in each state. The LP has one variable Count_o for each operator o that represents how often the operator is used in a plan. Operator-counting constraints are linear constraints over these variables that are guaranteed to have a solution with Count_o = occurrences(o, pi) for every plan pi. Minimizing the total cost of operators subject to some operator-counting constraints is an admissible heuristic. For details, see

    • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

      operatorcounting(constraint_generators, use_integer_operator_counts=false, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

    • constraint_generators (list of ConstraintGenerator): methods that generate constraints over operator-counting variables

    • use_integer_operator_counts (bool): restrict operator-counting variables to integer values. Computing the heuristic with integer variables can produce higher values but requires solving a MIP instead of an LP which is generally more computationally expensive. Turning this option on can thus drastically increase the runtime.
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)

    Properties:

    • admissible: yes
    • consistent: yes, if all constraint generators represent consistent heuristics
    • safe: yes
    • preferred operators: no
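    Example: a sketch that combines several of the constraint generators documented above in one LP heuristic, assuming the standard astar search algorithm:

        astar(operatorcounting([lmcut_constraints(), state_equation_constraints(), pho_constraints(patterns=systematic(2))]))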
    "},{"location":"Evaluator/#basic_evaluators","title":"Basic Evaluators","text":""},{"location":"Evaluator/#constant_evaluator","title":"Constant evaluator","text":"

    Returns a constant value.

    const(value=1, verbosity=normal)\n
    • value (int [0, infinity]): the constant value
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#g-value_evaluator","title":"g-value evaluator","text":"

    Returns the g-value (path cost) of the search node.

    g(verbosity=normal)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#max_evaluator","title":"Max evaluator","text":"

    Calculates the maximum of the sub-evaluators.

    max(evals, verbosity=normal)\n
    • evals (list of Evaluator): at least one evaluator
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#preference_evaluator","title":"Preference evaluator","text":"

    Returns 0 if preferred is true and 1 otherwise.

    pref(verbosity=normal)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#sum_evaluator","title":"Sum evaluator","text":"

    Calculates the sum of the sub-evaluators.

    sum(evals, verbosity=normal)\n
    • evals (list of Evaluator): at least one evaluator
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#weighted_evaluator","title":"Weighted evaluator","text":"

    Multiplies the value of the evaluator with the given weight.

    weight(eval, weight, verbosity=normal)\n
    • eval (Evaluator): evaluator
    • weight (int): weight
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"Evaluator/#cost_partitioning_heuristics","title":"Cost Partitioning Heuristics","text":""},{"location":"Evaluator/#canonical_heuristic_over_abstractions","title":"Canonical heuristic over abstractions","text":"

    Shuffle abstractions randomly.

    canonical_heuristic(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • abstractions (list of AbstractionGenerator): abstraction generators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#greedy_zero-one_cost_partitioning","title":"Greedy zero-one cost partitioning","text":"
    gzocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)\n
    • abstractions (list of AbstractionGenerator): abstraction generators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • orders (OrderGenerator): order generator
    • max_orders (int [0, infinity]): maximum number of orders
    • max_size (int [0, infinity]): maximum heuristic size in KiB
    • max_time (double [0, infinity]): maximum time in seconds for finding orders
    • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
    • samples (int [1, infinity]): number of samples for diversification
    • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
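    Example: a sketch that combines projections and Cartesian abstractions under greedy zero-one cost partitioning, assuming the standard astar search algorithm:

        astar(gzocp(abstractions=[projections(systematic(2)), cartesian()]))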
    "},{"location":"Evaluator/#maximum_over_abstractions","title":"Maximum over abstractions","text":"

    Maximize over a set of abstraction heuristics.

    maximize(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • abstractions (list of AbstractionGenerator): abstraction generators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#optimal_cost_partitioning_heuristic","title":"Optimal cost partitioning heuristic","text":"

    Compute an optimal cost partitioning for each evaluated state.

    ocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, lpsolver=cplex, allow_negative_costs=true)\n
    • abstractions (list of AbstractionGenerator): abstraction generators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • allow_negative_costs (bool): use general instead of non-negative cost partitioning

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.
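
    As a minimal sketch, again assuming the astar() search algorithm, an optimal cost partitioning heuristic using the open source LP solver could be specified as:

      astar(ocp([projections(systematic(2)), cartesian()], lpsolver=soplex))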

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#post-hoc_optimization_heuristic","title":"Post-hoc optimization heuristic","text":"

    Compute the maximum over multiple PhO heuristics precomputed offline.

    pho(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturated=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, lpsolver=cplex)\n
    • abstractions (list of AbstractionGenerator): abstraction generators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • saturated (bool): saturate costs
    • orders (OrderGenerator): order generator
    • max_orders (int [0, infinity]): maximum number of orders
    • max_size (int [0, infinity]): maximum heuristic size in KiB
    • max_time (double [0, infinity]): maximum time in seconds for finding orders
    • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
    • samples (int [1, infinity]): number of samples for diversification
    • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#saturated_cost_partitioning","title":"Saturated cost partitioning","text":"

    Compute the maximum over multiple saturated cost partitioning heuristics using different orders. For details, see

    • Jendrik Seipp, Thomas Keller and Malte Helmert. Saturated Cost Partitioning for Optimal Classical Planning. Journal of Artificial Intelligence Research 67:129-167. 2020.

      scp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)

    • abstractions (list of AbstractionGenerator): abstraction generators

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • saturator ({all, perim, perimstar}): function that computes saturated cost functions
      • all: preserve estimates of all states
      • perim: preserve estimates of states in perimeter around goal
      • perimstar: compute 'perim' first and then 'all' with remaining costs
    • orders (OrderGenerator): order generator
    • max_orders (int [0, infinity]): maximum number of orders
    • max_size (int [0, infinity]): maximum heuristic size in KiB
    • max_time (double [0, infinity]): maximum time in seconds for finding orders
    • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
    • samples (int [1, infinity]): number of samples for diversification
    • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    Difference to cegar(): The cegar() plugin computes a single saturated cost partitioning over Cartesian abstraction heuristics. In contrast, saturated_cost_partitioning() supports computing the maximum over multiple saturated cost partitionings using different heuristic orders, and it supports both Cartesian abstraction heuristics and pattern database heuristics. While cegar() interleaves abstraction computation with cost partitioning, saturated_cost_partitioning() computes all abstractions using the original costs.
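
    For example, assuming the astar() search algorithm, a saturated cost partitioning heuristic over systematic patterns and Cartesian abstractions with diversified orders could be written as:

      astar(scp([projections(systematic(2)), cartesian()], max_time=100, diversify=true))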

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#online_saturated_cost_partitioning","title":"Online saturated cost partitioning","text":"

    Compute the maximum over multiple saturated cost partitioning heuristics diversified during the search. For details, see

    • Jendrik Seipp. Online Saturated Cost Partitioning for Classical Planning. In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 317-321. AAAI Press, 2021.

      scp_online(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_size=infinity, max_time=200, interval=10K, debug=false, random_seed=-1)

    • abstractions (list of AbstractionGenerator): abstraction generators

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • saturator ({all, perim, perimstar}): function that computes saturated cost functions
      • all: preserve estimates of all states
      • perim: preserve estimates of states in perimeter around goal
      • perimstar: compute 'perim' first and then 'all' with remaining costs
    • orders (OrderGenerator): order generator
    • max_size (int [0, infinity]): maximum (estimated) heuristic size in KiB
    • max_time (double [0, infinity]): maximum time in seconds for finding orders
    • interval (int [1, infinity]): select every i-th evaluated state for online diversification
    • debug (bool): print debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
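
    A possible configuration, again assuming the astar() search algorithm, that diversifies orders online on every 10000th evaluated state is:

      astar(scp_online([projections(systematic(2)), cartesian()], saturator=perimstar, interval=10000))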

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: no
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#opportunistic_uniform_cost_partitioning","title":"(Opportunistic) uniform cost partitioning","text":"
    Compute the maximum over multiple (opportunistic) uniform cost partitioning heuristics. For details, see

    • Jendrik Seipp, Thomas Keller and Malte Helmert. A Comparison of Cost Partitioning Algorithms for Optimal Classical Planning. In Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 259-268. AAAI Press, 2017.

      ucp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, opportunistic=false, debug=false)

    • abstractions (list of AbstractionGenerator): abstraction generators

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • orders (OrderGenerator): order generator
    • max_orders (int [0, infinity]): maximum number of orders
    • max_size (int [0, infinity]): maximum heuristic size in KiB
    • max_time (double [0, infinity]): maximum time in seconds for finding orders
    • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
    • samples (int [1, infinity]): number of samples for diversification
    • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • opportunistic (bool): recalculate uniform cost partitioning after each considered abstraction
    • debug (bool): print debugging messages
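
    For instance, assuming the astar() search algorithm, an opportunistic uniform cost partitioning heuristic can be requested with:

      astar(ucp([projections(systematic(2)), cartesian()], opportunistic=true))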

    Supported language features:

    • action costs: supported
    • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
    • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#pattern_database_heuristics","title":"Pattern Database Heuristics","text":""},{"location":"Evaluator/#canonical_pdb","title":"Canonical PDB","text":"

    The canonical pattern database heuristic is calculated as follows. For a given pattern collection C, the value of the canonical heuristic function for a state is the maximum, over all maximal additive subsets S of C, of the sum of the heuristic values of all patterns in S for that state.
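
    As a small illustration with hypothetical values: if the maximal additive subsets of C are {P1, P2} and {P3}, and for a state s the pattern database values are h_P1(s) = 3, h_P2(s) = 4 and h_P3(s) = 6, then the canonical heuristic value is max(3 + 4, 6) = 7.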

    cpdbs(patterns=systematic(1), max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • patterns (PatternCollectionGenerator): pattern generation method
    • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#ipdb","title":"iPDB","text":"

    This approach combines the canonical PDB heuristic with patterns computed by the hill climbing algorithm for pattern generation. It is a short-hand for the command-line option cpdbs(hillclimbing()). Both the heuristic and the pattern generation algorithm are described in the following paper:

    • Patrik Haslum, Adi Botea, Malte Helmert, Blai Bonet and Sven Koenig. Domain-Independent Construction of Pattern Database Heuristics for Cost-Optimal Planning. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI 2007), pp. 1007-1012. AAAI Press, 2007.

    For implementation notes, see:

    • Silvan Sievers, Manuela Ortlieb and Malte Helmert. Efficient Implementation of Pattern Database Heuristics for Classical Planning. In Proceedings of the Fifth Annual Symposium on Combinatorial Search (SoCS 2012), pp. 105-111. AAAI Press, 2012.

    See also Canonical PDB and Hill climbing for more details.

    ipdb(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, max_generated_patterns=infinity, random_seed=-1, max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • pdb_max_size (int [1, infinity]): maximal number of states per pattern database
    • collection_max_size (int [1, infinity]): maximal number of states in the pattern collection
    • num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection
    • min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection
    • max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.
    • max_generated_patterns (int [0, infinity]): maximum number of generated patterns
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

    Note: This pattern generation method generates patterns optimized for use with the canonical pattern database heuristic.
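
    For example, assuming the astar() search algorithm, hill climbing can be limited to 15 minutes while keeping the remaining defaults:

      astar(ipdb(max_time=900))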

    "},{"location":"Evaluator/#implementation_notes","title":"Implementation Notes","text":"

    The following will very briefly describe the algorithm and explain the differences between the original implementation from 2007 and the new one in Fast Downward.

    The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

    The algorithm is basically a local search (hill climbing) which searches the \"pattern neighbourhood\" (starting initially with a pattern for each goal variable) for improving the pattern collection. This is done as described in the section \"pattern construction as search\" in the paper, except for the corrected search neighbourhood discussed below. For evaluating the neighbourhood, the \"counting approximation\" introduced in the paper was implemented. An important difference, however, is that this implementation computes the full pattern database for each candidate pattern rather than using A* search to compute the heuristic values only for the sample states of each pattern.

    The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but estimates the number of solution steps by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to take non-unit cost problems into account. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

    The section \"avoiding redundant evaluations\" describes how the search neighbourhood of patterns can be restricted to variables that are relevant to the variables already included in the pattern by analyzing causal graphs. There is a mistake in the paper that leads to some relevant neighbouring patterns being ignored; see the errata for details. This mistake has been addressed in this implementation. The second approach described in the paper (statistical confidence interval) is not applicable to this implementation, as it does not use A* search but constructs the entire pattern database for every candidate pattern anyway. The search ends if there is no more improvement (or the improvement is smaller than the minimal improvement, which can be set as an option); however, there is no limit on the number of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#pattern_database_heuristic","title":"Pattern database heuristic","text":"

    TODO

    pdb(pattern=greedy(), verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • pattern (PatternGenerator): pattern generation method
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#zero-one_pdb","title":"Zero-One PDB","text":"

    The zero/one pattern database heuristic is simply the sum of the heuristic values of all patterns in the pattern collection. In contrast to the canonical pattern database heuristic, there is no need to check for additive subsets, because the additivity of the patterns is guaranteed by action cost partitioning. This heuristic uses the simplest form of action cost partitioning: if an operator affects more than one pattern in the collection, its cost is taken into account entirely for one pattern (the first one it affects) and set to zero for all other affected patterns.
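
    As a small illustration with hypothetical values: if an operator of cost 5 affects the patterns P1 and P2, its full cost of 5 is used when computing the pattern database for P1 and a cost of 0 is used for P2, so that summing h_P1 and h_P2 remains admissible.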

    zopdbs(patterns=systematic(1), verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • patterns (PatternCollectionGenerator): pattern generation method
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#potential_heuristics","title":"Potential Heuristics","text":""},{"location":"Evaluator/#potential_heuristic_optimized_for_all_states","title":"Potential heuristic optimized for all states","text":"

    The algorithm is based on

    • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

      all_states_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

    • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#diverse_potential_heuristics","title":"Diverse potential heuristics","text":"

    The algorithm is based on

    • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

      diverse_potentials(num_samples=1000, max_num_heuristics=infinity, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

    • num_samples (int [0, infinity]): Number of states to sample

    • max_num_heuristics (int [0, infinity]): maximum number of potential heuristics
    • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#potential_heuristic_optimized_for_initial_state","title":"Potential heuristic optimized for initial state","text":"

    The algorithm is based on

    • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

      initial_state_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

    • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#sample-based_potential_heuristics","title":"Sample-based potential heuristics","text":"

    Maximum over multiple potential heuristics optimized for samples. The algorithm is based on

    • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

      sample_based_potentials(num_heuristics=1, num_samples=1000, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

    • num_heuristics (int [0, infinity]): number of potential heuristics

    • num_samples (int [0, infinity]): Number of states to sample
    • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"LabelReduction/","title":"LabelReduction","text":"

    This page describes the single 'option' currently available for label reduction.

    "},{"location":"LabelReduction/#exact_generalized_label_reduction","title":"Exact generalized label reduction","text":"

    This class implements the exact generalized label reduction described in the following paper:

    • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

      exact(before_shrinking, before_merging, method=all_transition_systems_with_fixpoint, system_order=random, random_seed=-1)

    • before_shrinking (bool): apply label reduction before shrinking

    • before_merging (bool): apply label reduction before merging
    • method ({two_transition_systems, all_transition_systems, all_transition_systems_with_fixpoint}): Label reduction method. See the AAAI14 paper by Sievers et al. for an explanation of the default label reduction method and the 'combinable relation'. Also note that you must set at least one of the options before_shrinking or before_merging in order to use the chosen label reduction configuration.
      • two_transition_systems: compute the 'combinable relation' only for the two transition systems being merged next
      • all_transition_systems: compute the 'combinable relation' for labels once for every transition system and reduce labels
      • all_transition_systems_with_fixpoint: keep computing the 'combinable relation' for labels iteratively for all transition systems until no more labels can be reduced
    • system_order ({regular, reverse, random}): Order of transition systems for the label reduction methods that iterate over the set of all transition systems. Only useful for the choices all_transition_systems and all_transition_systems_with_fixpoint for the option label_reduction_method.
      • regular: transition systems are considered in the order given in the planner input if atomic and in the order of their creation if composite.
      • reverse: inverse of regular
      • random: random order
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"LandmarkFactory/","title":"LandmarkFactory","text":"

    A landmark factory specification is either a newly created instance or a landmark factory that has been defined previously. This page describes how one can specify a new landmark factory instance. For re-using landmark factories, see OptionSyntax#Landmark_Predefinitions.

    This feature type can be bound to variables using let(variable_name, variable_definition, expression) where expression can use variable_name. Predefinitions using --evaluator, --heuristic, and --landmarks are automatically transformed into let-expressions but are deprecated.

    "},{"location":"LandmarkFactory/#exhaustive_landmarks","title":"Exhaustive Landmarks","text":"

    Exhaustively checks for each fact whether it is a landmark. This check is done using relaxed planning.

    lm_exhaust(verbosity=normal, only_causal_landmarks=false)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • only_causal_landmarks (bool): keep only causal landmarks

    Supported language features:

    • conditional_effects: ignored, i.e. not supported
    "},{"location":"LandmarkFactory/#hm_landmarks","title":"h^m Landmarks","text":"

    The landmark generation method introduced by Keyder, Richter & Helmert (ECAI 2010).

    lm_hm(m=2, conjunctive_landmarks=true, verbosity=normal, use_orders=true)\n
    • m (int): subset size (if unsure, use the default of 2)
    • conjunctive_landmarks (bool): keep conjunctive landmarks
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • use_orders (bool): use orders between landmarks

    Supported language features:

    • conditional_effects: ignored, i.e. not supported
    "},{"location":"LandmarkFactory/#merged_landmarks","title":"Merged Landmarks","text":"

    Merges the landmarks and orderings from the given landmark factories (lm_factories).

    lm_merged(lm_factories, verbosity=normal)\n
    • lm_factories (list of LandmarkFactory):
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Precedence: Fact landmarks take precedence over disjunctive landmarks, orderings take precedence in the usual manner (gn > nat > reas > o_reas).

    Note: Does not currently support conjunctive landmarks.
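
    As a sketch, merging the landmarks and orderings of the RHW and h^m factories documented on this page could look as follows:

      lm_merged([lm_rhw(), lm_hm(m=1)])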

    Supported language features:

    • conditional_effects: supported if all components support them
    "},{"location":"LandmarkFactory/#hps_orders","title":"HPS Orders","text":"

    Adds reasonable orders described in the following paper

    • Jörg Hoffmann, Julie Porteous and Laura Sebastia. Ordered Landmarks in Planning. Journal of Artificial Intelligence Research 22:215-278. 2004.

      lm_reasonable_orders_hps(lm_factory, verbosity=normal)

    • lm_factory (LandmarkFactory):

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Obedient-reasonable orders: Hoffmann et al. (2004) suggest obedient-reasonable orders in addition to reasonable orders. Obedient-reasonable orders were later also used by the LAMA planner (Richter and Westphal, 2010). They are \"reasonable orders\" under the assumption that all (non-obedient) reasonable orders are actually \"natural\", i.e., every plan obeys the reasonable orders. We observed experimentally that obedient-reasonable orders have minimal effect on the performance of LAMA (Büchner et al., 2023) and decided to remove them in issue1089.
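
    For example, reasonable orders can be added on top of the RHW landmarks documented on this page with:

      lm_reasonable_orders_hps(lm_rhw())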

    Supported language features:

    • conditional_effects: supported if subcomponent supports them
    "},{"location":"LandmarkFactory/#rhw_landmarks","title":"RHW Landmarks","text":"

    The landmark generation method introduced by Richter, Helmert and Westphal (AAAI 2008).

    lm_rhw(disjunctive_landmarks=true, verbosity=normal, use_orders=true, only_causal_landmarks=false)\n
    • disjunctive_landmarks (bool): keep disjunctive landmarks
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • use_orders (bool): use orders between landmarks
    • only_causal_landmarks (bool): keep only causal landmarks

    Supported language features:

    • conditional_effects: supported
    "},{"location":"LandmarkFactory/#zhugivan_landmarks","title":"Zhu/Givan Landmarks","text":"

    The landmark generation method introduced by Zhu & Givan (ICAPS 2003 Doctoral Consortium).

    lm_zg(verbosity=normal, use_orders=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • use_orders (bool): use orders between landmarks

    Supported language features:

    • conditional_effects: We think they are supported, but we are not 100% sure.
    "},{"location":"MergeScoringFunction/","title":"MergeScoringFunction","text":"

    This page describes various merge scoring functions. A scoring function, given a list of merge candidates and a factored transition system, computes a score for each candidate based on this information and potentially some chosen options. Minimal scores are considered best. Scoring functions are currently only used within the score based filtering merge selector.

    "},{"location":"MergeScoringFunction/#dfp_scoring","title":"DFP scoring","text":"

    This scoring function computes the 'DFP' score as described in the paper \"Directed model checking with distance-preserving abstractions\" by Draeger, Finkbeiner and Podelski (SPIN 2006), adapted to planning in the following paper:

    • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

      dfp()

    Note: To obtain the configurations called DFP-B-50K described in the paper, use the following configuration of the merge-and-shrink heuristic and adapt the tie-breaking criteria of total_order as desired:

    merge_and_shrink(merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order(atomic_ts_order=reverse_level,product_ts_order=new_to_old,atomic_before_product=true)])),shrink_strategy=shrink_bisimulation(greedy=false),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50000,threshold_before_merge=1)\n
    "},{"location":"MergeScoringFunction/#goal_relevance_scoring","title":"Goal relevance scoring","text":"

    This scoring function assigns a merge candidate a value of 0 iff at least one of the two transition systems of the merge candidate is goal relevant in the sense that there is an abstract non-goal state. All other candidates get a score of positive infinity.

    goal_relevance()\n
    "},{"location":"MergeScoringFunction/#miasm","title":"MIASM","text":"

    This scoring function favors merging transition systems such that in their product, there are many dead states, which can then be pruned without sacrificing information. In particular, the score it assigns to a product is the ratio of alive states to the total number of states. To compute this score, this class computes the product of all pairs of transition systems, potentially copying and shrinking the transition systems beforehand if their product would otherwise exceed the specified size limits. A stateless merge strategy using this scoring function is called dyn-MIASM (nowadays also called sbMIASM for score-based MIASM) and is described in the following paper:

    • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

      sf_miasm(shrink_strategy, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, use_caching=true)

    • shrink_strategy (ShrinkStrategy): We recommend setting this to match the shrink strategy configuration given to merge_and_shrink, see note below.

    • max_states (int [-1, infinity]): maximum transition system size allowed at any time point.
    • max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.
    • threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.
    • use_caching (bool): Cache scores for merge candidates. IMPORTANT! This only works under the assumption that the merge-and-shrink algorithm only uses exact label reduction and does not (non-exactly) shrink factors other than those being merged in the current iteration. In this setting, the MIASM score of a merge candidate is constant over merge-and-shrink iterations. If caching is enabled, only the scores for the new merge candidates need to be computed.

    Note: To obtain the configurations called dyn-MIASM described in the paper, use the following configuration of the merge-and-shrink heuristic and adapt the tie-breaking criteria of total_order as desired:

    merge_and_shrink(merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[sf_miasm(shrink_strategy=shrink_bisimulation(greedy=false),max_states=50000,threshold_before_merge=1),total_order(atomic_ts_order=reverse_level,product_ts_order=new_to_old,atomic_before_product=true)])),shrink_strategy=shrink_bisimulation(greedy=false),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50000,threshold_before_merge=1)\n

    Note: Unless you know what you are doing, we recommend using the same options related to shrinking for sf_miasm as for merge_and_shrink, i.e. the options shrink_strategy, max_states, and threshold_before_merge should be set identically. Furthermore, as this scoring function maximizes the amount of possible pruning, merge-and-shrink should be configured to use full pruning, i.e. prune_unreachable_states=true and prune_irrelevant_states=true (the default).

    "},{"location":"MergeScoringFunction/#single_random","title":"Single random","text":"

    This scoring function assigns exactly one merge candidate a score of 0, chosen randomly, and infinity to all others.

    single_random(random_seed=-1)\n
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"MergeScoringFunction/#total_order","title":"Total order","text":"

    This scoring function computes a total order on the merge candidates, based on the specified options. The score for each merge candidate corresponds to its position in the order. This scoring function is mainly intended for tie-breaking, and has been introduced in the following paper:

    • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

    Furthermore, using the atomic_ts_order option, this scoring function, if used alone in a score based filtering merge selector, can be used to emulate the corresponding (precomputed) linear merge strategies reverse level/level (independently of the other options).

    total_order(atomic_ts_order=reverse_level, product_ts_order=new_to_old, atomic_before_product=false, random_seed=-1)\n
    • atomic_ts_order ({reverse_level, level, random}): The order in which atomic transition systems are considered when considering pairs of potential merges.
      • reverse_level: the variable order of Fast Downward
      • level: opposite of reverse_level
      • random: a randomized order
    • product_ts_order ({old_to_new, new_to_old, random}): The order in which product transition systems are considered when considering pairs of potential merges.
      • old_to_new: consider composite transition systems from oldest to most recent
      • new_to_old: opposite of old_to_new
      • random: a randomized order
    • atomic_before_product (bool): Consider atomic transition systems before composite ones iff true.
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"MergeSelector/","title":"MergeSelector","text":"

    This page describes the available merge selectors. They are used to compute the next merge purely based on the state of the given factored transition system. They are used in the merge strategy of type 'stateless', but they can also easily be used in different 'combined' merge strategies.

    "},{"location":"MergeSelector/#score_based_filtering_merge_selector","title":"Score based filtering merge selector","text":"

    This merge selector has a list of scoring functions, which are used iteratively to compute scores for merge candidates, keeping the best ones (with minimal scores) until only one is left.

    score_based_filtering(scoring_functions)\n
    • scoring_functions (list of MergeScoringFunction): The list of scoring functions used to compute scores for candidates.
    "},{"location":"MergeStrategy/","title":"MergeStrategy","text":"

    This page describes the various merge strategies supported by the planner.

    "},{"location":"MergeStrategy/#precomputed_merge_strategy","title":"Precomputed merge strategy","text":"

    This merge strategy has a precomputed merge tree. Note that this merge strategy does not take into account the current state of the factored transition system. This also means that this merge strategy relies on the factored transition system being synchronized with this merge tree, i.e. all merges are performed exactly as given by the merge tree.

    merge_precomputed(merge_tree, verbosity=normal)\n
    • merge_tree (MergeTree): The precomputed merge tree.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note: An example of a precomputed merge strategy is a linear merge strategy, which can be obtained using:

    merge_strategy=merge_precomputed(merge_tree=linear(<variable_order>))\n
    "},{"location":"MergeStrategy/#merge_strategy_sscs","title":"Merge strategy SCCs","text":"

    This merge strategy implements the algorithm described in the paper

    • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

    In a nutshell, it computes the maximal SCCs of the causal graph, obtaining a partitioning of the task's variables. Every such partition is then merged individually, using the specified fallback merge strategy, considering the SCCs in a configurable order. Afterwards, all resulting composite abstractions are merged to form the final abstraction, again using the specified fallback merge strategy and the configurable order of the SCCs.
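
    As an illustration, using the score based filtering merge selector (see the MergeSelector documentation) as the stateless fallback strategy:

      merge_sccs(order_of_sccs=topological, merge_selector=score_based_filtering(scoring_functions=[goal_relevance, dfp, total_order()]))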

    merge_sccs(order_of_sccs=topological, merge_tree=<none>, merge_selector=<none>, verbosity=normal)\n
    • order_of_sccs ({topological, reverse_topological, decreasing, increasing}): how the SCCs should be ordered
      • topological: according to the topological ordering of the directed graph where each obtained SCC is a 'supervertex'
      • reverse_topological: according to the reverse topological ordering of the directed graph where each obtained SCC is a 'supervertex'
      • decreasing: biggest SCCs first, using 'topological' as tie-breaker
      • increasing: smallest SCCs first, using 'topological' as tie-breaker
    • merge_tree (MergeTree): the fallback merge strategy to use if a precomputed strategy should be used.
    • merge_selector (MergeSelector): the fallback merge strategy to use if a stateless strategy should be used.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"MergeStrategy/#stateless_merge_strategy","title":"Stateless merge strategy","text":"

    This merge strategy has a merge selector, which computes the next merge only depending on the current state of the factored transition system, not requiring any additional information.

    merge_stateless(merge_selector, verbosity=normal)\n
    • merge_selector (MergeSelector): The merge selector to be used.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note: Examples include the DFP merge strategy, which can be obtained using:

    merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order(<order_option>)]))\n

    and the (dynamic/score-based) MIASM strategy, which can be obtained using:

    merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[sf_miasm(<shrinking_options>),total_order(<order_option>)]))\n
    "},{"location":"MergeTree/","title":"MergeTree","text":"

    This page describes the available merge trees that can be used to precompute a merge strategy, either for the entire task or a given subset of transition systems of a given factored transition system. Merge trees are typically used in the merge strategy of type 'precomputed', but they can also be used as fallback merge strategies in 'combined' merge strategies.

    "},{"location":"MergeTree/#linear_merge_trees","title":"Linear merge trees","text":"

    These merge trees implement several linear merge orders, which are described in the paper:

    • Malte Helmert, Patrik Haslum and Joerg Hoffmann. Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS 2007), pp. 176-183. AAAI Press, 2007.

      linear(random_seed=-1, update_option=use_random, variable_order=cg_goal_level)

    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    • update_option ({use_first, use_second, use_random}): When the merge tree is used within another merge strategy, how it should be updated when a merge different from the one prescribed by the tree is performed.
      • use_first: the node representing the index that would have been merged earlier survives
      • use_second: the node representing the index that would have been merged later survives
      • use_random: a random node (of the above two) survives
    • variable_order ({cg_goal_level, cg_goal_random, goal_cg_level, random, level, reverse_level}): the order in which atomic transition systems are merged
      • cg_goal_level: variables are prioritized first if they have an arc to a previously added variable, second if their goal value is defined and third according to their level in the causal graph
      • cg_goal_random: variables are prioritized first if they have an arc to a previously added variable, second if their goal value is defined and third randomly
      • goal_cg_level: variables are prioritized first if their goal value is defined, second if they have an arc to a previously added variable, and third according to their level in the causal graph
      • random: variables are ordered randomly
      • level: variables are ordered according to their level in the causal graph
      • reverse_level: variables are ordered reverse to their level in the causal graph
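Example: a minimal sketch of a non-default linear order plugged into a precomputed merge strategy; the merge_precomputed strategy name is an assumption and not documented in this section:

merge_strategy=merge_precomputed(merge_tree=linear(variable_order=reverse_level))\n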
    "},{"location":"OpenList/","title":"OpenList","text":""},{"location":"OpenList/#alternation_open_list","title":"Alternation open list","text":"

Alternates between several open lists.

    alt(sublists, boost=0)\n
    • sublists (list of OpenList): open lists between which this one alternates
    • boost (int): boost value for contained open lists that are restricted to preferred successors
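Example: a minimal sketch of a common satisficing setup that alternates between a regular open list and one restricted to preferred successors, both ordered by the same evaluator; the ff() evaluator is an assumption and not documented in this section (in a full planner call, the evaluator would typically be predefined once and reused):

alt([single(ff()), single(ff(), pref_only=true)], boost=1000)\n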
    "},{"location":"OpenList/#epsilon-greedy_open_list","title":"Epsilon-greedy open list","text":"

Chooses an entry uniformly at random with probability 'epsilon'; otherwise, it returns the minimum entry. The algorithm is based on

    • Richard Valenzano, Nathan R. Sturtevant, Jonathan Schaeffer and Fan Xie. A Comparison of Knowledge-Based GBFS Enhancements and Knowledge-Free Exploration. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 375-379. AAAI Press, 2014.

      epsilon_greedy(eval, pref_only=false, epsilon=0.2, random_seed=-1)

    • eval (Evaluator): evaluator

    • pref_only (bool): insert only nodes generated by preferred operators
    • epsilon (double [0.0, 1.0]): probability for choosing the next entry randomly
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
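Example (hedged; the ff() evaluator is an assumption, not documented in this section): the following instantiation picks a random entry 30% of the time and otherwise returns the minimum entry, using a fixed seed for reproducibility:

epsilon_greedy(ff(), epsilon=0.3, random_seed=42)\n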
    "},{"location":"OpenList/#pareto_open_list","title":"Pareto open list","text":"

    Selects one of the Pareto-optimal (regarding the sub-evaluators) entries for removal.

    pareto(evals, pref_only=false, state_uniform_selection=false, random_seed=-1)\n
    • evals (list of Evaluator): evaluators
    • pref_only (bool): insert only nodes generated by preferred operators
    • state_uniform_selection (bool): When removing an entry, we select a non-dominated bucket and return its oldest entry. If this option is false, we select uniformly from the non-dominated buckets; if the option is true, we weight the buckets with the number of entries.
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
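Example: a hedged sketch that keeps entries which are Pareto-optimal with respect to path cost and a heuristic estimate; the g() and ff() evaluators are assumptions not documented in this section:

pareto(evals=[g(), ff()], state_uniform_selection=true)\n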
    "},{"location":"OpenList/#best-first_open_list","title":"Best-first open list","text":"

    Open list that uses a single evaluator and FIFO tiebreaking.

    single(eval, pref_only=false)\n
    • eval (Evaluator): evaluator
    • pref_only (bool): insert only nodes generated by preferred operators

    Implementation Notes: Elements with the same evaluator value are stored in double-ended queues, called \"buckets\". The open list stores a map from evaluator values to buckets. Pushing and popping from a bucket runs in constant time. Therefore, inserting and removing an entry from the open list takes time O(log(n)), where n is the number of buckets.

    "},{"location":"OpenList/#tie-breaking_open_list","title":"Tie-breaking open list","text":"
    tiebreaking(evals, pref_only=false, unsafe_pruning=true)\n
    • evals (list of Evaluator): evaluators
    • pref_only (bool): insert only nodes generated by preferred operators
• unsafe_pruning (bool): allow unsafe pruning when the main evaluator regards a state as a dead end
    "},{"location":"OpenList/#type-based_open_list","title":"Type-based open list","text":"

Uses multiple evaluators to assign entries to buckets. All entries in a bucket have the same evaluator values. When retrieving an entry, a bucket is chosen uniformly at random and one of the contained entries is selected uniformly at random. The algorithm is based on

• Fan Xie, Martin Mueller, Robert Holte and Tatsuya Imai. Type-Based Exploration with Multiple Search Queues for Satisficing Planning. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2395-2401. AAAI Press, 2014.

      type_based(evaluators, random_seed=-1)

    • evaluators (list of Evaluator): Evaluators used to determine the bucket for each entry.

    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
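Example: in the cited paper, the buckets are typically formed from the g-value and a heuristic estimate. A hedged sketch (the g() and ff() evaluators are assumptions not documented in this section):

type_based([g(), ff()], random_seed=0)\n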
    "},{"location":"OrderGenerator/","title":"OrderGenerator","text":"

    Order abstractions for saturated cost partitioning.

    "},{"location":"OrderGenerator/#dynamic_greedy_orders","title":"Dynamic greedy orders","text":"

    Order abstractions greedily by a given scoring function, dynamically recomputing the next best abstraction after each ordering step.

    dynamic_greedy_orders(scoring_function=max_heuristic_per_stolen_costs, random_seed=-1)\n
    • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
      • max_heuristic: order by decreasing heuristic value for the given state
      • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
      • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"OrderGenerator/#cost_partitioning_heuristics","title":"Cost Partitioning Heuristics","text":""},{"location":"OrderGenerator/#greedy_orders","title":"Greedy orders","text":"

    Order abstractions greedily by a given scoring function.

    greedy_orders(scoring_function=max_heuristic_per_stolen_costs, random_seed=-1)\n
    • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
      • max_heuristic: order by decreasing heuristic value for the given state
      • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
      • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
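Example: to order abstractions purely by their heuristic value for the sampled state with a fixed seed, using only the options documented above:

greedy_orders(scoring_function=max_heuristic, random_seed=0)\n

The resulting order generator would then typically be passed to a saturated cost partitioning heuristic, as described at the top of this page.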
    "},{"location":"OrderGenerator/#random_orders","title":"Random orders","text":"

    Shuffle abstractions randomly.

    random_orders(random_seed=-1)\n
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"PatternCollectionGenerator/","title":"PatternCollectionGenerator","text":"

    Factory for pattern collections

    "},{"location":"PatternCollectionGenerator/#combo","title":"combo","text":"
    combo(max_states=1000000, verbosity=normal)\n
    • max_states (int [1, infinity]): maximum abstraction size for combo strategy
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"PatternCollectionGenerator/#disjoint_cegar","title":"Disjoint CEGAR","text":"

This pattern collection generator uses the CEGAR algorithm to compute a pattern collection for the planning task. See below for a description of the algorithm and some implementation notes. The original algorithm (called single CEGAR) is described in the paper

    • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

      disjoint_cegar(max_pdb_size=1000000, max_collection_size=10000000, max_time=infinity, use_wildcard_plans=true, verbosity=normal, random_seed=-1)

    • max_pdb_size (int [1, infinity]): maximum number of states per pattern database (ignored for the initial collection consisting of a singleton pattern for each goal variable)

    • max_collection_size (int [1, infinity]): maximum number of states in the pattern collection (ignored for the initial collection consisting of a singleton pattern for each goal variable)
    • max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator (ignored for computing the initial collection consisting of a singleton pattern for each goal variable)
    • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
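Example: a hedged usage sketch that hands the generated collection to a pattern database heuristic inside an A* search; the cpdbs heuristic and its patterns argument are assumptions not documented in this section:

astar(cpdbs(patterns=disjoint_cegar(max_time=20)))\n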
    "},{"location":"PatternCollectionGenerator/#short_description_of_the_cegar_algorithm","title":"Short description of the CEGAR algorithm","text":"

    The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

    "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_cegar_algorithm","title":"Implementation notes about the CEGAR algorithm","text":"

    The following describes differences of the implementation to the original implementation used and described in the paper.

    Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

    In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

    Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

    Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

    "},{"location":"PatternCollectionGenerator/#genetic_algorithm_patterns","title":"Genetic Algorithm Patterns","text":"

The following paper describes the automated creation of pattern databases with a genetic algorithm. Pattern collections are initially created with a bin-packing algorithm. The genetic algorithm is used to optimize the pattern collections with an objective function that estimates the mean heuristic value of the pattern collections. Pattern collections with higher mean heuristic estimates are more likely to be selected for the next generation.

• Stefan Edelkamp. Automated Creation of Pattern Database Search Heuristics. In Proceedings of the 4th Workshop on Model Checking and Artificial Intelligence (MoChArt 2006), pp. 35-50. AAAI Press, 2007.

      genetic(pdb_max_size=50000, num_collections=5, num_episodes=30, mutation_probability=0.01, disjoint=false, random_seed=-1, verbosity=normal)

    • pdb_max_size (int [1, infinity]): maximal number of states per pattern database

    • num_collections (int [1, infinity]): number of pattern collections to maintain in the genetic algorithm (population size)
    • num_episodes (int [0, infinity]): number of episodes for the genetic algorithm
    • mutation_probability (double [0.0, 1.0]): probability for flipping a bit in the genetic algorithm
    • disjoint (bool): consider a pattern collection invalid (giving it very low fitness) if its patterns are not disjoint
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note: This pattern generation method uses the zero/one pattern database heuristic.
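Example: a hedged sketch of combining this generator with the zero/one pattern database heuristic mentioned in the note above; the zopdbs plugin name is an assumption not documented in this section:

astar(zopdbs(patterns=genetic(num_episodes=50, mutation_probability=0.05)))\n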

    "},{"location":"PatternCollectionGenerator/#implementation_notes","title":"Implementation Notes","text":"

    The standard genetic algorithm procedure as described in the paper is implemented in Fast Downward. The implementation is close to the paper.

• Initialization: In Fast Downward, bin-packing with the next-fit strategy is used. A bin corresponds to a pattern which contains variables up to pdb_max_size. With this method, each variable occurs in exactly one pattern of a collection. num_collections collections are created.
• Mutation: With probability mutation_probability, a bit is flipped, meaning that a variable is either added to a pattern or deleted from a pattern.
• Recombination: Recombination is not implemented in Fast Downward. The paper describes recombination but does not use it.
• Evaluation: For each pattern collection, the mean heuristic value is computed. For a single pattern database, the mean heuristic value is the sum of all pattern database entries divided by the number of entries. Entries with infinite heuristic values are ignored in this calculation. The sum of these individual mean heuristic values yields the mean heuristic value of the collection.
• Selection: The higher the mean heuristic value of a pattern collection, the more likely it is to be selected for the next generation. Therefore, the mean heuristic values are normalized and converted into probabilities, and Roulette Wheel Selection is used.

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported
    "},{"location":"PatternCollectionGenerator/#hill_climbing","title":"Hill climbing","text":"

This algorithm uses hill climbing to generate patterns optimized for the Canonical PDB heuristic. It is described in the following paper:

    • Patrik Haslum, Adi Botea, Malte Helmert, Blai Bonet and Sven Koenig. Domain-Independent Construction of Pattern Database Heuristics for Cost-Optimal Planning. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI 2007), pp. 1007-1012. AAAI Press, 2007.

    For implementation notes, see:

    • Silvan Sievers, Manuela Ortlieb and Malte Helmert. Efficient Implementation of Pattern Database Heuristics for Classical Planning. In Proceedings of the Fifth Annual Symposium on Combinatorial Search (SoCS 2012), pp. 105-111. AAAI Press, 2012.

      hillclimbing(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, max_generated_patterns=infinity, random_seed=-1, verbosity=normal)

    • pdb_max_size (int [1, infinity]): maximal number of states per pattern database

    • collection_max_size (int [1, infinity]): maximal number of states in the pattern collection
    • num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection
    • min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection
    • max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.
    • max_generated_patterns (int [0, infinity]): maximum number of generated patterns
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

    Note: This pattern generation method generates patterns optimized for use with the canonical pattern database heuristic.
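Example: since the patterns are optimized for the canonical pattern database heuristic, a hedged configuration sketch plugs the generator into that heuristic; the cpdbs plugin name is an assumption not documented in this section:

astar(cpdbs(patterns=hillclimbing(max_time=60)))\n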

    "},{"location":"PatternCollectionGenerator/#implementation_notes_1","title":"Implementation Notes","text":"

    The following will very briefly describe the algorithm and explain the differences between the original implementation from 2007 and the new one in Fast Downward.

    The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

The algorithm is basically a local search (hill climbing) which searches the \"pattern neighbourhood\" (starting initially with a pattern for each goal variable) for improving the pattern collection. This is done as described in the section \"pattern construction as search\" in the paper, except for the corrected search neighbourhood discussed below. For evaluating the neighbourhood, the \"counting approximation\" as introduced in the paper was implemented. An important difference, however, is that this implementation computes all pattern databases for each candidate pattern rather than using A* search to compute the heuristic values only for the sample states of each pattern.

The logic for sampling the search space also differs a bit from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but its length is based on an estimate of the number of solution steps, calculated by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to account for non-unit cost problems. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

    The section \"avoiding redundant evaluations\" describes how the search neighbourhood of patterns can be restricted to variables that are relevant to the variables already included in the pattern by analyzing causal graphs. There is a mistake in the paper that leads to some relevant neighbouring patterns being ignored. See the errata for details. This mistake has been addressed in this implementation. The second approach described in the paper (statistical confidence interval) is not applicable to this implementation, as it doesn't use A* search but constructs the entire pattern databases for all candidate patterns anyway. The search is ended if there is no more improvement (or the improvement is smaller than the minimal improvement which can be set as an option), however there is no limit of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

    "},{"location":"PatternCollectionGenerator/#manual_patterns","title":"manual_patterns","text":"
    manual_patterns(patterns, verbosity=normal)\n
    • patterns (list of list of int): list of patterns (which are lists of variable numbers of the planning task).
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"PatternCollectionGenerator/#multiple_cegar","title":"Multiple CEGAR","text":"

    This pattern collection generator implements the multiple CEGAR algorithm described in the paper

    • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

    It is an instantiation of the 'multiple algorithm framework'. To compute a pattern in each iteration, it uses the CEGAR algorithm restricted to a single goal variable. See below for descriptions of the algorithms.

    multiple_cegar(max_pdb_size=1M, max_collection_size=10M, pattern_generation_max_time=infinity, total_max_time=100.0, stagnation_limit=20.0, blacklist_trigger_percentage=0.75, enable_blacklist_on_stagnation=true, verbosity=normal, random_seed=-1, use_wildcard_plans=true)\n
    • max_pdb_size (int [1, infinity]): maximum number of states for each pattern database, computed by compute_pattern (possibly ignored by singleton patterns consisting of a goal variable)
    • max_collection_size (int [1, infinity]): maximum number of states in all pattern databases of the collection (possibly ignored, see max_pdb_size)
    • pattern_generation_max_time (double [0.0, infinity]): maximum time in seconds for each call to the algorithm for computing a single pattern
    • total_max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator. It will always execute at least one iteration, i.e., call the algorithm for computing a single pattern at least once.
    • stagnation_limit (double [1.0, infinity]): maximum time in seconds this pattern generator is allowed to run without generating a new pattern. It terminates prematurely if this limit is hit unless enable_blacklist_on_stagnation is enabled.
    • blacklist_trigger_percentage (double [0.0, 1.0]): percentage of total_max_time after which blacklisting is enabled
    • enable_blacklist_on_stagnation (bool): if true, blacklisting is enabled when stagnation_limit is hit for the first time (unless it was already enabled due to blacklist_trigger_percentage) and pattern generation is terminated when stagnation_limit is hit for the second time. If false, pattern generation is terminated already the first time stagnation_limit is hit.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
    "},{"location":"PatternCollectionGenerator/#short_description_of_the_cegar_algorithm_1","title":"Short description of the CEGAR algorithm","text":"

    The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

    "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_cegar_algorithm_1","title":"Implementation notes about the CEGAR algorithm","text":"

    The following describes differences of the implementation to the original implementation used and described in the paper.

    Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

    In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

    Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

    Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

    "},{"location":"PatternCollectionGenerator/#short_description_of_the_multiple_algorithm_framework","title":"Short description of the 'multiple algorithm framework'","text":"

    This algorithm is a general framework for computing a pattern collection for a given planning task. It requires as input a method for computing a single pattern for the given task and a single goal of the task. The algorithm works as follows. It first stores the goals of the task in random order. Then, it repeatedly iterates over all goals and for each goal, it uses the given method for computing a single pattern. If the pattern is new (duplicate detection), it is kept for the final collection. The algorithm runs until reaching a given time limit. Another parameter allows exiting early if no new patterns are found for a certain time ('stagnation'). Further parameters allow enabling blacklisting for the given pattern computation method after a certain time to force some diversification or to enable said blacklisting when stagnating.

    "},{"location":"PatternCollectionGenerator/#implementation_note_about_the_multiple_algorithm_framework","title":"Implementation note about the 'multiple algorithm framework'","text":"

    A difference compared to the original implementation used in the paper is that the original implementation of stagnation in the multiple CEGAR/RCG algorithms started counting the time towards stagnation only after having generated a duplicate pattern. Now, time towards stagnation starts counting from the start and is reset to the current time only when having found a new pattern or when enabling blacklisting.

    "},{"location":"PatternCollectionGenerator/#multiple_random_patterns","title":"Multiple Random Patterns","text":"

This pattern collection generator implements the 'multiple randomized causal graph' (mRCG) algorithm described in the experiments of the paper

    • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

    It is an instantiation of the 'multiple algorithm framework'. To compute a pattern in each iteration, it uses the random pattern algorithm, called 'single randomized causal graph' (sRCG) in the paper. See below for descriptions of the algorithms.

    random_patterns(max_pdb_size=1M, max_collection_size=10M, pattern_generation_max_time=infinity, total_max_time=100.0, stagnation_limit=20.0, blacklist_trigger_percentage=0.75, enable_blacklist_on_stagnation=true, verbosity=normal, random_seed=-1, bidirectional=true)\n
    • max_pdb_size (int [1, infinity]): maximum number of states for each pattern database, computed by compute_pattern (possibly ignored by singleton patterns consisting of a goal variable)
    • max_collection_size (int [1, infinity]): maximum number of states in all pattern databases of the collection (possibly ignored, see max_pdb_size)
    • pattern_generation_max_time (double [0.0, infinity]): maximum time in seconds for each call to the algorithm for computing a single pattern
    • total_max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator. It will always execute at least one iteration, i.e., call the algorithm for computing a single pattern at least once.
    • stagnation_limit (double [1.0, infinity]): maximum time in seconds this pattern generator is allowed to run without generating a new pattern. It terminates prematurely if this limit is hit unless enable_blacklist_on_stagnation is enabled.
    • blacklist_trigger_percentage (double [0.0, 1.0]): percentage of total_max_time after which blacklisting is enabled
    • enable_blacklist_on_stagnation (bool): if true, blacklisting is enabled when stagnation_limit is hit for the first time (unless it was already enabled due to blacklist_trigger_percentage) and pattern generation is terminated when stagnation_limit is hit for the second time. If false, pattern generation is terminated already the first time stagnation_limit is hit.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
• bidirectional (bool): this option decides whether the causal graph is considered to be directed or undirected when selecting neighbors of already selected variables. If true (default), it is considered to be undirected (precondition-effect edges are bidirectional). If false, it is considered to be directed (a variable is a neighbor only if it is a predecessor).
    "},{"location":"PatternCollectionGenerator/#short_description_of_the_random_pattern_algorithm","title":"Short description of the random pattern algorithm","text":"

    The random pattern algorithm computes a pattern for a given planning task and a single goal of the task as follows. Starting with the given goal variable, the algorithm executes a random walk on the causal graph. In each iteration, it selects a random causal graph neighbor of the current variable. It terminates if no neighbor fits the pattern due to the size limit or if the time limit is reached.

    "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_random_pattern_algorithm","title":"Implementation notes about the random pattern algorithm","text":"

    In the original implementation used in the paper, the algorithm selected a random neighbor and then checked if selecting it would violate the PDB size limit. If so, the algorithm would not select it and terminate. In the current implementation, the algorithm instead loops over all neighbors of the current variable in random order and selects the first one not violating the PDB size limit. If no such neighbor exists, the algorithm terminates.

    "},{"location":"PatternCollectionGenerator/#short_description_of_the_multiple_algorithm_framework_1","title":"Short description of the 'multiple algorithm framework'","text":"

    This algorithm is a general framework for computing a pattern collection for a given planning task. It requires as input a method for computing a single pattern for the given task and a single goal of the task. The algorithm works as follows. It first stores the goals of the task in random order. Then, it repeatedly iterates over all goals and for each goal, it uses the given method for computing a single pattern. If the pattern is new (duplicate detection), it is kept for the final collection. The algorithm runs until reaching a given time limit. Another parameter allows exiting early if no new patterns are found for a certain time ('stagnation'). Further parameters allow enabling blacklisting for the given pattern computation method after a certain time to force some diversification or to enable said blacklisting when stagnating.

    "},{"location":"PatternCollectionGenerator/#implementation_note_about_the_multiple_algorithm_framework_1","title":"Implementation note about the 'multiple algorithm framework'","text":"

    A difference compared to the original implementation used in the paper is that the original implementation of stagnation in the multiple CEGAR/RCG algorithms started counting the time towards stagnation only after having generated a duplicate pattern. Now, time towards stagnation starts counting from the start and is reset to the current time only when having found a new pattern or when enabling blacklisting.

    "},{"location":"PatternCollectionGenerator/#sys-scp_patterns","title":"Sys-SCP patterns","text":"

    Systematically generate larger (interesting) patterns but only keep a pattern if it's useful under a saturated cost partitioning. For details, see

    • Jendrik Seipp. Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5621-5627. IJCAI, 2019.

      sys_scp(max_pattern_size=infinity, max_pdb_size=2M, max_collection_size=20M, max_patterns=infinity, max_time=100, max_time_per_restart=10, max_evaluations_per_restart=infinity, max_total_evaluations=infinity, saturate=true, create_complete_transition_system=false, pattern_type=interesting_non_negative, ignore_useless_patterns=false, store_dead_ends=true, order=cg_down, random_seed=-1, verbosity=normal)

    • max_pattern_size (int [1, infinity]): maximum number of variables per pattern

    • max_pdb_size (int [1, infinity]): maximum number of states in a PDB
    • max_collection_size (int [1, infinity]): maximum number of states in the pattern collection
    • max_patterns (int [1, infinity]): maximum number of patterns
    • max_time (double [0.0, infinity]): maximum time in seconds for generating patterns
    • max_time_per_restart (double [0.0, infinity]): maximum time in seconds for each restart
• max_evaluations_per_restart (int [0, infinity]): maximum number of pattern evaluations per restart (i.e., in the inner loop)
    • max_total_evaluations (int [0, infinity]): maximum total pattern evaluations
    • saturate (bool): only select patterns useful in saturated cost partitionings
    • create_complete_transition_system (bool): create explicit transition system (necessary for tasks with conditional effects)
    • pattern_type ({naive, interesting_general, interesting_non_negative}): type of patterns
      • naive: all patterns up to the given size
      • interesting_general: only consider the union of two disjoint patterns if the union has more information than the individual patterns under a general cost partitioning
      • interesting_non_negative: like interesting_general, but considering non-negative cost partitioning
    • ignore_useless_patterns (bool): ignore patterns that induce no transitions with positive finite cost
    • store_dead_ends (bool): store dead ends in dead end tree (used to prune the search later)
    • order ({random, states_up, states_down, ops_up, ops_down, cg_up, cg_down}): order in which to consider patterns of the same size (based on states in projection, active operators or position of the pattern variables in the partial ordering of the causal graph)
      • random: order randomly
      • states_up: order by increasing number of abstract states
      • states_down: order by decreasing number of abstract states
      • ops_up: order by increasing number of active operators
      • ops_down: order by decreasing number of active operators
      • cg_up: use lexicographical order
      • cg_down: use reverse lexicographical order
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"PatternCollectionGenerator/#systematically_generated_patterns","title":"Systematically generated patterns","text":"

    Generates all (interesting) patterns with up to pattern_max_size variables. For details, see

    • Florian Pommerening, Gabriele Roeger and Malte Helmert. Getting the Most Out of Pattern Databases for Classical Planning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2357-2364. AAAI Press, 2013.

    The pattern_type=interesting_general setting was introduced in

    • Florian Pommerening, Thomas Keller, Valentina Halasi, Jendrik Seipp, Silvan Sievers and Malte Helmert. Dantzig-Wolfe Decomposition for Cost Partitioning. In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 271-280. AAAI Press, 2021.

      systematic(pattern_max_size=1, pattern_type=interesting_non_negative, verbosity=normal)

    • pattern_max_size (int [1, infinity]): max number of variables per pattern

    • pattern_type ({naive, interesting_general, interesting_non_negative}): type of patterns
      • naive: all patterns up to the given size
      • interesting_general: only consider the union of two disjoint patterns if the union has more information than the individual patterns under a general cost partitioning
      • interesting_non_negative: like interesting_general, but considering non-negative cost partitioning
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
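Example: a hedged sketch that generates all interesting patterns with up to two variables and feeds them to the canonical pattern database heuristic; the cpdbs plugin name is an assumption not documented in this section:

astar(cpdbs(patterns=systematic(pattern_max_size=2)))\n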
    "},{"location":"PatternGenerator/","title":"PatternGenerator","text":"

    Factory for single patterns

    "},{"location":"PatternGenerator/#cegar","title":"CEGAR","text":"

    This pattern generator uses the CEGAR algorithm restricted to a random single goal of the task to compute a pattern. See below for a description of the algorithm and some implementation notes. The original algorithm (called single CEGAR) is described in the paper

    • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

      cegar_pattern(max_pdb_size=1000000, max_time=infinity, use_wildcard_plans=true, verbosity=normal, random_seed=-1)

    • max_pdb_size (int [1, infinity]): maximum number of states in the final pattern database (possibly ignored by a singleton pattern consisting of a single goal variable)

    • max_time (double [0.0, infinity]): maximum time in seconds for the pattern generation
    • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
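Example: a hedged sketch of using this single-pattern generator with a plain pattern database heuristic; the pdb plugin name and its pattern argument are assumptions not documented in this section:

astar(pdb(pattern=cegar_pattern(max_time=10)))\n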
    "},{"location":"PatternGenerator/#short_description_of_the_cegar_algorithm","title":"Short description of the CEGAR algorithm","text":"

    The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

    "},{"location":"PatternGenerator/#implementation_notes_about_the_cegar_algorithm","title":"Implementation notes about the CEGAR algorithm","text":"

    The following describes differences of the implementation to the original implementation used and described in the paper.

    Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

    In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

    Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

    Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

    "},{"location":"PatternGenerator/#greedy","title":"greedy","text":"
    greedy(max_states=1000000, verbosity=normal)\n
    • max_states (int [1, infinity]): maximal number of abstract states in the pattern database.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"PatternGenerator/#manual_pattern","title":"manual_pattern","text":"
    manual_pattern(pattern, verbosity=normal)\n
    • pattern (list of int): list of variable numbers of the planning task that should be used as pattern.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"PatternGenerator/#random_pattern","title":"Random Pattern","text":"

This pattern generator implements the 'single randomized causal graph' algorithm described in the experiments of the paper

    • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

    See below for a description of the algorithm and some implementation notes.

    random_pattern(max_pdb_size=1000000, max_time=infinity, bidirectional=true, verbosity=normal, random_seed=-1)\n
    • max_pdb_size (int [1, infinity]): maximum number of states in the final pattern database (possibly ignored by a singleton pattern consisting of a single goal variable)
    • max_time (double [0.0, infinity]): maximum time in seconds for the pattern generation
• bidirectional (bool): this option decides whether the causal graph is considered to be directed or undirected when selecting neighbors of already selected variables. If true (default), it is considered to be undirected (precondition-effect edges are bidirectional). If false, it is considered to be directed (a variable is a neighbor only if it is a predecessor).
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"PatternGenerator/#short_description_of_the_random_pattern_algorithm","title":"Short description of the random pattern algorithm","text":"

    The random pattern algorithm computes a pattern for a given planning task and a single goal of the task as follows. Starting with the given goal variable, the algorithm executes a random walk on the causal graph. In each iteration, it selects a random causal graph neighbor of the current variable. It terminates if no neighbor fits the pattern due to the size limit or if the time limit is reached.

    "},{"location":"PatternGenerator/#implementation_notes_about_the_random_pattern_algorithm","title":"Implementation notes about the random pattern algorithm","text":"

    In the original implementation used in the paper, the algorithm selected a random neighbor and then checked if selecting it would violate the PDB size limit. If so, the algorithm would not select it and terminate. In the current implementation, the algorithm instead loops over all neighbors of the current variable in random order and selects the first one not violating the PDB size limit. If no such neighbor exists, the algorithm terminates.

    "},{"location":"PruningMethod/","title":"PruningMethod","text":"

    Prune or reorder applicable operators.

    "},{"location":"PruningMethod/#atom-centric_stubborn_sets","title":"Atom-centric stubborn sets","text":"

    Stubborn sets are a state pruning method which computes a subset of applicable actions in each state such that completeness and optimality of the overall search is preserved. Previous stubborn set implementations mainly track information about actions. In contrast, this implementation focuses on atomic propositions (atoms), which often speeds up the computation on IPC benchmarks. For details, see

    • Gabriele Roeger, Malte Helmert, Jendrik Seipp and Silvan Sievers. An Atom-Centric Perspective on Stubborn Sets. In Proceedings of the 13th Annual Symposium on Combinatorial Search (SoCS 2020), pp. 57-65. AAAI Press, 2020.

      atom_centric_stubborn_sets(use_sibling_shortcut=true, atom_selection_strategy=quick_skip, verbosity=normal)

    • use_sibling_shortcut (bool): use variable-based marking in addition to atom-based marking

    • atom_selection_strategy ({fast_downward, quick_skip, static_small, dynamic_small}): Strategy for selecting unsatisfied atoms from action preconditions or the goal atoms. All strategies use the fast_downward strategy for breaking ties.
      • fast_downward: select the atom (v, d) with the variable v that comes first in the Fast Downward variable ordering (which is based on the causal graph)
      • quick_skip: if possible, select an unsatisfied atom whose producers are already marked
      • static_small: select the atom achieved by the fewest number of actions
      • dynamic_small: select the atom achieved by the fewest number of actions that are not yet part of the stubborn set
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.
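Example (hedged; the lmcut() heuristic is an assumption not documented in this section): the pruning method can be enabled in an A* search via its pruning option:

astar(lmcut(), pruning=atom_centric_stubborn_sets(atom_selection_strategy=quick_skip))\n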

    "},{"location":"PruningMethod/#limited_pruning","title":"Limited pruning","text":"

    Limited pruning applies another pruning method and switches it off after a fixed number of expansions if the pruning ratio is below a given value. The pruning ratio is the sum of all pruned operators divided by the sum of all operators before pruning, considering all previous expansions.

    limited_pruning(pruning, min_required_pruning_ratio=0.2, expansions_before_checking_pruning_ratio=1000, verbosity=normal)\n
    • pruning (PruningMethod): the underlying pruning method to be applied
    • min_required_pruning_ratio (double [0.0, 1.0]): disable pruning if the pruning ratio is lower than this value after 'expansions_before_checking_pruning_ratio' expansions
    • expansions_before_checking_pruning_ratio (int [0, infinity]): number of expansions before deciding whether to disable pruning
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

    Example: To use atom centric stubborn sets and limit them, use

    pruning=limited_pruning(pruning=atom_centric_stubborn_sets(),min_required_pruning_ratio=0.2,expansions_before_checking_pruning_ratio=1000)\n

    in an eager search such as astar.

    "},{"location":"PruningMethod/#no_pruning","title":"No pruning","text":"

    This is a skeleton method that does not perform any pruning, i.e., all applicable operators are applied in all expanded states.

    null(verbosity=normal)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

    "},{"location":"PruningMethod/#stubbornsetsec","title":"StubbornSetsEC","text":"

    Stubborn sets represent a state pruning method which computes a subset of applicable operators in each state such that completeness and optimality of the overall search are preserved. As stubborn sets rely on several design choices, there are different variants thereof. The variant 'StubbornSetsEC' resolves the design choices such that the resulting pruning method is guaranteed to strictly dominate the Expansion Core pruning method. For details, see

    • Martin Wehrle, Malte Helmert, Yusra Alkhazraji and Robert Mattmueller. The Relative Pruning Power of Strong Stubborn Sets and Expansion Core. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 251-259. AAAI Press, 2013.

      stubborn_sets_ec(verbosity=normal)

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

    "},{"location":"PruningMethod/#stubborn_sets_simple","title":"Stubborn sets simple","text":"

    Stubborn sets represent a state pruning method which computes a subset of applicable operators in each state such that completeness and optimality of the overall search are preserved. As stubborn sets rely on several design choices, there are different variants thereof. This stubborn set variant resolves the design choices in a straightforward way. For details, see the following papers:

    • Yusra Alkhazraji, Martin Wehrle, Robert Mattmueller and Malte Helmert. A Stubborn Set Algorithm for Optimal Planning. In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), pp. 891-892. IOS Press, 2012.

    • Martin Wehrle and Malte Helmert. Efficient Stubborn Sets: Generalized Algorithms and Selection Strategies. In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 323-331. AAAI Press, 2014.

      stubborn_sets_simple(verbosity=normal)

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.
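
    Example: a sketch of combining this variant with an optimal search (assuming the lmcut evaluator) is

    --search astar(lmcut(), pruning=stubborn_sets_simple())\n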

    "},{"location":"SearchAlgorithm/","title":"SearchAlgorithm","text":""},{"location":"SearchAlgorithm/#a_search_eager","title":"A* search (eager)","text":"

    A* is a special case of eager best first search that uses g+h as f-function. We break ties using the evaluator. Closed nodes are re-opened.

    astar(eval, lazy_evaluator=<none>, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • eval (Evaluator): evaluator for h-value
    • lazy_evaluator (Evaluator): An evaluator that re-evaluates a state before it is expanded.
    • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    lazy_evaluator: When a state s is taken out of the open list, the lazy evaluator h re-evaluates s. If h(s) changes (for example because h is path-dependent), s is not expanded, but instead reinserted into the open list. This option is currently only present for the A* algorithm.
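
    Example: a complete invocation could look as follows; the driver script name and the lmcut evaluator are assumptions about a standard setup and may differ in your installation.

    ./fast-downward.py domain.pddl problem.pddl --search \"astar(lmcut())\"\n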

    "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_eager_search","title":"Equivalent statements using general eager search","text":"
    --search astar(evaluator)\n

    is equivalent to

    --evaluator h=evaluator\n--search eager(tiebreaking([sum([g(), h]), h], unsafe_pruning=false),\n               reopen_closed=true, f_eval=sum([g(), h]))\n
    "},{"location":"SearchAlgorithm/#breadth-first_search","title":"Breadth-first search","text":"

    Breadth-first graph search.

    brfs(single_plan=true, write_plan=true, pruning=null(), verbosity=normal)\n
    • single_plan (bool): Stop search after finding the first (shortest) plan.
    • write_plan (bool): Store the necessary information during search for writing plans once they're found.
    • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
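
    Example: a sketch that runs breadth-first search with all parameters at their defaults:

    --search brfs()\n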
    "},{"location":"SearchAlgorithm/#depth-first_search","title":"Depth-first search","text":"

    This is a depth-first tree search that avoids running into cycles by skipping states s that were already visited earlier on the path to s. This makes the search complete.

    dfs(single_plan=false, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • single_plan (bool): stop after finding the first plan
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
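
    Example: a sketch that stops as soon as the first plan is found:

    --search dfs(single_plan=true)\n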
    "},{"location":"SearchAlgorithm/#exhaustive_search","title":"Exhaustive search","text":"

    Dump the reachable state space.

    dump_reachable_search_space(verbosity=normal)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"SearchAlgorithm/#eager_best-first_search","title":"Eager best-first search","text":"
    eager(open, reopen_closed=false, f_eval=<none>, preferred=[], pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • open (OpenList): open list
    • reopen_closed (bool): reopen closed nodes
    • f_eval (Evaluator): set evaluator for jump statistics. (Optional; if no evaluator is used, jump statistics will not be displayed.)
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"SearchAlgorithm/#greedy_search_eager","title":"Greedy search (eager)","text":"
    eager_greedy(evals, preferred=[], boost=0, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • evals (list of Evaluator): evaluators
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • boost (int): boost value for preferred operator open lists
    • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Open list: In most cases, eager greedy best first search uses an alternation open list with one queue for each evaluator. If preferred operator evaluators are used, it adds an extra queue for each of these evaluators that includes only the nodes that are generated with a preferred operator. If only one evaluator and no preferred operator evaluator is used, the search does not use an alternation open list but a standard open list with only one queue.

    Closed nodes: Closed nodes are not re-opened.

    "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_eager_search_1","title":"Equivalent statements using general eager search","text":"
    --evaluator h2=eval2\n--search eager_greedy([eval1, h2], preferred=h2, boost=100)\n

    is equivalent to

    --evaluator h1=eval1 --heuristic h2=eval2\n--search eager(alt([single(h1), single(h1, pref_only=true), single(h2), \n                    single(h2, pref_only=true)], boost=100),\n               preferred=h2)\n
    --search eager_greedy([eval1, eval2])\n

    is equivalent to

    --search eager(alt([single(eval1), single(eval2)]))\n
    --evaluator h1=eval1\n--search eager_greedy(h1, preferred=h1)\n

    is equivalent to

    --evaluator h1=eval1\n--search eager(alt([single(h1), single(h1, pref_only=true)]),\n               preferred=h1)\n
    --search eager_greedy(eval1)\n

    is equivalent to

    --search eager(single(eval1))\n
    "},{"location":"SearchAlgorithm/#eager_weighted_a_search","title":"Eager weighted A* search","text":"
    eager_wastar(evals, preferred=[], reopen_closed=true, boost=0, w=1, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • evals (list of Evaluator): evaluators
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • reopen_closed (bool): reopen closed nodes
    • boost (int): boost value for preferred operator open lists
    • w (int): evaluator weight
    • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Open lists and equivalent statements using general eager search: See corresponding notes for \"(Weighted) A* search (lazy)\"

    Note: Eager weighted A* search uses an alternation open list while A* search uses a tie-breaking open list. Consequently,

    --search eager_wastar([h()], w=1)\n

    is not equivalent to

    --search astar(h())\n
    "},{"location":"SearchAlgorithm/#lazy_enforced_hill-climbing","title":"Lazy enforced hill-climbing","text":"
    ehc(h, preferred_usage=prune_by_preferred, preferred=[], cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • h (Evaluator): heuristic
    • preferred_usage ({prune_by_preferred, rank_preferred_first}): preferred operator usage
      • prune_by_preferred: prune successors achieved by non-preferred operators
      • rank_preferred_first: first insert successors achieved by preferred operators, then those by non-preferred operators
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
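
    Example: a sketch that uses one evaluator both for guidance and for preferred operators; predefining it avoids computing it twice (the ff evaluator name is an assumption about the available heuristics).

    --evaluator h=ff()\n--search ehc(h, preferred=[h])\n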
    "},{"location":"SearchAlgorithm/#ida_search","title":"IDA* search","text":"

    IDA* search with an optional g-value cache.

    idastar(eval, initial_f_limit=0, cache_size=0, single_plan=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • eval (Evaluator): evaluator for h-value. Make sure to use cache_estimates=false.
    • initial_f_limit (int [0, infinity]): initial depth limit
    • cache_size (int [0, infinity]): maximum number of states to cache. For cache_size=infinity the cache fills up until approaching the memory limit, at which point the current number of states becomes the maximum cache size.
    • single_plan (bool): stop after finding the first plan
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
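
    Example: a sketch that follows the recommendation above and disables estimate caching for the evaluator (assuming the lmcut evaluator accepts the common cache_estimates option):

    --search idastar(lmcut(cache_estimates=false))\n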
    "},{"location":"SearchAlgorithm/#iterative_deepening_search","title":"Iterative deepening search","text":"
    ids(single_plan=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • single_plan (bool): stop after finding the first (shortest) plan
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"SearchAlgorithm/#iterated_search","title":"Iterated search","text":"
    iterated(algorithm_configs, pass_bound=true, repeat_last=false, continue_on_fail=false, continue_on_solve=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • algorithm_configs (list of SearchAlgorithm): list of search algorithms for each phase
    • pass_bound (bool): use bound from previous search. The bound is the real cost of the plan found before, regardless of the cost_type parameter.
    • repeat_last (bool): repeat last phase of search
    • continue_on_fail (bool): continue search after no solution found
    • continue_on_solve (bool): continue search after solution found
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Note 1: We don't cache heuristic values between search iterations at the moment. If you perform a LAMA-style iterative search, heuristic values will be computed multiple times.

    Note 2: The configuration

    --search \"iterated([lazy_wastar([ipdb()],w=10), lazy_wastar([ipdb()],w=5), lazy_wastar([ipdb()],w=3), lazy_wastar([ipdb()],w=2), lazy_wastar([ipdb()],w=1)])\"\n

    would perform the preprocessing phase of the ipdb heuristic 5 times (once before each iteration).

    To avoid this, use heuristic predefinition, which avoids duplicate preprocessing, as follows:

    --evaluator \"h=ipdb()\" --search \"iterated([lazy_wastar([h],w=10), lazy_wastar([h],w=5), lazy_wastar([h],w=3), lazy_wastar([h],w=2), lazy_wastar([h],w=1)])\"\n

    Note 3: If you reuse the same landmark count heuristic (using heuristic predefinition) between iterations, the path data (that is, landmark status for each visited state) will be saved between iterations.

    "},{"location":"SearchAlgorithm/#iterated_width_search","title":"Iterated width search","text":"
    iw(width=2, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • width (int [1, 2]): maximum conjunction size
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
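
    Example: a sketch that runs iterated width search with the default maximum conjunction size:

    --search iw(width=2)\n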
    "},{"location":"SearchAlgorithm/#lazy_best-first_search","title":"Lazy best-first search","text":"
    lazy(open, reopen_closed=false, preferred=[], randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • open (OpenList): open list
    • reopen_closed (bool): reopen closed nodes
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • randomize_successors (bool): randomize the order in which successors are generated
    • preferred_successors_first (bool): consider preferred operators first
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.

    "},{"location":"SearchAlgorithm/#greedy_search_lazy","title":"Greedy search (lazy)","text":"
    lazy_greedy(evals, preferred=[], reopen_closed=false, boost=1000, randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • evals (list of Evaluator): evaluators
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • reopen_closed (bool): reopen closed nodes
    • boost (int): boost value for alternation queues that are restricted to preferred operator nodes
    • randomize_successors (bool): randomize the order in which successors are generated
    • preferred_successors_first (bool): consider preferred operators first
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.

    Open lists: In most cases, lazy greedy best first search uses an alternation open list with one queue for each evaluator. If preferred operator evaluators are used, it adds an extra queue for each of these evaluators that includes only the nodes that are generated with a preferred operator. If only one evaluator and no preferred operator evaluator is used, the search does not use an alternation open list but a standard open list with only one queue.

    "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_lazy_search","title":"Equivalent statements using general lazy search","text":"
    --evaluator h2=eval2\n--search lazy_greedy([eval1, h2], preferred=h2, boost=100)\n

    is equivalent to

    --evaluator h1=eval1 --heuristic h2=eval2\n--search lazy(alt([single(h1), single(h1, pref_only=true), single(h2),\n                  single(h2, pref_only=true)], boost=100),\n              preferred=h2)\n
    --search lazy_greedy([eval1, eval2], boost=100)\n

    is equivalent to

    --search lazy(alt([single(eval1), single(eval2)], boost=100))\n
    --evaluator h1=eval1\n--search lazy_greedy(h1, preferred=h1)\n

    is equivalent to

    --evaluator h1=eval1\n--search lazy(alt([single(h1), single(h1, pref_only=true)], boost=1000),\n              preferred=h1)\n
    --search lazy_greedy(eval1)\n

    is equivalent to

    --search lazy(single(eval1))\n
    "},{"location":"SearchAlgorithm/#weighted_a_search_lazy","title":"(Weighted) A* search (lazy)","text":"

    Weighted A* is a special case of lazy best first search.

    lazy_wastar(evals, preferred=[], reopen_closed=true, boost=1000, w=1, randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
    • evals (list of Evaluator): evaluators
    • preferred (list of Evaluator): use preferred operators of these evaluators
    • reopen_closed (bool): reopen closed nodes
    • boost (int): boost value for preferred operator open lists
    • w (int): evaluator weight
    • randomize_successors (bool): randomize the order in which successors are generated
    • preferred_successors_first (bool): consider preferred operators first
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
    • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output

    Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.

    Open lists: In the general case, it uses an alternation open list with one queue for each evaluator h that ranks the nodes by g + w * h. If preferred operator evaluators are used, it adds for each of the evaluators another such queue that only inserts nodes that are generated by preferred operators. In the special case with only one evaluator and no preferred operator evaluators, it uses a single queue that is ranked by g + w * h.

    "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_lazy_search_1","title":"Equivalent statements using general lazy search","text":"
    --evaluator h1=eval1\n--search lazy_wastar([h1, eval2], w=2, preferred=h1,\n                     bound=100, boost=500)\n

    is equivalent to

    --evaluator h1=eval1 --heuristic h2=eval2\n--search lazy(alt([single(sum([g(), weight(h1, 2)])),\n                   single(sum([g(), weight(h1, 2)]), pref_only=true),\n                   single(sum([g(), weight(h2, 2)])),\n                   single(sum([g(), weight(h2, 2)]), pref_only=true)],\n                  boost=500),\n              preferred=h1, reopen_closed=true, bound=100)\n
    --search lazy_wastar([eval1, eval2], w=2, bound=100)\n

    is equivalent to

    --search lazy(alt([single(sum([g(), weight(eval1, 2)])),\n                   single(sum([g(), weight(eval2, 2)]))],\n                  boost=1000),\n              reopen_closed=true, bound=100)\n
    --search lazy_wastar([eval1, eval2], bound=100, boost=0)\n

    is equivalent to

    --search lazy(alt([single(sum([g(), eval1])),\n                   single(sum([g(), eval2]))])\n              reopen_closed=true, bound=100)\n
    --search lazy_wastar(eval1, w=2)\n

    is equivalent to

    --search lazy(single(sum([g(), weight(eval1, 2)])), reopen_closed=true)\n
    "},{"location":"ShrinkStrategy/","title":"ShrinkStrategy","text":"

    This page describes the various shrink strategies supported by the planner.

    "},{"location":"ShrinkStrategy/#bismulation_based_shrink_strategy","title":"Bismulation based shrink strategy","text":"

    This shrink strategy implements the algorithm described in the paper:

    • Raz Nissim, Joerg Hoffmann and Malte Helmert. Computing Perfect Heuristics in Polynomial Time: On Bisimulation and Merge-and-Shrink Abstractions in Optimal Planning. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011), pp. 1983-1990. AAAI Press, 2011.

      shrink_bisimulation(greedy=false, at_limit=return)

    • greedy (bool): use greedy bisimulation

    • at_limit ({return, use_up}): what to do when the size limit is hit
      • return: stop without refining the equivalence class further
      • use_up: continue refining the equivalence class until the size limit is hit

    shrink_bisimulation(greedy=true): Combine this with the merge-and-shrink options max_states=infinity and threshold_before_merge=1 and with the linear merge strategy reverse_level to obtain the variant 'greedy bisimulation without size limit', called M&S-gop in the IJCAI 2011 paper. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before shrinking (and no label reduction before merging).

    shrink_bisimulation(greedy=false): Combine this with the merge-and-shrink option max_states=N (where N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000) and with the linear merge strategy reverse_level to obtain the variant 'exact bisimulation with a size limit', called DFP-bop in the IJCAI 2011 paper. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before shrinking (and no label reduction before merging).
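
    As a hedged sketch, the 'exact bisimulation with a size limit' configuration described above might be spelled as follows inside an A* search; the exact names of the merge-and-shrink and merge-strategy options are assumptions and may differ between planner versions.

    --search astar(merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false), merge_strategy=merge_precomputed(merge_tree=linear(variable_order=reverse_level)), label_reduction=exact(before_shrinking=true, before_merging=false), max_states=50000, threshold_before_merge=1))\n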

    "},{"location":"ShrinkStrategy/#f-preserving_shrink_strategy","title":"f-preserving shrink strategy","text":"

    This shrink strategy implements the algorithm described in the paper:

    • Malte Helmert, Patrik Haslum and Joerg Hoffmann. Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS 2007), pp. 176-183. AAAI Press, 2007.

      shrink_fh(random_seed=-1, shrink_f=high, shrink_h=low)

    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    • shrink_f ({high, low}): in which direction the f based shrink priority is ordered
      • high: prefer shrinking states with high value
      • low: prefer shrinking states with low value
    • shrink_h ({high, low}): in which direction the h based shrink priority is ordered
      • high: prefer shrinking states with high value
      • low: prefer shrinking states with low value

    Note: The strategy first partitions all states according to their combination of f- and h-values. These partitions are then sorted, first according to their f-value, then according to their h-value (increasing or decreasing, depending on the chosen options). States sorted last are shrunk together until reaching max_states.

    shrink_fh(): Combine this with the merge-and-shrink option max_states=N (where N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000) and the linear merge strategy cg_goal_level to obtain the variant 'f-preserving shrinking of transition systems', called HHH in the IJCAI 2011 paper. Also see bisimulation based shrink strategy. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before merging (and no label reduction before shrinking). We also recommend using full pruning with this shrink strategy, because both distances from the initial state and to the goal states must be computed anyway, and because the existence of only one dead state causes this shrink strategy to always use the map-based approach for partitioning states rather than the more efficient vector-based approach.

    "},{"location":"ShrinkStrategy/#random","title":"Random","text":"
    shrink_random(random_seed=-1)\n
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"SubtaskGenerator/","title":"SubtaskGenerator","text":"

    Subtask generator (used by the CEGAR heuristic).

    "},{"location":"SubtaskGenerator/#goals","title":"goals","text":"
    goals(order=hadd_down, random_seed=-1)\n
    • order ({original, random, hadd_up, hadd_down}): ordering of goal or landmark facts
      • original: according to their (internal) variable index
      • random: according to a random permutation
      • hadd_up: according to their h^add value, lowest first
      • hadd_down: according to their h^add value, highest first
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    "},{"location":"SubtaskGenerator/#landmarks","title":"landmarks","text":"
    landmarks(order=hadd_down, random_seed=-1, combine_facts=true)\n
    • order ({original, random, hadd_up, hadd_down}): ordering of goal or landmark facts
      • original: according to their (internal) variable index
      • random: according to a random permutation
      • hadd_up: according to their h^add value, lowest first
      • hadd_down: according to their h^add value, highest first
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • combine_facts (bool): combine landmark facts with domain abstraction
    "},{"location":"SubtaskGenerator/#original","title":"original","text":"
    original(copies=1)\n
    • copies (int [1, infinity]): number of task copies
    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"

    Choose a plugin type on the left to see its documentation.

    "},{"location":"AbstractTask/","title":"AbstractTask","text":""},{"location":"AbstractTask/#cost-adapted_task","title":"Cost-adapted task","text":"

    A cost-adapting transformation of the root task.

    adapt_costs(cost_type=normal)\n
    • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
      • normal: all actions are accounted for with their real cost
      • one: all actions are accounted for as unit cost
      • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
    "},{"location":"AbstractTask/#no_transform","title":"no_transform","text":"
    no_transform()\n
    "},{"location":"AbstractionGenerator/","title":"AbstractionGenerator","text":"

    Create abstractions for cost partitioning heuristics.

    "},{"location":"AbstractionGenerator/#cartesian_abstraction_generator","title":"Cartesian abstraction generator","text":"
    cartesian(subtasks=[landmarks(order=random), goals(order=random)], max_states=infinity, max_transitions=1M, max_time=infinity, pick_flawed_abstract_state=batch_min_h, pick_split=max_cover, tiebreak_split=max_refined, memory_padding=500, dot_graph_verbosity=silent, random_seed=-1, max_concrete_states_per_abstract_state=infinity, max_state_expansions=1M, verbosity=normal)\n
    • subtasks (list of SubtaskGenerator): subtask generators
    • max_states (int [1, infinity]): maximum sum of abstract states over all abstractions
    • max_transitions (int [0, infinity]): maximum sum of state-changing transitions (excluding self-loops) over all abstractions
    • max_time (double [0.0, infinity]): maximum time in seconds for building abstractions
    • pick_flawed_abstract_state ({first, first_on_shortest_path, random, min_h, max_h, batch_min_h}): flaw-selection strategy
      • first: Consider first encountered flawed abstract state and a random concrete state.
      • first_on_shortest_path: Follow the arbitrary solution in the shortest path tree (no flaw search). Consider first encountered flawed abstract state and a random concrete state.
      • random: Collect all flawed abstract states and then consider a random abstract state and a random concrete state.
      • min_h: Collect all flawed abstract states and then consider a random abstract state with minimum h value and a random concrete state.
      • max_h: Collect all flawed abstract states and then consider a random abstract state with maximum h value and a random concrete state.
      • batch_min_h: Collect all flawed abstract states and iteratively refine them (by increasing h value). Only start a new flaw search once all remaining flawed abstract states are refined. For each abstract state consider all concrete states.
    • pick_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy
      • random: select a random variable (among all eligible variables)
      • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
      • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
      • min_cg: order by increasing position in partial ordering of causal graph
      • max_cg: order by decreasing position in partial ordering of causal graph
      • max_cover: compute split that covers the maximum number of flaws for several concrete states.
    • tiebreak_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy for breaking ties
      • random: select a random variable (among all eligible variables)
      • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
      • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
      • min_cg: order by increasing position in partial ordering of causal graph
      • max_cg: order by decreasing position in partial ordering of causal graph
      • max_cover: compute split that covers the maximum number of flaws for several concrete states.
    • memory_padding (int [0, infinity]): amount of extra memory in MB to reserve for recovering from out-of-memory situations gracefully. When the memory runs out, we stop refining and start the search. Due to memory fragmentation, the memory used for building the abstraction (states, transitions, etc.) often can't be reused for things that require big continuous blocks of memory. It is for this reason that we require a rather large amount of memory padding by default.
    • dot_graph_verbosity ({silent, write_to_console, write_to_file}): verbosity of printing/writing dot graphs
      • silent:
      • write_to_console:
      • write_to_file:
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • max_concrete_states_per_abstract_state (int [1, infinity]): maximum number of flawed concrete states stored per abstract state
    • max_state_expansions (int [1, infinity]): maximum number of state expansions per flaw search
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"AbstractionGenerator/#projections","title":"projections","text":"

    Projection generator

    projections(patterns=<none>, dominance_pruning=false, combine_labels=true, create_complete_transition_system=false, verbosity=normal)\n
    • patterns (PatternCollectionGenerator): pattern generation method
    • dominance_pruning (bool): prune dominated patterns
    • combine_labels (bool): group labels that only induce parallel transitions
    • create_complete_transition_system (bool): create explicit transition system
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    "},{"location":"ConstraintGenerator/","title":"ConstraintGenerator","text":""},{"location":"ConstraintGenerator/#delete_relaxation_constraints","title":"Delete relaxation constraints","text":"

    Operator-counting constraints based on the delete relaxation. By default the constraints encode an easy-to-compute relaxation of h^+^. With the right settings, these constraints can be used to compute the optimal delete-relaxation heuristic h^+^ (see example below). For details, see

    • Tatsuya Imai and Alex Fukunaga. On a practical, integer-linear programming model for delete-free tasks and its use as a heuristic for cost-optimal planning. Journal of Artificial Intelligence Research 54:631-677. 2015.

      delete_relaxation_constraints(use_time_vars=false, use_integer_vars=false)

    • use_time_vars (bool): use variables for time steps. With these additional variables the constraints enforce an order between the selected operators. Leaving this off (default) corresponds to the time relaxation by Imai and Fukunaga. Switching it on can increase the heuristic value but will increase the size of the constraints, which has a strong impact on runtime. Constraints involving time variables use a big-M encoding, so they are more useful if used with integer variables.

    • use_integer_vars (bool): restrict auxiliary variables to integer values. These variables encode whether operators are used, facts are reached, which operator first achieves which fact, and in which order the operators are used. Restricting them to integers generally improves the heuristic value at the cost of increased runtime.

    Example: To compute the optimal delete-relaxation heuristic h^+^, use

    operatorcounting([delete_relaxation_constraints(use_time_vars=true, use_integer_vars=true)], use_integer_operator_counts=true)\n
    "},{"location":"ConstraintGenerator/#lm-cut_landmark_constraints","title":"LM-cut landmark constraints","text":"

    Computes a set of landmarks in each state using the LM-cut method. For each landmark L the constraint sum_{o in L} Count_o >= 1 is added to the operator-counting LP temporarily. After the heuristic value for the state is computed, all temporary constraints are removed again. For details, see

    • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

    • Blai Bonet. An admissible heuristic for SAS+ planning obtained from the state equation. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2268-2274. AAAI Press, 2013.

      lmcut_constraints()
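
    For example, a possible A* configuration that uses only LM-cut landmark constraints (a sketch assuming the astar() search algorithm documented under SearchAlgorithm and the operatorcounting() heuristic documented under Evaluator) is:

      astar(operatorcounting([lmcut_constraints()]))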

    "},{"location":"ConstraintGenerator/#saturated_posthoc_optimization_constraints_for_abstractions","title":"(Saturated) posthoc optimization constraints for abstractions","text":"
    pho_abstraction_constraints(abstractions=<none>, saturated=true)\n
    • abstractions (list of AbstractionGenerator): abstraction generation methods
    • saturated (bool): use saturated instead of full operator costs in constraints
    "},{"location":"ConstraintGenerator/#posthoc_optimization_constraints","title":"Posthoc optimization constraints","text":"

    The generator will compute a PDB for each pattern and add the constraint h(s) <= sum_{o in relevant(h)} Count_o. For details, see

    • Florian Pommerening, Gabriele Roeger and Malte Helmert. Getting the Most Out of Pattern Databases for Classical Planning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2357-2364. AAAI Press, 2013.

      pho_constraints(patterns=systematic(2))

    • patterns (PatternCollectionGenerator): pattern generation method

    "},{"location":"ConstraintGenerator/#state_equation_constraints","title":"State equation constraints","text":"

    For each fact, a permanent constraint is added that considers the net change of the fact, i.e., the total number of times the fact is added minus the total number of times it is removed. The bounds of each constraint depend on the current state and the goal state and are updated in each state. For details, see

    • Menkes van den Briel, J. Benton, Subbarao Kambhampati and Thomas Vossen. An LP-based heuristic for optimal planning. In Proceedings of the Thirteenth International Conference on Principles and Practice of Constraint Programming (CP 2007), pp. 651-665. Springer-Verlag, 2007.

    • Blai Bonet. An admissible heuristic for SAS+ planning obtained from the state equation. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2268-2274. AAAI Press, 2013.

    • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

      state_equation_constraints(verbosity=normal)

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
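
    As a sketch, these constraints can be combined with other constraint generators inside the operator-counting heuristic, for example (assuming the astar() search algorithm documented under SearchAlgorithm):

      astar(operatorcounting([state_equation_constraints(), pho_constraints(patterns=systematic(2))]))
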
    "},{"location":"Evaluator/","title":"Evaluator","text":"

    An evaluator specification is either a newly created evaluator instance or an evaluator that has been defined previously. This page describes how one can specify a new evaluator instance. For re-using evaluators, see OptionSyntax#Evaluator_Predefinitions.

    If the evaluator is a heuristic, definitions of properties in the descriptions below:

    • admissible: h(s) <= h*(s) for all states s
    • consistent: h(s) <= c(s, s') + h(s') for all states s connected to states s' by an action with cost c(s, s')
    • safe: h(s) = infinity is only true for states with h*(s) = infinity
    • preferred operators: this heuristic identifies preferred operators

    This feature type can be bound to variables using let(variable_name, variable_definition, expression) where expression can use variable_name. Predefinitions using --evaluator, --heuristic, and --landmarks are automatically transformed into let-expressions but are deprecated.
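
    As an illustrative sketch (assuming the eager_greedy() search algorithm from the SearchAlgorithm documentation), a let-expression that binds the FF heuristic and reuses it for preferred operators could look like this:

      let(hff, ff(), eager_greedy([hff], preferred=[hff]))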

    "},{"location":"Evaluator/#additive_heuristic","title":"Additive heuristic","text":"
    add(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

    Properties:

    • admissible: no
    • consistent: no
    • safe: yes for tasks without axioms
    • preferred operators: yes
    "},{"location":"Evaluator/#blind_heuristic","title":"Blind heuristic","text":"

    Returns the cost of the cheapest action for non-goal states and 0 for goal states.

    blind(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#context-enhanced_additive_heuristic","title":"Context-enhanced additive heuristic","text":"
    cea(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

    Properties:

    • admissible: no
    • consistent: no
    • safe: no
    • preferred operators: yes
    "},{"location":"Evaluator/#additive_cartesian_cegar_heuristic","title":"Additive Cartesian CEGAR heuristic","text":"

    See the paper introducing counterexample-guided Cartesian abstraction refinement (CEGAR) for classical planning:

    • Jendrik Seipp and Malte Helmert. Counterexample-guided Cartesian Abstraction Refinement. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 347-351. AAAI Press, 2013.

    and the paper showing how to make the abstractions additive:

    • Jendrik Seipp and Malte Helmert. Diverse and Additive Cartesian Abstraction Heuristics. In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 289-297. AAAI Press, 2014.

    For more details on Cartesian CEGAR and saturated cost partitioning, see the journal paper

    • Jendrik Seipp and Malte Helmert. Counterexample-Guided Cartesian Abstraction Refinement for Classical Planning. Journal of Artificial Intelligence Research 62:535-577. 2018.

    For a description of the incremental search, see the paper

    • Jendrik Seipp, Samuel von Allmen and Malte Helmert. Incremental Search for Counterexample-Guided Cartesian Abstraction Refinement. In Proceedings of the 30th International Conference on Automated Planning and Scheduling (ICAPS 2020), pp. 244-248. AAAI Press, 2020.

    Finally, we describe advanced flaw selection strategies here:

    • David Speck and Jendrik Seipp. New Refinement Strategies for Cartesian Abstractions. In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. to appear. AAAI Press, 2022.

      cegar(subtasks=[landmarks(order=random), goals(order=random)], max_states=infinity, max_transitions=1M, max_time=infinity, pick_flawed_abstract_state=batch_min_h, pick_split=max_cover, tiebreak_split=max_refined, memory_padding=500, dot_graph_verbosity=silent, random_seed=-1, max_concrete_states_per_abstract_state=infinity, max_state_expansions=1M, use_general_costs=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

    • subtasks (list of SubtaskGenerator): subtask generators

    • max_states (int [1, infinity]): maximum sum of abstract states over all abstractions
    • max_transitions (int [0, infinity]): maximum sum of state-changing transitions (excluding self-loops) over all abstractions
    • max_time (double [0.0, infinity]): maximum time in seconds for building abstractions
    • pick_flawed_abstract_state ({first, first_on_shortest_path, random, min_h, max_h, batch_min_h}): flaw-selection strategy
      • first: Consider first encountered flawed abstract state and a random concrete state.
      • first_on_shortest_path: Follow the arbitrary solution in the shortest path tree (no flaw search). Consider first encountered flawed abstract state and a random concrete state.
      • random: Collect all flawed abstract states and then consider a random abstract state and a random concrete state.
      • min_h: Collect all flawed abstract states and then consider a random abstract state with minimum h value and a random concrete state.
      • max_h: Collect all flawed abstract states and then consider a random abstract state with maximum h value and a random concrete state.
      • batch_min_h: Collect all flawed abstract states and iteratively refine them (by increasing h value). Only start a new flaw search once all remaining flawed abstract states are refined. For each abstract state consider all concrete states.
    • pick_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy
      • random: select a random variable (among all eligible variables)
      • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
      • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
      • min_cg: order by increasing position in partial ordering of causal graph
      • max_cg: order by decreasing position in partial ordering of causal graph
      • max_cover: compute split that covers the maximum number of flaws for several concrete states.
    • tiebreak_split ({random, min_unwanted, max_unwanted, min_refined, max_refined, min_hadd, max_hadd, min_cg, max_cg, max_cover}): split-selection strategy for breaking ties
      • random: select a random variable (among all eligible variables)
      • min_unwanted: select an eligible variable which has the least unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • max_unwanted: select an eligible variable which has the most unwanted values (number of values of v that land in the abstract state whose h-value will probably be raised) in the flaw state
      • min_refined: select an eligible variable which is the least refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • max_refined: select an eligible variable which is the most refined (-1 * (remaining_values(v) / original_domain_size(v))) in the flaw state
      • min_hadd: select an eligible variable with minimal h^add(s_0) value over all facts that need to be removed from the flaw state
      • max_hadd: select an eligible variable with maximal h^add(s_0) value over all facts that need to be removed from the flaw state
      • min_cg: order by increasing position in partial ordering of causal graph
      • max_cg: order by decreasing position in partial ordering of causal graph
      • max_cover: compute split that covers the maximum number of flaws for several concrete states.
    • memory_padding (int [0, infinity]): amount of extra memory in MB to reserve for recovering from out-of-memory situations gracefully. When the memory runs out, we stop refining and start the search. Due to memory fragmentation, the memory used for building the abstraction (states, transitions, etc.) often can't be reused for things that require big continuous blocks of memory. It is for this reason that we require a rather large amount of memory padding by default.
    • dot_graph_verbosity ({silent, write_to_console, write_to_file}): verbosity of printing/writing dot graphs
      • silent:
      • write_to_console:
      • write_to_file:
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
    • max_concrete_states_per_abstract_state (int [1, infinity]): maximum number of flawed concrete states stored per abstract state
    • max_state_expansions (int [1, infinity]): maximum number of state expansions per flaw search
    • use_general_costs (bool): allow negative costs in cost partitioning
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
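
    A minimal usage sketch that relies on the default options shown above (assuming the astar() search algorithm documented under SearchAlgorithm):

      astar(cegar())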

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: yes
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#causal_graph_heuristic","title":"Causal graph heuristic","text":"
    cg(max_cache_size=1000000, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • max_cache_size (int [0, infinity]): maximum number of cached entries per variable (set to 0 to disable cache)
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

    Properties:

    • admissible: no
    • consistent: no
    • safe: no
    • preferred operators: yes
    "},{"location":"Evaluator/#ff_heuristic","title":"FF heuristic","text":"
    ff(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

    Properties:

    • admissible: no
    • consistent: no
    • safe: yes for tasks without axioms
    • preferred operators: yes
    "},{"location":"Evaluator/#goal_count_heuristic","title":"Goal count heuristic","text":"
    goalcount(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: ignored by design
    • conditional effects: supported
    • axioms: supported

    Properties:

    • admissible: no
    • consistent: no
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#hm_heuristic","title":"h^m heuristic","text":"
    hm(m=2, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • m (int [1, infinity]): subset size
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: ignored
    • axioms: ignored

    Properties:

    • admissible: yes for tasks without conditional effects or axioms
    • consistent: yes for tasks without conditional effects or axioms
    • safe: yes for tasks without conditional effects or axioms
    • preferred operators: no
    "},{"location":"Evaluator/#max_heuristic","title":"Max heuristic","text":"
    hmax(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: supported
    • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

    Properties:

    • admissible: yes for tasks without axioms
    • consistent: yes for tasks without axioms
    • safe: yes for tasks without axioms
    • preferred operators: no
    "},{"location":"Evaluator/#landmark_cost_partitioning_heuristic","title":"Landmark cost partitioning heuristic","text":"

    Landmark progression is implemented according to the following paper:

    • Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert. Landmarks Progression in Heuristic Search. In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

      landmark_cost_partitioning(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true, cost_partitioning=uniform, scoring_function=max_heuristic_per_stolen_costs, alm=true, lpsolver=cplex, random_seed=-1)

    • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

    • pref (bool): enable preferred operators (see note below)
    • prog_goal (bool): Use goal progression.
    • prog_gn (bool): Use greedy-necessary ordering progression.
    • prog_r (bool): Use reasonable ordering progression.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • cost_partitioning ({optimal, uniform, opportunistic_uniform, greedy_zero_one, saturated, canonical, pho}): strategy for partitioning operator costs among landmarks
      • optimal: use optimal (LP-based) cost partitioning
      • uniform: partition operator costs uniformly among all landmarks achieved by that operator
      • opportunistic_uniform: like uniform, but order landmarks and reuse costs not consumed by earlier landmarks
      • greedy_zero_one: order landmarks and give each landmark the costs of all the operators it contains
      • saturated: like greedy_zero_one, but reuse costs not consumed by earlier landmarks
      • canonical: canonical heuristic over landmarks
      • pho: post-hoc optimization over landmarks
    • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
      • max_heuristic: order by decreasing heuristic value for the given state
      • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
      • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
    • alm (bool): use action landmarks
    • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
      • cplex: commercial solver by IBM
      • soplex: open source solver by ZIB
    • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

    Note: to use an LP solver, you must build the planner with LP support. See build instructions.

    Usage with A*: We recommend adding this heuristic as lazy_evaluator when using it in the A* algorithm. This way, the heuristic is recomputed before a state is expanded, leading to improved estimates that incorporate all knowledge gained from paths that were found after the state was inserted into the open list.
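
    A possible configuration following this recommendation (a sketch that assumes the lm_rhw() landmark factory from the LandmarkFactory documentation and the astar() search algorithm with its lazy_evaluator option):

      let(lmc, landmark_cost_partitioning(lm_rhw()), astar(lmc, lazy_evaluator=lmc))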

    Consistency: The heuristic is consistent along single paths if it is set as lazy_evaluator; i.e., when expanding a state s, we have h(s) <= h(s') + cost(a) for all successors s' of s reached via an action a with cost cost(a). However, newly found paths to s can increase h(s), at which point the above inequality might no longer hold.

    Optimal Cost Partitioning: To use cost_partitioning=optimal, you must build the planner with LP support. See build instructions.

    Preferred operators: Preferred operators should not be used for optimal planning. See Landmark sum heuristic for more information on using preferred operators; the comments there also apply to this heuristic.

    Supported language features:

    • action costs: supported
    • conditional_effects: supported if the LandmarkFactory supports them; otherwise not supported
    • axioms: not allowed

    Properties:

    • preferred operators: yes (if enabled; see pref option)
    • admissible: yes
    • consistent: no; see document note about consistency
    • safe: yes
    "},{"location":"Evaluator/#landmark_sum_heuristic","title":"Landmark sum heuristic","text":"

    Landmark progression is implemented according to the following paper:

    • Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert. Landmarks Progression in Heuristic Search. In Proceedings of the Thirty-Third International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. AAAI Press, 2023.

      landmark_sum(lm_factory, pref=false, prog_goal=true, prog_gn=true, prog_r=true, verbosity=normal, transform=no_transform(), cache_estimates=true)

    • lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

    • pref (bool): enable preferred operators (see note below)
    • prog_goal (bool): Use goal progression.
    • prog_gn (bool): Use greedy-necessary ordering progression.
    • prog_r (bool): Use reasonable ordering progression.
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Note on performance for satisficing planning: The cost of a landmark is based on the cost of the operators that achieve it. For satisficing search this can be counterproductive, since it is often better to focus on distance from the goal (i.e., the length of the plan) rather than on cost. In experiments we achieved the best performance using the option 'transform=adapt_costs(one)' to enforce unit costs.
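
    A satisficing configuration along these lines (a sketch assuming the lm_rhw() landmark factory from the LandmarkFactory documentation and the eager_greedy() search algorithm) could be:

      let(hlm, landmark_sum(lm_rhw(), transform=adapt_costs(one), pref=true), eager_greedy([hlm], preferred=[hlm]))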

    Preferred operators: Computing preferred operators is only enabled when setting pref=true because it has a nontrivial runtime cost. Using the heuristic for preferred operators without setting pref=true has no effect. Our implementation to compute preferred operators based on landmarks differs from the description in the literature (see reference above). The original implementation computes two kinds of preferred operators:

    1. If there is an applicable operator that reaches a landmark, all such operators are preferred.
    2. If no such operators exist, perform an FF-style relaxed exploration towards the nearest landmarks (according to the landmark orderings) and use the preferred operators of this exploration.

    Our implementation only considers preferred operators of the first type and does not include the second type. The rationale for this change is that it reduces code complexity and helps more cleanly separate landmark-based and FF-based computations in LAMA-like planner configurations. In our experiments, only considering preferred operators of the first type reduces performance when using the heuristic and its preferred operators in isolation but improves performance when using this heuristic in conjunction with the FF heuristic, as in LAMA-like planner configurations.

    Supported language features:

    • action costs: supported
    • conditional_effects: supported if the LandmarkFactory supports them; otherwise ignored
    • axioms: ignored

    Properties:

    • preferred operators: yes (if enabled; see pref option)
    • admissible: no
    • consistent: no
    • safe: yes except on tasks with axioms or on tasks with conditional effects when using a LandmarkFactory not supporting them
    "},{"location":"Evaluator/#landmark-cut_heuristic","title":"Landmark-cut heuristic","text":"
    lmcut(verbosity=normal, transform=no_transform(), cache_estimates=true)\n
    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates

    Supported language features:

    • action costs: supported
    • conditional effects: not supported
    • axioms: not supported

    Properties:

    • admissible: yes
    • consistent: no
    • safe: yes
    • preferred operators: no
    "},{"location":"Evaluator/#merge-and-shrink_heuristic","title":"Merge-and-shrink heuristic","text":"

    This heuristic implements the algorithm described in the following paper:

    • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

    For a more exhaustive description of merge-and-shrink, see the journal paper

    • Silvan Sievers and Malte Helmert. Merge-and-Shrink: A Compositional Theory of Transformations of Factored Transition Systems. Journal of Artificial Intelligence Research 71:781-883. 2021.

    The following paper describes how to improve the DFP merge strategy with tie-breaking, and presents two new merge strategies (dyn-MIASM and SCC-DFP):

    • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

    Details of the algorithms and the implementation are described in the paper

    • Silvan Sievers. Merge-and-Shrink Heuristics for Classical Planning: Efficient Implementation and Partial Abstractions. In Proceedings of the 11th Annual Symposium on Combinatorial Search (SoCS 2018), pp. 90-98. AAAI Press, 2018.

      merge_and_shrink(verbosity=normal, transform=no_transform(), cache_estimates=true, merge_strategy, shrink_strategy, label_reduction=, prune_unreachable_states=true, prune_irrelevant_states=true, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, main_loop_max_time=infinity)

    • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

      • silent: only the most basic output
      • normal: relevant information to monitor progress
      • verbose: full output
      • debug: like verbose with additional debug output
    • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
    • cache_estimates (bool): cache heuristic estimates
    • merge_strategy (MergeStrategy): See detailed documentation for merge strategies. We currently recommend SCC-DFP, which can be achieved using merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order]))
    • shrink_strategy (ShrinkStrategy): See detailed documentation for shrink strategies. We currently recommend non-greedy shrink_bisimulation, which can be achieved using shrink_strategy=shrink_bisimulation(greedy=false)
    • label_reduction (LabelReduction): See detailed documentation for labels. There is currently only one 'option' to use label_reduction, which is label_reduction=exact. Also note the interaction with shrink strategies.
    • prune_unreachable_states (bool): If true, prune abstract states unreachable from the initial state.
    • prune_irrelevant_states (bool): If true, prune abstract states from which no goal state can be reached.
    • max_states (int [-1, infinity]): maximum transition system size allowed at any time point.
    • max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.
    • threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.
    • main_loop_max_time (double [0.0, infinity]): A limit in seconds on the runtime of the main loop of the algorithm. If the limit is exceeded, the algorithm terminates, potentially returning a factored transition system with several factors. Also note that the time limit is only checked between transformations of the main loop, but not during, so it can be exceeded if a transformation is runtime-intense.
      Note: Conditional effects are supported directly. Note, however, that for tasks that are not factored (in the sense of the JACM 2014 merge-and-shrink paper), the atomic transition systems on which merge-and-shrink heuristics are based are nondeterministic, which can lead to poor heuristics even when only perfect shrinking is performed.

      Note: When pruning unreachable states, admissibility and consistency are only guaranteed for reachable states and transitions between reachable states. While this does not impact regular A* search, which will never encounter any unreachable state, it impacts techniques like symmetry-based pruning: a reachable state which is mapped to an unreachable symmetric state (which hence is pruned) would falsely be considered a dead-end and also be pruned, thus violating optimality of the search.

      Note: When using a time limit on the main loop of the merge-and-shrink algorithm, the heuristic will compute the maximum over all heuristics induced by the remaining factors if terminating the merge-and-shrink algorithm early. Exception: if there is an unsolvable factor, it will be used as the exclusive heuristic since the problem is unsolvable.

      Note: A currently recommended good configuration uses bisimulation based shrinking, the merge strategy SCC-DFP, and the appropriate label reduction setting (max_states has been altered to be between 10k and 200k in the literature). As merge-and-shrink heuristics can be expensive to compute, we also recommend limiting time by setting main_loop_max_time to a finite value. A sensible value would be half of the time allocated for the planner.

      merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false),merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance(),dfp(),total_order()])),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50k,threshold_before_merge=1)\n

      Supported language features:

      • action costs: supported
      • conditional effects: supported (but see note)
      • axioms: not supported

      Properties:

      • admissible: yes (but see note)
      • consistent: yes (but see note)
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#operator-counting_heuristic","title":"Operator-counting heuristic","text":"

      An operator-counting heuristic computes a linear program (LP) in each state. The LP has one variable Count_o for each operator o that represents how often the operator is used in a plan. Operator-counting constraints are linear constraints over these variables that are guaranteed to have a solution with Count_o = occurrences(o, pi) for every plan pi. Minimizing the total cost of operators subject to some operator-counting constraints is an admissible heuristic. For details, see

      • Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet. LP-based Heuristics for Cost-optimal Planning. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press, 2014.

        operatorcounting(constraint_generators, use_integer_operator_counts=false, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

      • constraint_generators (list of ConstraintGenerator): methods that generate constraints over operator-counting variables

      • use_integer_operator_counts (bool): restrict operator-counting variables to integer values. Computing the heuristic with integer variables can produce higher values but requires solving a MIP instead of an LP which is generally more computationally expensive. Turning this option on can thus drastically increase the runtime.
      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.
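
      For example, a possible configuration combining two of the constraint generators listed above and selecting the open source LP solver (a sketch assuming the astar() search algorithm documented under SearchAlgorithm) is:

        astar(operatorcounting([lmcut_constraints(), state_equation_constraints()], lpsolver=soplex))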

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)

      Properties:

      • admissible: yes
      • consistent: yes, if all constraint generators represent consistent heuristics
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#basic_evaluators","title":"Basic Evaluators","text":""},{"location":"Evaluator/#constant_evaluator","title":"Constant evaluator","text":"

      Returns a constant value.

      const(value=1, verbosity=normal)\n
      • value (int [0, infinity]): the constant value
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"Evaluator/#g-value_evaluator","title":"g-value evaluator","text":"

      Returns the g-value (path cost) of the search node.

      g(verbosity=normal)\n
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"Evaluator/#max_evaluator","title":"Max evaluator","text":"

      Calculates the maximum of the sub-evaluators.

      max(evals, verbosity=normal)\n
      • evals (list of Evaluator): at least one evaluator
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
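
      For instance, maximizing over two admissible heuristics documented above yields an admissible evaluator that can be used for optimal search (a sketch assuming the astar() search algorithm documented under SearchAlgorithm):

        astar(max([hmax(), blind()]))
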
      "},{"location":"Evaluator/#preference_evaluator","title":"Preference evaluator","text":"

      Returns 0 if preferred is true and 1 otherwise.

      pref(verbosity=normal)\n
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"Evaluator/#sum_evaluator","title":"Sum evaluator","text":"

      Calculates the sum of the sub-evaluators.

      sum(evals, verbosity=normal)\n
      • evals (list of Evaluator): at least one evaluator
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"Evaluator/#weighted_evaluator","title":"Weighted evaluator","text":"

      Multiplies the value of the evaluator with the given weight.

      weight(eval, weight, verbosity=normal)\n
      • eval (Evaluator): evaluator
      • weight (int): weight
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
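
      As a sketch of a weighted-A*-style evaluation f = g + 3*h (assuming the eager() search algorithm and a single() open list; see the SearchAlgorithm and OpenList documentation), one could write:

        eager(single(sum([g(), weight(ff(), 3)])))
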
      "},{"location":"Evaluator/#cost_partitioning_heuristics","title":"Cost Partitioning Heuristics","text":""},{"location":"Evaluator/#canonical_heuristic_over_abstractions","title":"Canonical heuristic over abstractions","text":"

      Shuffle abstractions randomly.

      canonical_heuristic(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • abstractions (list of AbstractionGenerator): abstraction generators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#greedy_zero-one_cost_partitioning","title":"Greedy zero-one cost partitioning","text":"
      gzocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)\n
      • abstractions (list of AbstractionGenerator): abstraction generators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • orders (OrderGenerator): order generator
      • max_orders (int [0, infinity]): maximum number of orders
      • max_size (int [0, infinity]): maximum heuristic size in KiB
      • max_time (double [0, infinity]): maximum time in seconds for finding orders
      • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
      • samples (int [1, infinity]): number of samples for diversification
      • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#maximum_over_abstractions","title":"Maximum over abstractions","text":"

      Maximize over a set of abstraction heuristics.

      maximize(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • abstractions (list of AbstractionGenerator): abstraction generators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#optimal_cost_partitioning_heuristic","title":"Optimal cost partitioning heuristic","text":"

      Compute an optimal cost partitioning for each evaluated state.

      ocp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, lpsolver=cplex, allow_negative_costs=true)\n
      • abstractions (list of AbstractionGenerator): abstraction generators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • allow_negative_costs (bool): use general instead of non-negative cost partitioning

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#post-hoc_optimization_heuristic","title":"Post-hoc optimization heuristic","text":"

      Compute the maximum over multiple PhO heuristics precomputed offline.

      pho(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturated=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, lpsolver=cplex)\n
      • abstractions (list of AbstractionGenerator): abstraction generators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • saturated (bool): saturate costs
      • orders (OrderGenerator): order generator
      • max_orders (int [0, infinity]): maximum number of orders
      • max_size (int [0, infinity]): maximum heuristic size in KiB
      • max_time (double [0, infinity]): maximum time in seconds for finding orders
      • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
      • samples (int [1, infinity]): number of samples for diversification
      • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#saturated_cost_partitioning","title":"Saturated cost partitioning","text":"

      Compute the maximum over multiple saturated cost partitioning heuristics using different orders. For details, see

      • Jendrik Seipp, Thomas Keller and Malte Helmert. Saturated Cost Partitioning for Optimal Classical Planning. Journal of Artificial Intelligence Research 67:129-167. 2020.

        scp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1)

      • abstractions (list of AbstractionGenerator): abstraction generators

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • saturator ({all, perim, perimstar}): function that computes saturated cost functions
        • all: preserve estimates of all states
        • perim: preserve estimates of states in perimeter around goal
        • perimstar: compute 'perim' first and then 'all' with remaining costs
      • orders (OrderGenerator): order generator
      • max_orders (int [0, infinity]): maximum number of orders
      • max_size (int [0, infinity]): maximum heuristic size in KiB
      • max_time (double [0, infinity]): maximum time in seconds for finding orders
      • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
      • samples (int [1, infinity]): number of samples for diversification
      • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      Difference to cegar(): The cegar() plugin computes a single saturated cost partitioning over Cartesian abstraction heuristics. In contrast, saturated_cost_partitioning() supports computing the maximum over multiple saturated cost partitionings using different heuristic orders, and it supports both Cartesian abstraction heuristics and pattern database heuristics. While cegar() interleaves abstraction computation with cost partitioning, saturated_cost_partitioning() computes all abstractions using the original costs.
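
      A possible configuration (a sketch assuming the astar() search algorithm documented under SearchAlgorithm) that maximizes over saturated cost partitionings of pattern database and Cartesian abstraction heuristics:

        astar(scp([projections(systematic(2)), cartesian()]))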

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#online_saturated_cost_partitioning","title":"Online saturated cost partitioning","text":"

      Compute the maximum over multiple saturated cost partitioning heuristics diversified during the search. For details, see

      • Jendrik Seipp. Online Saturated Cost Partitioning for Classical Planning. In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 317-321. AAAI Press, 2021.

        scp_online(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, saturator=all, orders=greedy_orders(), max_size=infinity, max_time=200, interval=10K, debug=false, random_seed=-1)

      • abstractions (list of AbstractionGenerator): abstraction generators

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • saturator ({all, perim, perimstar}): function that computes saturated cost functions
        • all: preserve estimates of all states
        • perim: preserve estimates of states in perimeter around goal
        • perimstar: compute 'perim' first and then 'all' with remaining costs
      • orders (OrderGenerator): order generator
      • max_size (int [0, infinity]): maximum (estimated) heuristic size in KiB
      • max_time (double [0, infinity]): maximum time in seconds for finding orders
      • interval (int [1, infinity]): select every i-th evaluated state for online diversification
      • debug (bool): print debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: no
      • safe: yes
      • preferred operators: no
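Example (illustrative): assuming the astar search algorithm documented in the SearchAlgorithm section, online saturated cost partitioning over projections and Cartesian abstractions might be used like this (the abstraction generators and options are taken from the signature above):

astar(scp_online([projections(systematic(2)), cartesian()], saturator=perimstar, interval=10K))\n

Here, interval=10K means that every 10000th evaluated state is used for online diversification.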
      "},{"location":"Evaluator/#opportunistic_uniform_cost_partitioning","title":"(Opportunistic) uniform cost partitioning","text":"
Compute the maximum over multiple (opportunistic) uniform cost partitioning heuristics. For details, see

• Jendrik Seipp, Thomas Keller and Malte Helmert. A Comparison of Cost Partitioning Algorithms for Optimal Classical Planning. In Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 259-268. AAAI Press, 2017.

        ucp(abstractions=[projections(hillclimbing(max_time=60)), projections(systematic(2)), cartesian()], verbosity=normal, transform=no_transform(), cache_estimates=true, orders=greedy_orders(), max_orders=infinity, max_size=infinity, max_time=200, diversify=true, samples=1000, max_optimization_time=2, random_seed=-1, opportunistic=false, debug=false)

      • abstractions (list of AbstractionGenerator): abstraction generators

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • orders (OrderGenerator): order generator
      • max_orders (int [0, infinity]): maximum number of orders
      • max_size (int [0, infinity]): maximum heuristic size in KiB
      • max_time (double [0, infinity]): maximum time in seconds for finding orders
      • diversify (bool): only keep orders that have a higher heuristic value than all previous orders for any of the samples
      • samples (int [1, infinity]): number of samples for diversification
      • max_optimization_time (double [0, infinity]): maximum time in seconds for optimizing each order with hill climbing
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • opportunistic (bool): recalculate uniform cost partitioning after each considered abstraction
      • debug (bool): print debugging messages

      Supported language features:

      • action costs: supported
      • conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)
      • axioms: not supported (the heuristic supports them in theory, but none of the currently implemented abstraction generators do)

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
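Example (illustrative): assuming the astar search algorithm from the SearchAlgorithm documentation, opportunistic uniform cost partitioning over the abstraction generators from the signature above might be invoked as:

astar(ucp([projections(systematic(2)), cartesian()], opportunistic=true))\n

Setting opportunistic=false instead yields plain uniform cost partitioning.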
      "},{"location":"Evaluator/#pattern_database_heuristics","title":"Pattern Database Heuristics","text":""},{"location":"Evaluator/#canonical_pdb","title":"Canonical PDB","text":"

The canonical pattern database heuristic is computed as follows: for a given pattern collection C, its value for a given state is the maximum, over all maximal additive subsets S of C, of the sum of the heuristic values of all patterns in S.

      cpdbs(patterns=systematic(1), max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • patterns (PatternCollectionGenerator): pattern generation method
      • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
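Example (illustrative): assuming the astar search algorithm documented elsewhere in this reference, the canonical PDB heuristic over systematically generated patterns of size up to 2 might be used as:

astar(cpdbs(patterns=systematic(2)))\n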
      "},{"location":"Evaluator/#ipdb","title":"iPDB","text":"

This approach combines the canonical PDB heuristic with patterns computed by the hill-climbing algorithm for pattern generation. It is a shorthand for the command-line option cpdbs(hillclimbing()). Both the heuristic and the pattern generation algorithm are described in the following paper:

      • Patrik Haslum, Adi Botea, Malte Helmert, Blai Bonet and Sven Koenig. Domain-Independent Construction of Pattern Database Heuristics for Cost-Optimal Planning. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI 2007), pp. 1007-1012. AAAI Press, 2007.

      For implementation notes, see:

      • Silvan Sievers, Manuela Ortlieb and Malte Helmert. Efficient Implementation of Pattern Database Heuristics for Classical Planning. In Proceedings of the Fifth Annual Symposium on Combinatorial Search (SoCS 2012), pp. 105-111. AAAI Press, 2012.

      See also Canonical PDB and Hill climbing for more details.

      ipdb(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, max_generated_patterns=infinity, random_seed=-1, max_time_dominance_pruning=infinity, verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • pdb_max_size (int [1, infinity]): maximal number of states per pattern database
      • collection_max_size (int [1, infinity]): maximal number of states in the pattern collection
      • num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection
      • min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection
      • max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.
      • max_generated_patterns (int [0, infinity]): maximum number of generated patterns
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

      Note: This pattern generation method generates patterns optimized for use with the canonical pattern database heuristic.

      "},{"location":"Evaluator/#implementation_notes","title":"Implementation Notes","text":"

The following briefly describes the algorithm and explains the differences between the original implementation from 2007 and the new one in Fast Downward.

      The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

The algorithm is basically a local search (hill climbing) which searches the \"pattern neighbourhood\" (starting with a pattern for each goal variable) for improvements to the pattern collection. This is done as described in the section \"pattern construction as search\" in the paper, except for the corrected search neighbourhood discussed below. For evaluating the neighbourhood, the \"counting approximation\" introduced in the paper was implemented. An important difference is that this implementation computes all pattern databases for each candidate pattern, rather than using A* search to compute the heuristic values only for the sample states of each pattern.

The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but its length is based on an estimate of the number of solution steps, obtained by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to take non-unit-cost problems into account. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

      The section \"avoiding redundant evaluations\" describes how the search neighbourhood of patterns can be restricted to variables that are relevant to the variables already included in the pattern by analyzing causal graphs. There is a mistake in the paper that leads to some relevant neighbouring patterns being ignored. See the errata for details. This mistake has been addressed in this implementation. The second approach described in the paper (statistical confidence interval) is not applicable to this implementation, as it doesn't use A* search but constructs the entire pattern databases for all candidate patterns anyway. The search is ended if there is no more improvement (or the improvement is smaller than the minimal improvement which can be set as an option), however there is no limit of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
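Example (illustrative): since ipdb() is a shorthand for cpdbs(hillclimbing()), the following two configurations are intended to be equivalent (assuming the astar search algorithm documented in the SearchAlgorithm section):

astar(ipdb(max_time=900))\n
astar(cpdbs(patterns=hillclimbing(max_time=900)))\n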
      "},{"location":"Evaluator/#pattern_database_heuristic","title":"Pattern database heuristic","text":"

      TODO

      pdb(pattern=greedy(), verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • pattern (PatternGenerator): pattern generation method
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#zero-one_pdb","title":"Zero-One PDB","text":"

The zero/one pattern database heuristic is simply the sum of the heuristic values of all patterns in the pattern collection. In contrast to the canonical pattern database heuristic, there is no need to check for additive subsets, because the additivity of the patterns is guaranteed by action cost partitioning. This heuristic uses the simplest form of action cost partitioning: if an operator affects more than one pattern in the collection, its cost is counted in full for one pattern (the first one it affects) and set to zero for all other affected patterns.

      zopdbs(patterns=systematic(1), verbosity=normal, transform=no_transform(), cache_estimates=true)\n
      • patterns (PatternCollectionGenerator): pattern generation method
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
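Example (illustrative): assuming the astar search algorithm documented elsewhere, the zero-one PDB heuristic can be run on the same pattern collection as cpdbs above, which allows comparing the two cost partitioning schemes:

astar(zopdbs(patterns=systematic(2)))\n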
      "},{"location":"Evaluator/#potential_heuristics","title":"Potential Heuristics","text":""},{"location":"Evaluator/#potential_heuristic_optimized_for_all_states","title":"Potential heuristic optimized for all states","text":"

      The algorithm is based on

      • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

        all_states_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

      • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
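Example (illustrative): assuming the planner was built with LP support and the astar search algorithm documented in the SearchAlgorithm section, this potential heuristic might be used as:

astar(all_states_potential())\n

The other potential heuristics below (diverse_potentials(), initial_state_potential(), sample_based_potentials()) are plugged into a search algorithm in the same way.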
      "},{"location":"Evaluator/#diverse_potential_heuristics","title":"Diverse potential heuristics","text":"

      The algorithm is based on

      • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

        diverse_potentials(num_samples=1000, max_num_heuristics=infinity, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

      • num_samples (int [0, infinity]): Number of states to sample

      • max_num_heuristics (int [0, infinity]): maximum number of potential heuristics
      • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#potential_heuristic_optimized_for_initial_state","title":"Potential heuristic optimized for initial state","text":"

      The algorithm is based on

      • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

        initial_state_potential(max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true)

      • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.

      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"Evaluator/#sample-based_potential_heuristics","title":"Sample-based potential heuristics","text":"

      Maximum over multiple potential heuristics optimized for samples. The algorithm is based on

      • Jendrik Seipp, Florian Pommerening and Malte Helmert. New Optimization Functions for Potential Heuristics. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. AAAI Press, 2015.

        sample_based_potentials(num_heuristics=1, num_samples=1000, max_potential=1e8, lpsolver=cplex, verbosity=normal, transform=no_transform(), cache_estimates=true, random_seed=-1)

      • num_heuristics (int [0, infinity]): number of potential heuristics

      • num_samples (int [0, infinity]): Number of states to sample
      • max_potential (double [0.0, infinity]): Bound potentials by this number. Using the bound infinity disables the bounds. In some domains this makes the computation of weights unbounded in which case no weights can be extracted. Using very high weights can cause numerical instability in the LP solver, while using very low weights limits the choice of potential heuristics. For details, see the ICAPS paper cited above.
      • lpsolver ({cplex, soplex}): external solver that should be used to solve linear programs
        • cplex: commercial solver by IBM
        • soplex: open source solver by ZIB
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • transform (AbstractTask): Optional task transformation for the heuristic. Currently, adapt_costs() and no_transform() are available.
      • cache_estimates (bool): cache heuristic estimates
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      Note: to use an LP solver, you must build the planner with LP support. See build instructions.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported

      Properties:

      • admissible: yes
      • consistent: yes
      • safe: yes
      • preferred operators: no
      "},{"location":"LabelReduction/","title":"LabelReduction","text":"

This page describes the single label reduction method that is currently available.

      "},{"location":"LabelReduction/#exact_generalized_label_reduction","title":"Exact generalized label reduction","text":"

      This class implements the exact generalized label reduction described in the following paper:

      • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

        exact(before_shrinking, before_merging, method=all_transition_systems_with_fixpoint, system_order=random, random_seed=-1)

      • before_shrinking (bool): apply label reduction before shrinking

      • before_merging (bool): apply label reduction before merging
• method ({two_transition_systems, all_transition_systems, all_transition_systems_with_fixpoint}): Label reduction method. See the AAAI 2014 paper by Sievers et al. for an explanation of the default label reduction method and the 'combinable relation'. Also note that you must set at least one of the options before_shrinking or before_merging in order to use the chosen label reduction configuration.
        • two_transition_systems: compute the 'combinable relation' only for the two transition systems being merged next
        • all_transition_systems: compute the 'combinable relation' for labels once for every transition system and reduce labels
        • all_transition_systems_with_fixpoint: keep computing the 'combinable relation' for labels iteratively for all transition systems until no more labels can be reduced
      • system_order ({regular, reverse, random}): Order of transition systems for the label reduction methods that iterate over the set of all transition systems. Only useful for the choices all_transition_systems and all_transition_systems_with_fixpoint for the option label_reduction_method.
        • regular: transition systems are considered in the order given in the planner input if atomic and in the order of their creation if composite.
        • reverse: inverse of regular
        • random: random order
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"LandmarkFactory/","title":"LandmarkFactory","text":"

      A landmark factory specification is either a newly created instance or a landmark factory that has been defined previously. This page describes how one can specify a new landmark factory instance. For re-using landmark factories, see OptionSyntax#Landmark_Predefinitions.

      This feature type can be bound to variables using let(variable_name, variable_definition, expression) where expression can use variable_name. Predefinitions using --evaluator, --heuristic, and --landmarks are automatically transformed into let-expressions but are deprecated.
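Example (illustrative): the following let-expression binds an RHW landmark factory to the name lm and reuses it inside the expression; landmark_sum and eager_greedy are assumed here as a landmark-based evaluator and a search algorithm documented in the Evaluator and SearchAlgorithm sections, respectively:

let(lm, lm_rhw(), eager_greedy([landmark_sum(lm)]))\n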

      "},{"location":"LandmarkFactory/#exhaustive_landmarks","title":"Exhaustive Landmarks","text":"

Exhaustively checks for each fact whether it is a landmark. This check is done using relaxed planning.

      lm_exhaust(verbosity=normal, only_causal_landmarks=false)\n
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • only_causal_landmarks (bool): keep only causal landmarks

      Supported language features:

      • conditional_effects: ignored, i.e. not supported
      "},{"location":"LandmarkFactory/#hm_landmarks","title":"h^m Landmarks","text":"

      The landmark generation method introduced by Keyder, Richter & Helmert (ECAI 2010).

      lm_hm(m=2, conjunctive_landmarks=true, verbosity=normal, use_orders=true)\n
      • m (int): subset size (if unsure, use the default of 2)
      • conjunctive_landmarks (bool): keep conjunctive landmarks
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • use_orders (bool): use orders between landmarks

      Supported language features:

      • conditional_effects: ignored, i.e. not supported
      "},{"location":"LandmarkFactory/#merged_landmarks","title":"Merged Landmarks","text":"

Merges the landmarks and orderings from the landmark factories given in lm_factories.

      lm_merged(lm_factories, verbosity=normal)\n
      • lm_factories (list of LandmarkFactory):
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

Precedence: Fact landmarks take precedence over disjunctive landmarks; orderings take precedence in the usual manner (gn > nat > reas > o_reas).

      Note: Does not currently support conjunctive landmarks

      Supported language features:

      • conditional_effects: supported if all components support them
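Example (illustrative): merging the landmarks produced by the RHW and h^1 factories described on this page might look like:

lm_merged([lm_rhw(), lm_hm(m=1)])\n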
      "},{"location":"LandmarkFactory/#hps_orders","title":"HPS Orders","text":"

      Adds reasonable orders described in the following paper

• Jörg Hoffmann, Julie Porteous and Laura Sebastia. Ordered Landmarks in Planning. Journal of Artificial Intelligence Research 22:215-278. 2004.

        lm_reasonable_orders_hps(lm_factory, verbosity=normal)

      • lm_factory (LandmarkFactory):

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

Obedient-reasonable orders: Hoffmann et al. (2004) suggest obedient-reasonable orders in addition to reasonable orders. Obedient-reasonable orders were later also used by the LAMA planner (Richter and Westphal, 2010). They are \"reasonable orders\" under the assumption that all (non-obedient) reasonable orders are actually \"natural\", i.e., every plan obeys the reasonable orders. We observed experimentally that obedient-reasonable orders have minimal effect on the performance of LAMA (Büchner et al., 2023) and decided to remove them in issue1089.

      Supported language features:

      • conditional_effects: supported if subcomponent supports them
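Example (illustrative): reasonable orders are added on top of an existing landmark factory by wrapping it, for example:

lm_reasonable_orders_hps(lm_rhw())\n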
      "},{"location":"LandmarkFactory/#rhw_landmarks","title":"RHW Landmarks","text":"

      The landmark generation method introduced by Richter, Helmert and Westphal (AAAI 2008).

      lm_rhw(disjunctive_landmarks=true, verbosity=normal, use_orders=true, only_causal_landmarks=false)\n
      • disjunctive_landmarks (bool): keep disjunctive landmarks
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • use_orders (bool): use orders between landmarks
      • only_causal_landmarks (bool): keep only causal landmarks

      Supported language features:

      • conditional_effects: supported
      "},{"location":"LandmarkFactory/#zhugivan_landmarks","title":"Zhu/Givan Landmarks","text":"

      The landmark generation method introduced by Zhu & Givan (ICAPS 2003 Doctoral Consortium).

      lm_zg(verbosity=normal, use_orders=true)\n
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • use_orders (bool): use orders between landmarks

      Supported language features:

      • conditional_effects: We think they are supported, but this is not 100% sure.
      "},{"location":"MergeScoringFunction/","title":"MergeScoringFunction","text":"

      This page describes various merge scoring functions. A scoring function, given a list of merge candidates and a factored transition system, computes a score for each candidate based on this information and potentially some chosen options. Minimal scores are considered best. Scoring functions are currently only used within the score based filtering merge selector.

      "},{"location":"MergeScoringFunction/#dfp_scoring","title":"DFP scoring","text":"

This scoring function computes the 'DFP' score as described in the paper \"Directed model checking with distance-preserving abstractions\" by Draeger, Finkbeiner and Podelski (SPIN 2006), adapted to planning in the following paper:

      • Silvan Sievers, Martin Wehrle and Malte Helmert. Generalized Label Reduction for Merge-and-Shrink Heuristics. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. AAAI Press, 2014.

        dfp()

      Note: To obtain the configurations called DFP-B-50K described in the paper, use the following configuration of the merge-and-shrink heuristic and adapt the tie-breaking criteria of total_order as desired:

      merge_and_shrink(merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order(atomic_ts_order=reverse_level,product_ts_order=new_to_old,atomic_before_product=true)])),shrink_strategy=shrink_bisimulation(greedy=false),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50000,threshold_before_merge=1)\n
      "},{"location":"MergeScoringFunction/#goal_relevance_scoring","title":"Goal relevance scoring","text":"

      This scoring function assigns a merge candidate a value of 0 iff at least one of the two transition systems of the merge candidate is goal relevant in the sense that there is an abstract non-goal state. All other candidates get a score of positive infinity.

      goal_relevance()\n
      "},{"location":"MergeScoringFunction/#miasm","title":"MIASM","text":"

This scoring function favors merging transition systems such that in their product, there are many dead states, which can then be pruned without sacrificing information. In particular, the score it assigns to a product is the ratio of alive states to the total number of states. To compute this score, this class computes the product of all pairs of transition systems, potentially copying and shrinking the transition systems beforehand if their product would otherwise exceed the specified size limits. A stateless merge strategy using this scoring function is called dyn-MIASM (nowadays also called sbMIASM for score-based MIASM) and is described in the following paper:

• Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

        sf_miasm(shrink_strategy, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, use_caching=true)

      • shrink_strategy (ShrinkStrategy): We recommend setting this to match the shrink strategy configuration given to merge_and_shrink, see note below.

      • max_states (int [-1, infinity]): maximum transition system size allowed at any time point.
      • max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.
      • threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.
      • use_caching (bool): Cache scores for merge candidates. IMPORTANT! This only works under the assumption that the merge-and-shrink algorithm only uses exact label reduction and does not (non-exactly) shrink factors other than those being merged in the current iteration. In this setting, the MIASM score of a merge candidate is constant over merge-and-shrink iterations. If caching is enabled, only the scores for the new merge candidates need to be computed.

      Note: To obtain the configurations called dyn-MIASM described in the paper, use the following configuration of the merge-and-shrink heuristic and adapt the tie-breaking criteria of total_order as desired:

      merge_and_shrink(merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[sf_miasm(shrink_strategy=shrink_bisimulation(greedy=false),max_states=50000,threshold_before_merge=1),total_order(atomic_ts_order=reverse_level,product_ts_order=new_to_old,atomic_before_product=true)])),shrink_strategy=shrink_bisimulation(greedy=false),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50000,threshold_before_merge=1)\n

      Note: Unless you know what you are doing, we recommend using the same options related to shrinking for sf_miasm as for merge_and_shrink, i.e. the options shrink_strategy, max_states, and threshold_before_merge should be set identically. Furthermore, as this scoring function maximizes the amount of possible pruning, merge-and-shrink should be configured to use full pruning, i.e. prune_unreachable_states=true and prune_irrelevant_states=true (the default).

      "},{"location":"MergeScoringFunction/#single_random","title":"Single random","text":"

      This scoring function assigns exactly one merge candidate a score of 0, chosen randomly, and infinity to all others.

      single_random(random_seed=-1)\n
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"MergeScoringFunction/#total_order","title":"Total order","text":"

This scoring function computes a total order on the merge candidates, based on the specified options. The score for each merge candidate corresponds to its position in the order. This scoring function is mainly intended for tie-breaking, and has been introduced in the following paper:

      • Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

      Furthermore, using the atomic_ts_order option, this scoring function, if used alone in a score based filtering merge selector, can be used to emulate the corresponding (precomputed) linear merge strategies reverse level/level (independently of the other options).

      total_order(atomic_ts_order=reverse_level, product_ts_order=new_to_old, atomic_before_product=false, random_seed=-1)\n
      • atomic_ts_order ({reverse_level, level, random}): The order in which atomic transition systems are considered when considering pairs of potential merges.
        • reverse_level: the variable order of Fast Downward
        • level: opposite of reverse_level
        • random: a randomized order
      • product_ts_order ({old_to_new, new_to_old, random}): The order in which product transition systems are considered when considering pairs of potential merges.
        • old_to_new: consider composite transition systems from oldest to most recent
        • new_to_old: opposite of old_to_new
        • random: a randomized order
      • atomic_before_product (bool): Consider atomic transition systems before composite ones iff true.
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"MergeSelector/","title":"MergeSelector","text":"

This page describes the available merge selectors. They are used to compute the next merge purely based on the state of the given factored transition system. They are used in the merge strategy of type 'stateless', but they can also easily be used in different 'combined' merge strategies.

      "},{"location":"MergeSelector/#score_based_filtering_merge_selector","title":"Score based filtering merge selector","text":"

      This merge selector has a list of scoring functions, which are used iteratively to compute scores for merge candidates, keeping the best ones (with minimal scores) until only one is left.

      score_based_filtering(scoring_functions)\n
      • scoring_functions (list of MergeScoringFunction): The list of scoring functions used to compute scores for candidates.
      "},{"location":"MergeStrategy/","title":"MergeStrategy","text":"

      This page describes the various merge strategies supported by the planner.

      "},{"location":"MergeStrategy/#precomputed_merge_strategy","title":"Precomputed merge strategy","text":"

      This merge strategy has a precomputed merge tree. Note that this merge strategy does not take into account the current state of the factored transition system. This also means that this merge strategy relies on the factored transition system being synchronized with this merge tree, i.e. all merges are performed exactly as given by the merge tree.

      merge_precomputed(merge_tree, verbosity=normal)\n
      • merge_tree (MergeTree): The precomputed merge tree.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

Note: An example of a precomputed merge strategy is a linear merge strategy, which can be obtained using:

      merge_strategy=merge_precomputed(merge_tree=linear(<variable_order>))\n
      "},{"location":"MergeStrategy/#merge_strategy_sscs","title":"Merge strategy SSCs","text":"

      This merge strategy implements the algorithm described in the paper

• Silvan Sievers, Martin Wehrle and Malte Helmert. An Analysis of Merge Strategies for Merge-and-Shrink Heuristics. In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. AAAI Press, 2016.

      In a nutshell, it computes the maximal SCCs of the causal graph, obtaining a partitioning of the task's variables. Every such partition is then merged individually, using the specified fallback merge strategy, considering the SCCs in a configurable order. Afterwards, all resulting composite abstractions are merged to form the final abstraction, again using the specified fallback merge strategy and the configurable order of the SCCs.

      merge_sccs(order_of_sccs=topological, merge_tree=<none>, merge_selector=<none>, verbosity=normal)\n
      • order_of_sccs ({topological, reverse_topological, decreasing, increasing}): how the SCCs should be ordered
        • topological: according to the topological ordering of the directed graph where each obtained SCC is a 'supervertex'
        • reverse_topological: according to the reverse topological ordering of the directed graph where each obtained SCC is a 'supervertex'
        • decreasing: biggest SCCs first, using 'topological' as tie-breaker
        • increasing: smallest SCCs first, using 'topological' as tie-breaker
      • merge_tree (MergeTree): the fallback merge strategy to use if a precomputed strategy should be used.
      • merge_selector (MergeSelector): the fallback merge strategy to use if a stateless strategy should be used.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
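Example (illustrative): combining the SCC-based strategy with the stateless DFP selector documented under MergeScoringFunction might look like this (a sketch, not a tuned configuration):

merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order()]))\n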
      "},{"location":"MergeStrategy/#stateless_merge_strategy","title":"Stateless merge strategy","text":"

      This merge strategy has a merge selector, which computes the next merge only depending on the current state of the factored transition system, not requiring any additional information.

      merge_stateless(merge_selector, verbosity=normal)\n
      • merge_selector (MergeSelector): The merge selector to be used.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note: Examples include the DFP merge strategy, which can be obtained using:

merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order(<order_option>)]))\n

      and the (dynamic/score-based) MIASM strategy, which can be obtained using:

merge_strategy=merge_stateless(merge_selector=score_based_filtering(scoring_functions=[sf_miasm(<shrinking_options>),total_order(<order_option>)]))\n
      "},{"location":"MergeTree/","title":"MergeTree","text":"

      This page describes the available merge trees that can be used to precompute a merge strategy, either for the entire task or a given subset of transition systems of a given factored transition system. Merge trees are typically used in the merge strategy of type 'precomputed', but they can also be used as fallback merge strategies in 'combined' merge strategies.

      "},{"location":"MergeTree/#linear_merge_trees","title":"Linear merge trees","text":"

      These merge trees implement several linear merge orders, which are described in the paper:

      • Malte Helmert, Patrik Haslum and Joerg Hoffmann. Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS 2007), pp. 176-183. AAAI Press, 2007.

        linear(random_seed=-1, update_option=use_random, variable_order=cg_goal_level)

      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

• update_option ({use_first, use_second, use_random}): When the merge tree is used within another merge strategy, how the tree should be updated when a merge different from the one prescribed by the tree is performed.
        • use_first: the node representing the index that would have been merged earlier survives
        • use_second: the node representing the index that would have been merged later survives
        • use_random: a random node (of the above two) survives
      • variable_order ({cg_goal_level, cg_goal_random, goal_cg_level, random, level, reverse_level}): the order in which atomic transition systems are merged
        • cg_goal_level: variables are prioritized first if they have an arc to a previously added variable, second if their goal value is defined and third according to their level in the causal graph
        • cg_goal_random: variables are prioritized first if they have an arc to a previously added variable, second if their goal value is defined and third randomly
        • goal_cg_level: variables are prioritized first if their goal value is defined, second if they have an arc to a previously added variable, and third according to their level in the causal graph
        • random: variables are ordered randomly
        • level: variables are ordered according to their level in the causal graph
        • reverse_level: variables are ordered reverse to their level in the causal graph
      "},{"location":"OpenList/","title":"OpenList","text":""},{"location":"OpenList/#alternation_open_list","title":"Alternation open list","text":"

Alternates between several open lists.

      alt(sublists, boost=0)\n
      • sublists (list of OpenList): open lists between which this one alternates
      • boost (int): boost value for contained open lists that are restricted to preferred successors
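Example (illustrative): a common dual-queue setup keeps one open list for all successors and one restricted to preferred successors, boosting the latter; ff() and eager() are assumed here as an evaluator and a search algorithm documented elsewhere in this reference:

let(h, ff(), eager(alt([single(h), single(h, pref_only=true)], boost=1000), preferred=[h]))\n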
      "},{"location":"OpenList/#epsilon-greedy_open_list","title":"Epsilon-greedy open list","text":"

      Chooses an entry uniformly randomly with probability 'epsilon', otherwise it returns the minimum entry. The algorithm is based on

      • Richard Valenzano, Nathan R. Sturtevant, Jonathan Schaeffer and Fan Xie. A Comparison of Knowledge-Based GBFS Enhancements and Knowledge-Free Exploration. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 375-379. AAAI Press, 2014.

        epsilon_greedy(eval, pref_only=false, epsilon=0.2, random_seed=-1)

      • eval (Evaluator): evaluator

      • pref_only (bool): insert only nodes generated by preferred operators
      • epsilon (double [0.0, 1.0]): probability for choosing the next entry randomly
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"OpenList/#pareto_open_list","title":"Pareto open list","text":"

      Selects one of the Pareto-optimal (regarding the sub-evaluators) entries for removal.

      pareto(evals, pref_only=false, state_uniform_selection=false, random_seed=-1)\n
      • evals (list of Evaluator): evaluators
      • pref_only (bool): insert only nodes generated by preferred operators
      • state_uniform_selection (bool): When removing an entry, we select a non-dominated bucket and return its oldest entry. If this option is false, we select uniformly from the non-dominated buckets; if the option is true, we weight the buckets with the number of entries.
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"OpenList/#best-first_open_list","title":"Best-first open list","text":"

      Open list that uses a single evaluator and FIFO tiebreaking.

      single(eval, pref_only=false)\n
      • eval (Evaluator): evaluator
      • pref_only (bool): insert only nodes generated by preferred operators

      Implementation Notes: Elements with the same evaluator value are stored in double-ended queues, called \"buckets\". The open list stores a map from evaluator values to buckets. Pushing and popping from a bucket runs in constant time. Therefore, inserting and removing an entry from the open list takes time O(log(n)), where n is the number of buckets.

      "},{"location":"OpenList/#tie-breaking_open_list","title":"Tie-breaking open list","text":"
      tiebreaking(evals, pref_only=false, unsafe_pruning=true)\n
      • evals (list of Evaluator): evaluators
      • pref_only (bool): insert only nodes generated by preferred operators
      • unsafe_pruning (bool): allow unsafe pruning when the main evaluator regards a state a dead end
      "},{"location":"OpenList/#type-based_open_list","title":"Type-based open list","text":"

      Uses multiple evaluators to assign entries to buckets. All entries in a bucket have the same evaluator values. When retrieving an entry, a bucket is chosen uniformly at random and one of the contained entries is selected uniformly randomly. The algorithm is based on

• Fan Xie, Martin Mueller, Robert Holte and Tatsuya Imai. Type-Based Exploration with Multiple Search Queues for Satisficing Planning. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2395-2401. AAAI Press, 2014.

        type_based(evaluators, random_seed=-1)

      • evaluators (list of Evaluator): Evaluators used to determine the bucket for each entry.

      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
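Example (illustrative): following the cited paper, a type-based open list is typically alternated with a standard best-first open list; ff() and g() are assumed here as evaluators documented in the Evaluator section:

eager(alt([single(ff()), type_based([ff(), g()])]))\n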
      "},{"location":"OrderGenerator/","title":"OrderGenerator","text":"

      Order abstractions for saturated cost partitioning.

      "},{"location":"OrderGenerator/#dynamic_greedy_orders","title":"Dynamic greedy orders","text":"

      Order abstractions greedily by a given scoring function, dynamically recomputing the next best abstraction after each ordering step.

      dynamic_greedy_orders(scoring_function=max_heuristic_per_stolen_costs, random_seed=-1)\n
      • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
        • max_heuristic: order by decreasing heuristic value for the given state
        • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
        • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"OrderGenerator/#cost_partitioning_heuristics","title":"Cost Partitioning Heuristics","text":""},{"location":"OrderGenerator/#greedy_orders","title":"Greedy orders","text":"

      Order abstractions greedily by a given scoring function.

      greedy_orders(scoring_function=max_heuristic_per_stolen_costs, random_seed=-1)\n
      • scoring_function ({max_heuristic, min_stolen_costs, max_heuristic_per_stolen_costs}): metric for ordering abstractions/landmarks
        • max_heuristic: order by decreasing heuristic value for the given state
        • min_stolen_costs: order by increasing sum of costs stolen from other heuristics
        • max_heuristic_per_stolen_costs: order by decreasing ratio of heuristic value divided by sum of stolen costs
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"OrderGenerator/#random_orders","title":"Random orders","text":"

      Shuffle abstractions randomly.

      random_orders(random_seed=-1)\n
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"PatternCollectionGenerator/","title":"PatternCollectionGenerator","text":"

      Factory for pattern collections

      "},{"location":"PatternCollectionGenerator/#combo","title":"combo","text":"
      combo(max_states=1000000, verbosity=normal)\n
      • max_states (int [1, infinity]): maximum abstraction size for combo strategy
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"PatternCollectionGenerator/#disjoint_cegar","title":"Disjoint CEGAR","text":"

This pattern collection generator uses the CEGAR algorithm to compute a disjoint pattern collection for the planning task. See below for a description of the algorithm and some implementation notes. The original algorithm (called single CEGAR) is described in the paper

      • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

        disjoint_cegar(max_pdb_size=1000000, max_collection_size=10000000, max_time=infinity, use_wildcard_plans=true, verbosity=normal, random_seed=-1)

      • max_pdb_size (int [1, infinity]): maximum number of states per pattern database (ignored for the initial collection consisting of a singleton pattern for each goal variable)

      • max_collection_size (int [1, infinity]): maximum number of states in the pattern collection (ignored for the initial collection consisting of a singleton pattern for each goal variable)
      • max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator (ignored for computing the initial collection consisting of a singleton pattern for each goal variable)
      • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
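
      Example (sketch): the resulting collection can be plugged into a pattern database evaluator, for instance the canonical PDBs evaluator (assumed to be available as cpdbs; it is not documented on this page):

      --search astar(cpdbs(patterns=disjoint_cegar(max_time=100)))\n
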
      "},{"location":"PatternCollectionGenerator/#short_description_of_the_cegar_algorithm","title":"Short description of the CEGAR algorithm","text":"

      The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

      "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_cegar_algorithm","title":"Implementation notes about the CEGAR algorithm","text":"

      The following describes differences of the implementation to the original implementation used and described in the paper.

      Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

      In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

      Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

      Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

      "},{"location":"PatternCollectionGenerator/#genetic_algorithm_patterns","title":"Genetic Algorithm Patterns","text":"

      The following paper describes the automated creation of pattern databases with a genetic algorithm. Pattern collections are initially created with a bin-packing algorithm. The genetic algorithm is used to optimize the pattern collections with an objective function that estimates the mean heuristic value of the pattern collections. Pattern collections with higher mean heuristic estimates are more likely to be selected for the next generation.

      • Stefan Edelkamp. Automated Creation of Pattern Database Search Heuristics. In Proceedings of the 4th Workshop on Model Checking and Artificial Intelligence (MoChArt 2006), pp. 35-50. AAAI Press, 2007.

        genetic(pdb_max_size=50000, num_collections=5, num_episodes=30, mutation_probability=0.01, disjoint=false, random_seed=-1, verbosity=normal)

      • pdb_max_size (int [1, infinity]): maximal number of states per pattern database

      • num_collections (int [1, infinity]): number of pattern collections to maintain in the genetic algorithm (population size)
      • num_episodes (int [0, infinity]): number of episodes for the genetic algorithm
      • mutation_probability (double [0.0, 1.0]): probability for flipping a bit in the genetic algorithm
      • disjoint (bool): consider a pattern collection invalid (giving it very low fitness) if its patterns are not disjoint
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note: This pattern generation method uses the zero/one pattern database heuristic.
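
      Example (sketch): since this method targets the zero/one pattern database heuristic, a configuration could look as follows (the zopdbs evaluator is assumed to be available; it is not documented on this page):

      --search astar(zopdbs(patterns=genetic(num_episodes=30)))\n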

      "},{"location":"PatternCollectionGenerator/#implementation_notes","title":"Implementation Notes","text":"

      The standard genetic algorithm procedure as described in the paper is implemented in Fast Downward. The implementation is close to the paper.

      • Initialization: In Fast Downward, bin-packing with the next-fit strategy is used. A bin corresponds to a pattern which contains variables up to pdb_max_size. With this method, each variable occurs in exactly one pattern of a collection. num_collections collections are created.
      • Mutation: With probability mutation_probability, a bit is flipped, meaning that a variable is either added to or deleted from a pattern.
      • Recombination: Recombination is not implemented in Fast Downward. In the paper, recombination is described but not used.
      • Evaluation: For each pattern collection, the mean heuristic value is computed. For a single pattern database, the mean heuristic value is the sum of all pattern database entries divided by the number of entries. Entries with infinite heuristic values are ignored in this calculation. The sum of these individual mean heuristic values yields the mean heuristic value of the collection.
      • Selection: The higher the mean heuristic value of a pattern collection is, the more likely this pattern collection is to be selected for the next generation. Therefore, the mean heuristic values are normalized and converted into probabilities, and Roulette Wheel Selection is used.

      Supported language features:

      • action costs: supported
      • conditional effects: not supported
      • axioms: not supported
      "},{"location":"PatternCollectionGenerator/#hill_climbing","title":"Hill climbing","text":"

      This algorithm uses hill climbing to generate patterns optimized for the Canonical PDB heuristic. It is described in the following paper:

      • Patrik Haslum, Adi Botea, Malte Helmert, Blai Bonet and Sven Koenig. Domain-Independent Construction of Pattern Database Heuristics for Cost-Optimal Planning. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI 2007), pp. 1007-1012. AAAI Press, 2007.

      For implementation notes, see:

      • Silvan Sievers, Manuela Ortlieb and Malte Helmert. Efficient Implementation of Pattern Database Heuristics for Classical Planning. In Proceedings of the Fifth Annual Symposium on Combinatorial Search (SoCS 2012), pp. 105-111. AAAI Press, 2012.

        hillclimbing(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, max_generated_patterns=infinity, random_seed=-1, verbosity=normal)

      • pdb_max_size (int [1, infinity]): maximal number of states per pattern database

      • collection_max_size (int [1, infinity]): maximal number of states in the pattern collection
      • num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection
      • min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection
      • max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.
      • max_generated_patterns (int [0, infinity]): maximum number of generated patterns
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

      Note: This pattern generation method generates patterns optimized for use with the canonical pattern database heuristic.
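
      Example (sketch): since the collection is optimized for the canonical pattern database heuristic, a configuration could look as follows (the cpdbs evaluator is assumed to be available; it is not documented on this page):

      --search astar(cpdbs(patterns=hillclimbing(max_time=60)))\n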

      "},{"location":"PatternCollectionGenerator/#implementation_notes_1","title":"Implementation Notes","text":"

      The following will very briefly describe the algorithm and explain the differences between the original implementation from 2007 and the new one in Fast Downward.

      The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

      The algorithm is basically a local search (hill climbing) which searches the \"pattern neighbourhood\" (starting initially with a pattern for each goal variable) for ways to improve the pattern collection. This is done as described in the section \"pattern construction as search\" in the paper, except for the corrected search neighbourhood discussed below. For evaluating the neighbourhood, the \"counting approximation\" as introduced in the paper was implemented. An important difference, however, is that this implementation computes all pattern databases for each candidate pattern rather than using A* search to compute the heuristic values only for the sample states for each pattern.

      The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but its length is based on an estimate of the number of solution steps, calculated by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to take non-unit-cost problems into account. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

      The section \"avoiding redundant evaluations\" describes how the search neighbourhood of patterns can be restricted to variables that are relevant to the variables already included in the pattern by analyzing causal graphs. There is a mistake in the paper that leads to some relevant neighbouring patterns being ignored. See the errata for details. This mistake has been addressed in this implementation. The second approach described in the paper (statistical confidence interval) is not applicable to this implementation, as it doesn't use A* search but constructs the entire pattern databases for all candidate patterns anyway. The search ends if there is no more improvement (or the improvement is smaller than the minimal improvement, which can be set as an option); however, there is no limit on the number of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

      "},{"location":"PatternCollectionGenerator/#manual_patterns","title":"manual_patterns","text":"
      manual_patterns(patterns, verbosity=normal)\n
      • patterns (list of list of int): list of patterns (which are lists of variable numbers of the planning task).
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
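
      Example (sketch): the variable numbers below are task-specific placeholders; cpdbs is assumed to be an available evaluator accepting a pattern collection (it is not documented on this page):

      --search astar(cpdbs(patterns=manual_patterns([[0, 1], [2, 3]])))\n
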
      "},{"location":"PatternCollectionGenerator/#multiple_cegar","title":"Multiple CEGAR","text":"

      This pattern collection generator implements the multiple CEGAR algorithm described in the paper

      • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

      It is an instantiation of the 'multiple algorithm framework'. To compute a pattern in each iteration, it uses the CEGAR algorithm restricted to a single goal variable. See below for descriptions of the algorithms.

      multiple_cegar(max_pdb_size=1M, max_collection_size=10M, pattern_generation_max_time=infinity, total_max_time=100.0, stagnation_limit=20.0, blacklist_trigger_percentage=0.75, enable_blacklist_on_stagnation=true, verbosity=normal, random_seed=-1, use_wildcard_plans=true)\n
      • max_pdb_size (int [1, infinity]): maximum number of states for each pattern database, computed by compute_pattern (possibly ignored by singleton patterns consisting of a goal variable)
      • max_collection_size (int [1, infinity]): maximum number of states in all pattern databases of the collection (possibly ignored, see max_pdb_size)
      • pattern_generation_max_time (double [0.0, infinity]): maximum time in seconds for each call to the algorithm for computing a single pattern
      • total_max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator. It will always execute at least one iteration, i.e., call the algorithm for computing a single pattern at least once.
      • stagnation_limit (double [1.0, infinity]): maximum time in seconds this pattern generator is allowed to run without generating a new pattern. It terminates prematurely if this limit is hit unless enable_blacklist_on_stagnation is enabled.
      • blacklist_trigger_percentage (double [0.0, 1.0]): percentage of total_max_time after which blacklisting is enabled
      • enable_blacklist_on_stagnation (bool): if true, blacklisting is enabled when stagnation_limit is hit for the first time (unless it was already enabled due to blacklist_trigger_percentage) and pattern generation is terminated when stagnation_limit is hit for the second time. If false, pattern generation is terminated the first time stagnation_limit is hit.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
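
      Example (sketch): with the documented defaults, the generator can be used like any other pattern collection generator, e.g. (assuming the cpdbs evaluator is available; it is not documented on this page):

      --search astar(cpdbs(patterns=multiple_cegar()))\n
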
      "},{"location":"PatternCollectionGenerator/#short_description_of_the_cegar_algorithm_1","title":"Short description of the CEGAR algorithm","text":"

      The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

      "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_cegar_algorithm_1","title":"Implementation notes about the CEGAR algorithm","text":"

      The following describes differences of the implementation to the original implementation used and described in the paper.

      Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

      In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

      Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

      Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

      "},{"location":"PatternCollectionGenerator/#short_description_of_the_multiple_algorithm_framework","title":"Short description of the 'multiple algorithm framework'","text":"

      This algorithm is a general framework for computing a pattern collection for a given planning task. It requires as input a method for computing a single pattern for the given task and a single goal of the task. The algorithm works as follows. It first stores the goals of the task in random order. Then, it repeatedly iterates over all goals and for each goal, it uses the given method for computing a single pattern. If the pattern is new (duplicate detection), it is kept for the final collection. The algorithm runs until reaching a given time limit. Another parameter allows exiting early if no new patterns are found for a certain time ('stagnation'). Further parameters allow enabling blacklisting for the given pattern computation method after a certain time to force some diversification or to enable said blacklisting when stagnating.

      "},{"location":"PatternCollectionGenerator/#implementation_note_about_the_multiple_algorithm_framework","title":"Implementation note about the 'multiple algorithm framework'","text":"

      A difference compared to the original implementation used in the paper is that the original implementation of stagnation in the multiple CEGAR/RCG algorithms started counting the time towards stagnation only after having generated a duplicate pattern. Now, time towards stagnation starts counting from the start and is reset to the current time only when having found a new pattern or when enabling blacklisting.

      "},{"location":"PatternCollectionGenerator/#multiple_random_patterns","title":"Multiple Random Patterns","text":"

      This pattern collection generator implements the 'multiple randomized causal graph' (mRCG) algorithm described in experiments of the paper

      • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

      It is an instantiation of the 'multiple algorithm framework'. To compute a pattern in each iteration, it uses the random pattern algorithm, called 'single randomized causal graph' (sRCG) in the paper. See below for descriptions of the algorithms.

      random_patterns(max_pdb_size=1M, max_collection_size=10M, pattern_generation_max_time=infinity, total_max_time=100.0, stagnation_limit=20.0, blacklist_trigger_percentage=0.75, enable_blacklist_on_stagnation=true, verbosity=normal, random_seed=-1, bidirectional=true)\n
      • max_pdb_size (int [1, infinity]): maximum number of states for each pattern database, computed by compute_pattern (possibly ignored by singleton patterns consisting of a goal variable)
      • max_collection_size (int [1, infinity]): maximum number of states in all pattern databases of the collection (possibly ignored, see max_pdb_size)
      • pattern_generation_max_time (double [0.0, infinity]): maximum time in seconds for each call to the algorithm for computing a single pattern
      • total_max_time (double [0.0, infinity]): maximum time in seconds for this pattern collection generator. It will always execute at least one iteration, i.e., call the algorithm for computing a single pattern at least once.
      • stagnation_limit (double [1.0, infinity]): maximum time in seconds this pattern generator is allowed to run without generating a new pattern. It terminates prematurely if this limit is hit unless enable_blacklist_on_stagnation is enabled.
      • blacklist_trigger_percentage (double [0.0, 1.0]): percentage of total_max_time after which blacklisting is enabled
      • enable_blacklist_on_stagnation (bool): if true, blacklisting is enabled when stagnation_limit is hit for the first time (unless it was already enabled due to blacklist_trigger_percentage) and pattern generation is terminated when stagnation_limit is hit for the second time. If false, pattern generation is terminated the first time stagnation_limit is hit.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • bidirectional (bool): this option decides whether the causal graph is considered to be directed or undirected when selecting predecessors of already selected variables. If true (default), it is considered to be undirected (precondition-effect edges are bidirectional). If false, it is considered to be directed (a variable is a neighbor only if it is a predecessor).
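
      Example (sketch): as with the other pattern collection generators, e.g. (assuming the cpdbs evaluator is available; it is not documented on this page):

      --search astar(cpdbs(patterns=random_patterns(total_max_time=60)))\n
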
      "},{"location":"PatternCollectionGenerator/#short_description_of_the_random_pattern_algorithm","title":"Short description of the random pattern algorithm","text":"

      The random pattern algorithm computes a pattern for a given planning task and a single goal of the task as follows. Starting with the given goal variable, the algorithm executes a random walk on the causal graph. In each iteration, it selects a random causal graph neighbor of the current variable. It terminates if no neighbor fits the pattern due to the size limit or if the time limit is reached.

      "},{"location":"PatternCollectionGenerator/#implementation_notes_about_the_random_pattern_algorithm","title":"Implementation notes about the random pattern algorithm","text":"

      In the original implementation used in the paper, the algorithm selected a random neighbor and then checked if selecting it would violate the PDB size limit. If so, the algorithm would not select it and terminate. In the current implementation, the algorithm instead loops over all neighbors of the current variable in random order and selects the first one not violating the PDB size limit. If no such neighbor exists, the algorithm terminates.

      "},{"location":"PatternCollectionGenerator/#short_description_of_the_multiple_algorithm_framework_1","title":"Short description of the 'multiple algorithm framework'","text":"

      This algorithm is a general framework for computing a pattern collection for a given planning task. It requires as input a method for computing a single pattern for the given task and a single goal of the task. The algorithm works as follows. It first stores the goals of the task in random order. Then, it repeatedly iterates over all goals and for each goal, it uses the given method for computing a single pattern. If the pattern is new (duplicate detection), it is kept for the final collection. The algorithm runs until reaching a given time limit. Another parameter allows exiting early if no new patterns are found for a certain time ('stagnation'). Further parameters allow enabling blacklisting for the given pattern computation method after a certain time to force some diversification or to enable said blacklisting when stagnating.

      "},{"location":"PatternCollectionGenerator/#implementation_note_about_the_multiple_algorithm_framework_1","title":"Implementation note about the 'multiple algorithm framework'","text":"

      A difference compared to the original implementation used in the paper is that the original implementation of stagnation in the multiple CEGAR/RCG algorithms started counting the time towards stagnation only after having generated a duplicate pattern. Now, time towards stagnation starts counting from the start and is reset to the current time only when having found a new pattern or when enabling blacklisting.

      "},{"location":"PatternCollectionGenerator/#sys-scp_patterns","title":"Sys-SCP patterns","text":"

      Systematically generate larger (interesting) patterns but only keep a pattern if it's useful under a saturated cost partitioning. For details, see

      • Jendrik Seipp. Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5621-5627. IJCAI, 2019.

        sys_scp(max_pattern_size=infinity, max_pdb_size=2M, max_collection_size=20M, max_patterns=infinity, max_time=100, max_time_per_restart=10, max_evaluations_per_restart=infinity, max_total_evaluations=infinity, saturate=true, create_complete_transition_system=false, pattern_type=interesting_non_negative, ignore_useless_patterns=false, store_dead_ends=true, order=cg_down, random_seed=-1, verbosity=normal)

      • max_pattern_size (int [1, infinity]): maximum number of variables per pattern

      • max_pdb_size (int [1, infinity]): maximum number of states in a PDB
      • max_collection_size (int [1, infinity]): maximum number of states in the pattern collection
      • max_patterns (int [1, infinity]): maximum number of patterns
      • max_time (double [0.0, infinity]): maximum time in seconds for generating patterns
      • max_time_per_restart (double [0.0, infinity]): maximum time in seconds for each restart
      • max_evaluations_per_restart (int [0, infinity]): maximum number of pattern evaluations in the inner loop
      • max_total_evaluations (int [0, infinity]): maximum total pattern evaluations
      • saturate (bool): only select patterns useful in saturated cost partitionings
      • create_complete_transition_system (bool): create explicit transition system (necessary for tasks with conditional effects)
      • pattern_type ({naive, interesting_general, interesting_non_negative}): type of patterns
        • naive: all patterns up to the given size
        • interesting_general: only consider the union of two disjoint patterns if the union has more information than the individual patterns under a general cost partitioning
        • interesting_non_negative: like interesting_general, but considering non-negative cost partitioning
      • ignore_useless_patterns (bool): ignore patterns that induce no transitions with positive finite cost
      • store_dead_ends (bool): store dead ends in dead end tree (used to prune the search later)
      • order ({random, states_up, states_down, ops_up, ops_down, cg_up, cg_down}): order in which to consider patterns of the same size (based on states in projection, active operators or position of the pattern variables in the partial ordering of the causal graph)
        • random: order randomly
        • states_up: order by increasing number of abstract states
        • states_down: order by decreasing number of abstract states
        • ops_up: order by increasing number of active operators
        • ops_down: order by decreasing number of active operators
        • cg_up: use lexicographical order
        • cg_down: use reverse lexicographical order
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
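
      Example (sketch): the collection can be passed to any evaluator that accepts a PatternCollectionGenerator, e.g. the canonical PDBs evaluator (assumed to be available as cpdbs; it is not documented on this page):

      --search astar(cpdbs(patterns=sys_scp(max_time=100)))\n
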
      "},{"location":"PatternCollectionGenerator/#systematically_generated_patterns","title":"Systematically generated patterns","text":"

      Generates all (interesting) patterns with up to pattern_max_size variables. For details, see

      • Florian Pommerening, Gabriele Roeger and Malte Helmert. Getting the Most Out of Pattern Databases for Classical Planning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2357-2364. AAAI Press, 2013.

      The pattern_type=interesting_general setting was introduced in

      • Florian Pommerening, Thomas Keller, Valentina Halasi, Jendrik Seipp, Silvan Sievers and Malte Helmert. Dantzig-Wolfe Decomposition for Cost Partitioning. In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 271-280. AAAI Press, 2021.

        systematic(pattern_max_size=1, pattern_type=interesting_non_negative, verbosity=normal)

      • pattern_max_size (int [1, infinity]): max number of variables per pattern

      • pattern_type ({naive, interesting_general, interesting_non_negative}): type of patterns
        • naive: all patterns up to the given size
        • interesting_general: only consider the union of two disjoint patterns if the union has more information than the individual patterns under a general cost partitioning
        • interesting_non_negative: like interesting_general, but considering non-negative cost partitioning
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
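
      Example (sketch): generate all interesting patterns with up to two variables and use them in a canonical PDBs evaluator (assumed to be available as cpdbs; it is not documented on this page):

      --search astar(cpdbs(patterns=systematic(pattern_max_size=2)))\n
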
      "},{"location":"PatternGenerator/","title":"PatternGenerator","text":"

      Factory for single patterns

      "},{"location":"PatternGenerator/#cegar","title":"CEGAR","text":"

      This pattern generator uses the CEGAR algorithm restricted to a random single goal of the task to compute a pattern. See below for a description of the algorithm and some implementation notes. The original algorithm (called single CEGAR) is described in the paper

      • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

        cegar_pattern(max_pdb_size=1000000, max_time=infinity, use_wildcard_plans=true, verbosity=normal, random_seed=-1)

      • max_pdb_size (int [1, infinity]): maximum number of states in the final pattern database (possibly ignored by a singleton pattern consisting of a single goal variable)

      • max_time (double [0.0, infinity]): maximum time in seconds for the pattern generation
      • use_wildcard_plans (bool): if true, compute wildcard plans which are sequences of sets of operators that induce the same transition; otherwise compute regular plans which are sequences of single operators
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
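
      Example (sketch): a single-pattern generator is typically passed to a single PDB evaluator (assumed to be available as pdb with a pattern parameter; it is not documented on this page):

      --search astar(pdb(pattern=cegar_pattern(max_time=100)))\n
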
      "},{"location":"PatternGenerator/#short_description_of_the_cegar_algorithm","title":"Short description of the CEGAR algorithm","text":"

      The CEGAR algorithm computes a pattern collection for a given planning task and a given (sub)set of its goals in a randomized order as follows. Starting from the pattern collection consisting of a singleton pattern for each goal variable, it repeatedly attempts to execute an optimal plan of each pattern in the concrete task, collects reasons why this is not possible (so-called flaws) and refines the pattern in question by adding a variable to it. Further parameters allow blacklisting a (sub)set of the non-goal variables which are then never added to the collection, limiting PDB and collection size, setting a time limit and switching between computing regular or wildcard plans, where the latter are sequences of parallel operators inducing the same abstract transition.

      "},{"location":"PatternGenerator/#implementation_notes_about_the_cegar_algorithm","title":"Implementation notes about the CEGAR algorithm","text":"

      The following describes differences of the implementation to the original implementation used and described in the paper.

      Conceptually, there is one larger difference which concerns the computation of (regular or wildcard) plans for PDBs. The original implementation used an enforced hill-climbing (EHC) search with the PDB as the perfect heuristic, which ensured finding strongly optimal plans, i.e., optimal plans with a minimum number of zero-cost operators, in domains with zero-cost operators. The original implementation also slightly modified EHC to search for a best-improving successor, chosen uniformly at random among all best-improving successors.

      In contrast, the current implementation computes a plan alongside the computation of the PDB itself. A modification to Dijkstra's algorithm for computing the PDB values stores, for each state, the operator leading to that state (in a regression search). This generating operator is updated only if the algorithm found a cheaper path to the state. After Dijkstra finishes, the plan computation starts at the initial state and iteratively follows the generating operator, computes all operators of the same cost inducing the same transition, until reaching a goal. This constitutes a wildcard plan. It is turned into a regular one by randomly picking a single operator for each transition.

      Note that this kind of plan extraction does not consider all successors of a state uniformly at random but rather uses the previously deterministically chosen generating operator to settle on one successor state, which is biased by the number of operators leading to the same successor from the given state. Further note that in the presence of zero-cost operators, this procedure does not guarantee that the computed plan is strongly optimal because it does not minimize the number of used zero-cost operators leading to the state when choosing a generating operator. Experiments have shown (issue1007) that this speeds up the computation significantly while not having a strongly negative effect on heuristic quality due to potentially computing worse plans.

      Two further changes fix bugs of the original implementation to match the description in the paper. The first bug fix is to raise a flaw for all goal variables of the task if the plan for a PDB can be executed on the concrete task but does not lead to a goal state. Previously, such flaws would not have been raised because all goal variables are part of the collection from the start on and therefore not considered. This means that the original implementation accidentally disallowed merging patterns due to goal violation flaws. The second bug fix is to actually randomize the order of parallel operators in wildcard plan steps.

      "},{"location":"PatternGenerator/#greedy","title":"greedy","text":"
      greedy(max_states=1000000, verbosity=normal)\n
      • max_states (int [1, infinity]): maximal number of abstract states in the pattern database.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"PatternGenerator/#manual_pattern","title":"manual_pattern","text":"
      manual_pattern(pattern, verbosity=normal)\n
      • pattern (list of int): list of variable numbers of the planning task that should be used as pattern.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
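
      Example (sketch): the variable numbers below are task-specific placeholders; pdb is assumed to be an available evaluator accepting a single pattern (it is not documented on this page):

      --search astar(pdb(pattern=manual_pattern([0, 1])))\n
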
      "},{"location":"PatternGenerator/#random_pattern","title":"Random Pattern","text":"

      This pattern generator implements the 'single randomized causal graph' algorithm described in the experiments of the paper

      • Alexander Rovner, Silvan Sievers and Malte Helmert. Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning. In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. AAAI Press, 2019.

      See below for a description of the algorithm and some implementation notes.

      random_pattern(max_pdb_size=1000000, max_time=infinity, bidirectional=true, verbosity=normal, random_seed=-1)\n
      • max_pdb_size (int [1, infinity]): maximum number of states in the final pattern database (possibly ignored by a singleton pattern consisting of a single goal variable)
      • max_time (double [0.0, infinity]): maximum time in seconds for the pattern generation
      • bidirectional (bool): this option decides whether the causal graph is considered to be directed or undirected when selecting predecessors of already selected variables. If true (default), it is considered to be undirected (precondition-effect edges are bidirectional). If false, it is considered to be directed (a variable is a neighbor only if it is a predecessor).
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"PatternGenerator/#short_description_of_the_random_pattern_algorithm","title":"Short description of the random pattern algorithm","text":"

      The random pattern algorithm computes a pattern for a given planning task and a single goal of the task as follows. Starting with the given goal variable, the algorithm executes a random walk on the causal graph. In each iteration, it selects a random causal graph neighbor of the current variable. It terminates if no neighbor fits the pattern due to the size limit or if the time limit is reached.

      "},{"location":"PatternGenerator/#implementation_notes_about_the_random_pattern_algorithm","title":"Implementation notes about the random pattern algorithm","text":"

      In the original implementation used in the paper, the algorithm selected a random neighbor and then checked if selecting it would violate the PDB size limit. If so, the algorithm would not select it and terminate. In the current implementation, the algorithm instead loops over all neighbors of the current variable in random order and selects the first one not violating the PDB size limit. If no such neighbor exists, the algorithm terminates.

      "},{"location":"PruningMethod/","title":"PruningMethod","text":"

      Prune or reorder applicable operators.

      "},{"location":"PruningMethod/#atom-centric_stubborn_sets","title":"Atom-centric stubborn sets","text":"

      Stubborn sets are a state pruning method which computes a subset of applicable actions in each state such that completeness and optimality of the overall search is preserved. Previous stubborn set implementations mainly track information about actions. In contrast, this implementation focuses on atomic propositions (atoms), which often speeds up the computation on IPC benchmarks. For details, see

      • Gabriele Roeger, Malte Helmert, Jendrik Seipp and Silvan Sievers. An Atom-Centric Perspective on Stubborn Sets. In Proceedings of the 13th Annual Symposium on Combinatorial Search (SoCS 2020), pp. 57-65. AAAI Press, 2020.

        atom_centric_stubborn_sets(use_sibling_shortcut=true, atom_selection_strategy=quick_skip, verbosity=normal)

      • use_sibling_shortcut (bool): use variable-based marking in addition to atom-based marking

      • atom_selection_strategy ({fast_downward, quick_skip, static_small, dynamic_small}): Strategy for selecting unsatisfied atoms from action preconditions or the goal atoms. All strategies use the fast_downward strategy for breaking ties.
        • fast_downward: select the atom (v, d) with the variable v that comes first in the Fast Downward variable ordering (which is based on the causal graph)
        • quick_skip: if possible, select an unsatisfied atom whose producers are already marked
        • static_small: select the atom achieved by the fewest actions
        • dynamic_small: select the atom achieved by the fewest actions that are not yet part of the stubborn set
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.
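
      Example (sketch): to combine A* with atom-centric stubborn set pruning (the lmcut evaluator is only an illustrative choice and is not documented on this page):

      --search astar(lmcut(), pruning=atom_centric_stubborn_sets())\n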

      "},{"location":"PruningMethod/#limited_pruning","title":"Limited pruning","text":"

      Limited pruning applies another pruning method and switches it off after a fixed number of expansions if the pruning ratio falls below a given value. The pruning ratio is the total number of pruned operators divided by the total number of operators before pruning, taken over all previous expansions.

      limited_pruning(pruning, min_required_pruning_ratio=0.2, expansions_before_checking_pruning_ratio=1000, verbosity=normal)\n
      • pruning (PruningMethod): the underlying pruning method to be applied
      • min_required_pruning_ratio (double [0.0, 1.0]): disable pruning if the pruning ratio is lower than this value after 'expansions_before_checking_pruning_ratio' expansions
      • expansions_before_checking_pruning_ratio (int [0, infinity]): number of expansions before deciding whether to disable pruning
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

      Example: To use atom centric stubborn sets and limit them, use

      pruning=limited_pruning(pruning=atom_centric_stubborn_sets(),min_required_pruning_ratio=0.2,expansions_before_checking_pruning_ratio=1000)\n

      in an eager search such as astar.
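
      A complete invocation could look as follows (the blind evaluator is only an illustrative choice and is not documented on this page):

      --search astar(blind(), pruning=limited_pruning(pruning=atom_centric_stubborn_sets(), min_required_pruning_ratio=0.2))\n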

      "},{"location":"PruningMethod/#no_pruning","title":"No pruning","text":"

      This is a skeleton method that does not perform any pruning, i.e., all applicable operators are applied in all expanded states.

      null(verbosity=normal)\n
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

      "},{"location":"PruningMethod/#stubbornsetsec","title":"StubbornSetsEC","text":"

      Stubborn sets represent a state pruning method which computes a subset of applicable operators in each state such that completeness and optimality of the overall search is preserved. As stubborn sets rely on several design choices, there are different variants thereof. The variant 'StubbornSetsEC' resolves the design choices such that the resulting pruning method is guaranteed to strictly dominate the Expansion Core pruning method. For details, see

      • Martin Wehrle, Malte Helmert, Yusra Alkhazraji and Robert Mattmueller. The Relative Pruning Power of Strong Stubborn Sets and Expansion Core. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 251-259. AAAI Press, 2013.

        stubborn_sets_ec(verbosity=normal)

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

      "},{"location":"PruningMethod/#stubborn_sets_simple","title":"Stubborn sets simple","text":"

      Stubborn sets represent a state pruning method which computes a subset of applicable operators in each state such that completeness and optimality of the overall search is preserved. As stubborn sets rely on several design choices, there are different variants thereof. This stubborn set variant resolves the design choices in a straightforward way. For details, see the following papers:

      • Yusra Alkhazraji, Martin Wehrle, Robert Mattmueller and Malte Helmert. A Stubborn Set Algorithm for Optimal Planning. In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), pp. 891-892. IOS Press, 2012.

      • Martin Wehrle and Malte Helmert. Efficient Stubborn Sets: Generalized Algorithms and Selection Strategies. In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 323-331. AAAI Press, 2014.

        stubborn_sets_simple(verbosity=normal)

      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.

        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note on verbosity parameter: Setting verbosity to verbose or higher enables time measurements in each call to prune_operators for a given state. This induces a significant overhead, up to 30% in configurations like blind search with the no pruning method (null). We recommend using at most normal verbosity for running experiments.

      "},{"location":"SearchAlgorithm/","title":"SearchAlgorithm","text":""},{"location":"SearchAlgorithm/#a_search_eager","title":"A* search (eager)","text":"

      A* is a special case of eager best-first search that uses g+h as its f-function. We break ties using the evaluator. Closed nodes are re-opened.

      astar(eval, lazy_evaluator=<none>, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • eval (Evaluator): evaluator for h-value
      • lazy_evaluator (Evaluator): An evaluator that re-evaluates a state before it is expanded.
      • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      lazy_evaluator: When a state s is taken out of the open list, the lazy evaluator h re-evaluates s. If h(s) changes (for example because h is path-dependent), s is not expanded, but instead reinserted into the open list. This option is currently only present for the A* algorithm.

      "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_eager_search","title":"Equivalent statements using general eager search","text":"
      --search astar(evaluator)\n

      is equivalent to

      --evaluator h=evaluator\n--search eager(tiebreaking([sum([g(), h]), h], unsafe_pruning=false),\n               reopen_closed=true, f_eval=sum([g(), h]))\n
      "},{"location":"SearchAlgorithm/#breadth-first_search","title":"Breadth-first search","text":"

      Breadth-first graph search.

      brfs(single_plan=true, write_plan=true, pruning=null(), verbosity=normal)\n
      • single_plan (bool): Stop search after finding the first (shortest) plan.
      • write_plan (bool): Store the necessary information during search for writing plans once they're found.
      • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
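
      Example: a minimal invocation using only the documented defaults:

      --search brfs()\n
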
      "},{"location":"SearchAlgorithm/#depth-first_search","title":"Depth-first search","text":"

      This is a depth-first tree search that avoids running into cycles by skipping any state s that has already been visited earlier on the path to s. This makes the search complete.

      dfs(single_plan=false, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • single_plan (bool): stop after finding the first plan
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"SearchAlgorithm/#exhaustive_search","title":"Exhaustive search","text":"

      Dump the reachable state space.

      dump_reachable_search_space()\n
      "},{"location":"SearchAlgorithm/#eager_best-first_search","title":"Eager best-first search","text":"
      eager(open, reopen_closed=false, f_eval=<none>, preferred=[], pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • open (OpenList): open list
      • reopen_closed (bool): reopen closed nodes
      • f_eval (Evaluator): set evaluator for jump statistics. (Optional; if no evaluator is used, jump statistics will not be displayed.)
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"SearchAlgorithm/#greedy_search_eager","title":"Greedy search (eager)","text":"
      eager_greedy(evals, preferred=[], boost=0, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • evals (list of Evaluator): evaluators
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • boost (int): boost value for preferred operator open lists
      • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Open list: In most cases, eager greedy best first search uses an alternation open list with one queue for each evaluator. If preferred operator evaluators are used, it adds an extra queue for each of these evaluators that includes only the nodes that are generated with a preferred operator. If only one evaluator and no preferred operator evaluator is used, the search does not use an alternation open list but a standard open list with only one queue.

Closed nodes: Closed nodes are not re-opened.

      "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_eager_search_1","title":"Equivalent statements using general eager search","text":"
      --evaluator h2=eval2\n--search eager_greedy([eval1, h2], preferred=h2, boost=100)\n

      is equivalent to

      --evaluator h1=eval1 --heuristic h2=eval2\n--search eager(alt([single(h1), single(h1, pref_only=true), single(h2), \n                    single(h2, pref_only=true)], boost=100),\n               preferred=h2)\n
      --search eager_greedy([eval1, eval2])\n

      is equivalent to

      --search eager(alt([single(eval1), single(eval2)]))\n
      --evaluator h1=eval1\n--search eager_greedy(h1, preferred=h1)\n

      is equivalent to

      --evaluator h1=eval1\n--search eager(alt([single(h1), single(h1, pref_only=true)]),\n               preferred=h1)\n
      --search eager_greedy(eval1)\n

      is equivalent to

      --search eager(single(eval1))\n
      "},{"location":"SearchAlgorithm/#eager_weighted_a_search","title":"Eager weighted A* search","text":"
      eager_wastar(evals, preferred=[], reopen_closed=true, boost=0, w=1, pruning=null(), cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • evals (list of Evaluator): evaluators
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • reopen_closed (bool): reopen closed nodes
      • boost (int): boost value for preferred operator open lists
      • w (int): evaluator weight
      • pruning (PruningMethod): Pruning methods can prune or reorder the set of applicable operators in each state and thereby influence the number and order of successor states that are considered.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Open lists and equivalent statements using general eager search: See corresponding notes for \"(Weighted) A* search (lazy)\"

Note: Eager weighted A* search uses an alternation open list while A* search uses a tie-breaking open list. Consequently,

      --search eager_wastar([h()], w=1)\n

      is not equivalent to

      --search astar(h())\n
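Separately, a hedged usage sketch for eager_wastar itself (ff() stands in for any available Evaluator plugin), weighting the heuristic and using its preferred operators:

--evaluator \"hff=ff()\" --search \"eager_wastar([hff], w=5, preferred=[hff])\"\n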
      "},{"location":"SearchAlgorithm/#lazy_enforced_hill-climbing","title":"Lazy enforced hill-climbing","text":"
      ehc(h, preferred_usage=prune_by_preferred, preferred=[], cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • h (Evaluator): heuristic
      • preferred_usage ({prune_by_preferred, rank_preferred_first}): preferred operator usage
        • prune_by_preferred: prune successors achieved by non-preferred operators
        • rank_preferred_first: first insert successors achieved by preferred operators, then those by non-preferred operators
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
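An illustrative sketch (ff() is assumed to be available and supplies both the heuristic values and the preferred operators):

--evaluator \"hff=ff()\" --search \"ehc(hff, preferred=[hff])\"\n

With the default preferred_usage=prune_by_preferred, successors reached by non-preferred operators are pruned.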
      "},{"location":"SearchAlgorithm/#ida_search","title":"IDA* search","text":"

      IDA* search with an optional g-value cache.

      idastar(eval, initial_f_limit=0, cache_size=0, single_plan=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • eval (Evaluator): evaluator for h-value. Make sure to use cache_estimates=false.
      • initial_f_limit (int [0, infinity]): initial depth limit
      • cache_size (int [0, infinity]): maximum number of states to cache. For cache_size=infinity the cache fills up until approaching the memory limit, at which point the current number of states becomes the maximum cache size.
      • single_plan (bool): stop after finding the first plan
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
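A minimal sketch following the recommendation above to disable estimate caching (lmcut() stands in for any evaluator that accepts the standard cache_estimates option):

--search \"idastar(lmcut(cache_estimates=false))\"\n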
      "},{"location":"SearchAlgorithm/#iterative_deepening_search","title":"Iterative deepening search","text":"
      ids(single_plan=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • single_plan (bool): stop after finding the first (shortest) plan
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
      "},{"location":"SearchAlgorithm/#iterated_search","title":"Iterated search","text":"
      iterated(algorithm_configs, pass_bound=true, repeat_last=false, continue_on_fail=false, continue_on_solve=true, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • algorithm_configs (list of SearchAlgorithm): list of search algorithms for each phase
      • pass_bound (bool): use bound from previous search. The bound is the real cost of the plan found before, regardless of the cost_type parameter.
      • repeat_last (bool): repeat last phase of search
      • continue_on_fail (bool): continue search after no solution found
      • continue_on_solve (bool): continue search after solution found
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Note 1: We don't cache heuristic values between search iterations at the moment. If you perform a LAMA-style iterative search, heuristic values will be computed multiple times.

      Note 2: The configuration

      --search \"iterated([lazy_wastar([ipdb()],w=10), lazy_wastar([ipdb()],w=5), lazy_wastar([ipdb()],w=3), lazy_wastar([ipdb()],w=2), lazy_wastar([ipdb()],w=1)])\"\n

      would perform the preprocessing phase of the ipdb heuristic 5 times (once before each iteration).

      To avoid this, use heuristic predefinition, which avoids duplicate preprocessing, as follows:

      --evaluator \"h=ipdb()\" --search \"iterated([lazy_wastar([h],w=10), lazy_wastar([h],w=5), lazy_wastar([h],w=3), lazy_wastar([h],w=2), lazy_wastar([h],w=1)])\"\n

      Note 3: If you reuse the same landmark count heuristic (using heuristic predefinition) between iterations, the path data (that is, landmark status for each visited state) will be saved between iterations.

      "},{"location":"SearchAlgorithm/#iterated_width_search","title":"Iterated width search","text":"
      iw(width=2, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • width (int [1, 2]): maximum conjunction size
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output
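A minimal invocation sketch with the smallest allowed width; following the 'maximum conjunction size' description above, width=1 considers single facts and width=2 pairs of facts:

--search \"iw(width=1)\"\n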
      "},{"location":"SearchAlgorithm/#lazy_best-first_search","title":"Lazy best-first search","text":"
      lazy(open, reopen_closed=false, preferred=[], randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • open (OpenList): open list
      • reopen_closed (bool): reopen closed nodes
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • randomize_successors (bool): randomize the order in which successors are generated
      • preferred_successors_first (bool): consider preferred operators first
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.
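An illustrative sketch combining both options (ff() stands in for any available evaluator); per the note above, successors are first shuffled and the preferred ones are then moved to the front:

--evaluator \"hff=ff()\" --search \"lazy(single(hff), preferred=[hff], randomize_successors=true, preferred_successors_first=true)\"\n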

      "},{"location":"SearchAlgorithm/#greedy_search_lazy","title":"Greedy search (lazy)","text":"
      lazy_greedy(evals, preferred=[], reopen_closed=false, boost=1000, randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • evals (list of Evaluator): evaluators
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • reopen_closed (bool): reopen closed nodes
      • boost (int): boost value for alternation queues that are restricted to preferred operator nodes
      • randomize_successors (bool): randomize the order in which successors are generated
      • preferred_successors_first (bool): consider preferred operators first
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.

      Open lists: In most cases, lazy greedy best first search uses an alternation open list with one queue for each evaluator. If preferred operator evaluators are used, it adds an extra queue for each of these evaluators that includes only the nodes that are generated with a preferred operator. If only one evaluator and no preferred operator evaluator is used, the search does not use an alternation open list but a standard open list with only one queue.

      "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_lazy_search","title":"Equivalent statements using general lazy search","text":"
      --evaluator h2=eval2\n--search lazy_greedy([eval1, h2], preferred=h2, boost=100)\n

      is equivalent to

      --evaluator h1=eval1 --heuristic h2=eval2\n--search lazy(alt([single(h1), single(h1, pref_only=true), single(h2),\n                  single(h2, pref_only=true)], boost=100),\n              preferred=h2)\n
      --search lazy_greedy([eval1, eval2], boost=100)\n

      is equivalent to

      --search lazy(alt([single(eval1), single(eval2)], boost=100))\n
      --evaluator h1=eval1\n--search lazy_greedy(h1, preferred=h1)\n

      is equivalent to

      --evaluator h1=eval1\n--search lazy(alt([single(h1), single(h1, pref_only=true)], boost=1000),\n              preferred=h1)\n
      --search lazy_greedy(eval1)\n

      is equivalent to

      --search lazy(single(eval1))\n
      "},{"location":"SearchAlgorithm/#weighted_a_search_lazy","title":"(Weighted) A* search (lazy)","text":"

      Weighted A* is a special case of lazy best first search.

      lazy_wastar(evals, preferred=[], reopen_closed=true, boost=1000, w=1, randomize_successors=false, preferred_successors_first=false, random_seed=-1, cost_type=normal, bound=infinity, max_time=infinity, verbosity=normal)\n
      • evals (list of Evaluator): evaluators
      • preferred (list of Evaluator): use preferred operators of these evaluators
      • reopen_closed (bool): reopen closed nodes
      • boost (int): boost value for preferred operator open lists
      • w (int): evaluator weight
      • randomize_successors (bool): randomize the order in which successors are generated
      • preferred_successors_first (bool): consider preferred operators first
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • cost_type ({normal, one, plusone}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.
        • normal: all actions are accounted for with their real cost
        • one: all actions are accounted for as unit cost
        • plusone: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search algorithms, but is supported for both.
      • bound (int): exclusive depth bound on g-values. Cutoffs are always performed according to the real cost, regardless of the cost_type parameter
      • max_time (double): maximum time in seconds the search is allowed to run for. The timeout is only checked after each complete search step (usually a node expansion), so the actual runtime can be arbitrarily longer. Therefore, this parameter should not be used for time-limiting experiments. Timed-out searches are treated as failed searches, just like incomplete search algorithms that exhaust their search space.
      • verbosity ({silent, normal, verbose, debug}): Option to specify the verbosity level.
        • silent: only the most basic output
        • normal: relevant information to monitor progress
        • verbose: full output
        • debug: like verbose with additional debug output

      Successor ordering: When using randomize_successors=true and preferred_successors_first=true, randomization happens before preferred operators are moved to the front.

      Open lists: In the general case, it uses an alternation open list with one queue for each evaluator h that ranks the nodes by g + w * h. If preferred operator evaluators are used, it adds for each of the evaluators another such queue that only inserts nodes that are generated by preferred operators. In the special case with only one evaluator and no preferred operator evaluators, it uses a single queue that is ranked by g + w * h.

      "},{"location":"SearchAlgorithm/#equivalent_statements_using_general_lazy_search_1","title":"Equivalent statements using general lazy search","text":"
      --evaluator h1=eval1\n--search lazy_wastar([h1, eval2], w=2, preferred=h1,\n                     bound=100, boost=500)\n

      is equivalent to

      --evaluator h1=eval1 --heuristic h2=eval2\n--search lazy(alt([single(sum([g(), weight(h1, 2)])),\n                   single(sum([g(), weight(h1, 2)]), pref_only=true),\n                   single(sum([g(), weight(h2, 2)])),\n                   single(sum([g(), weight(h2, 2)]), pref_only=true)],\n                  boost=500),\n              preferred=h1, reopen_closed=true, bound=100)\n
      --search lazy_wastar([eval1, eval2], w=2, bound=100)\n

      is equivalent to

      --search lazy(alt([single(sum([g(), weight(eval1, 2)])),\n                   single(sum([g(), weight(eval2, 2)]))],\n                  boost=1000),\n              reopen_closed=true, bound=100)\n
      --search lazy_wastar([eval1, eval2], bound=100, boost=0)\n

      is equivalent to

--search lazy(alt([single(sum([g(), eval1])),\n                   single(sum([g(), eval2]))]),\n              reopen_closed=true, bound=100)\n
      --search lazy_wastar(eval1, w=2)\n

      is equivalent to

      --search lazy(single(sum([g(), weight(eval1, 2)])), reopen_closed=true)\n
      "},{"location":"ShrinkStrategy/","title":"ShrinkStrategy","text":"

      This page describes the various shrink strategies supported by the planner.

      "},{"location":"ShrinkStrategy/#bismulation_based_shrink_strategy","title":"Bismulation based shrink strategy","text":"

      This shrink strategy implements the algorithm described in the paper:

• Raz Nissim, Joerg Hoffmann and Malte Helmert. Computing Perfect Heuristics in Polynomial Time: On Bisimulation and Merge-and-Shrink Abstractions in Optimal Planning. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI 2011), pp. 1983-1990. AAAI Press, 2011.

        shrink_bisimulation(greedy=false, at_limit=return)

      • greedy (bool): use greedy bisimulation

      • at_limit ({return, use_up}): what to do when the size limit is hit
        • return: stop without refining the equivalence class further
        • use_up: continue refining the equivalence class until the size limit is hit

      shrink_bisimulation(greedy=true): Combine this with the merge-and-shrink options max_states=infinity and threshold_before_merge=1 and with the linear merge strategy reverse_level to obtain the variant 'greedy bisimulation without size limit', called M&S-gop in the IJCAI 2011 paper. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before shrinking (and no label reduction before merging).

      shrink_bisimulation(greedy=false): Combine this with the merge-and-shrink option max_states=N (where N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000) and with the linear merge strategy reverse_level to obtain the variant 'exact bisimulation with a size limit', called DFP-bop in the IJCAI 2011 paper. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before shrinking (and no label reduction before merging).
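A hedged sketch of the second variant above; the surrounding plugin names (merge_and_shrink, merge_precomputed, linear, exact) are assumptions, so check this build's merge-and-shrink documentation for the exact names:

--search \"astar(merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false), merge_strategy=merge_precomputed(merge_tree=linear(variable_order=reverse_level)), label_reduction=exact(before_shrinking=true, before_merging=false), max_states=50000))\"\n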

      "},{"location":"ShrinkStrategy/#f-preserving_shrink_strategy","title":"f-preserving shrink strategy","text":"

      This shrink strategy implements the algorithm described in the paper:

      • Malte Helmert, Patrik Haslum and Joerg Hoffmann. Flexible Abstraction Heuristics for Optimal Sequential Planning. In Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS 2007), pp. 176-183. AAAI Press, 2007.

        shrink_fh(random_seed=-1, shrink_f=high, shrink_h=low)

      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

      • shrink_f ({high, low}): in which direction the f based shrink priority is ordered
        • high: prefer shrinking states with high value
        • low: prefer shrinking states with low value
      • shrink_h ({high, low}): in which direction the h based shrink priority is ordered
        • high: prefer shrinking states with high value
        • low: prefer shrinking states with low value

Note: The strategy first partitions all states according to their combination of f- and h-values. These partitions are then sorted, first according to their f-value, then according to their h-value (increasing or decreasing, depending on the chosen options). States sorted last are shrunk together until reaching max_states.

shrink_fh(): Combine this with the merge-and-shrink option max_states=N (where N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000) and the linear merge strategy cg_goal_level to obtain the variant 'f-preserving shrinking of transition systems', called HHH in the IJCAI 2011 paper. Also see bisimulation based shrink strategy. When we last ran experiments on interaction of shrink strategies with label reduction, this strategy performed best when used with label reduction before merging (and no label reduction before shrinking). We also recommend using full pruning with this shrink strategy, because both distances from the initial state and to the goal states must be computed anyway, and because the existence of only one dead state causes this shrink strategy to always use the map-based approach for partitioning states rather than the more efficient vector-based approach.
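A corresponding hedged sketch of the HHH-style configuration described above, with the same caveat about the assumed merge_and_shrink, merge_precomputed, linear and exact plugin names:

--search \"astar(merge_and_shrink(shrink_strategy=shrink_fh(), merge_strategy=merge_precomputed(merge_tree=linear(variable_order=cg_goal_level)), label_reduction=exact(before_shrinking=false, before_merging=true), max_states=50000))\"\n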

      "},{"location":"ShrinkStrategy/#random","title":"Random","text":"
      shrink_random(random_seed=-1)\n
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"SubtaskGenerator/","title":"SubtaskGenerator","text":"

      Subtask generator (used by the CEGAR heuristic).

      "},{"location":"SubtaskGenerator/#goals","title":"goals","text":"
      goals(order=hadd_down, random_seed=-1)\n
      • order ({original, random, hadd_up, hadd_down}): ordering of goal or landmark facts
        • original: according to their (internal) variable index
        • random: according to a random permutation
        • hadd_up: according to their h^add value, lowest first
        • hadd_down: according to their h^add value, highest first
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      "},{"location":"SubtaskGenerator/#landmarks","title":"landmarks","text":"
      landmarks(order=hadd_down, random_seed=-1, combine_facts=true)\n
      • order ({original, random, hadd_up, hadd_down}): ordering of goal or landmark facts
        • original: according to their (internal) variable index
        • random: according to a random permutation
        • hadd_up: according to their h^add value, lowest first
        • hadd_down: according to their h^add value, highest first
      • random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.
      • combine_facts (bool): combine landmark facts with domain abstraction
      "},{"location":"SubtaskGenerator/#original","title":"original","text":"
      original(copies=1)\n
      • copies (int [1, infinity]): number of task copies
      "}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index b7ecd5aa89ed5dd109e6dad34f67808836297ea6..dcdbed8f7aa77ec1b6ea61e32201abc3931c6adb 100644 GIT binary patch delta 13 Ucmb=gXP58h;8=abb|QNP03NvnkN^Mx delta 13 Ucmb=gXP58h;PBF7oycAR02nO;NdN!<