This repository contains a set of scripts for running experiments with planning domain models in Picat. It can also be configured to work with other planners.
The scripts described below have the following dependencies:

- Working planner(s) installed. All planners should be available on `PATH`.
- SLURM workload manager, in particular the `sbatch` command, which is used in the `cplan_all.sh` script.
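A quick way to check both dependencies before running anything (the `picat` binary name here is only an assumption; substitute whichever planner executables you actually use):

```bash
# Verify that a planner and SLURM's sbatch are available on PATH.
# "picat" is an example name only; replace it with your planner binaries.
command -v picat  || echo "planner 'picat' not found on PATH"
command -v sbatch || echo "SLURM sbatch not found on PATH"
```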
The following steps should provide enough guidance to run your own set of experiments.
- Get domain models and problem descriptions:
  - Prepare all your model files in a directory (referred to as `models`). Each file in the `models` directory should represent a different domain model of the given domain.
- Call `./prepareDomain.sh domainName models` to prepare the directory structure.
- Edit the files `mod_list`, `pla_list` and `prob_list` to configure your experiment (example lists are shown after these steps). These files have to be simple lists with one record per line. The number of experiments run will be `#models X #planners X #problems`.
  - `mod_list` - names of the domain models to use (those are stored in the `models` directory of each domain). Models are listed with the `.pddl` extension.
  - `pla_list` - names of the planners to use, one planner per line (e.g. `best_plan`). The planners listed here refer to configuration files in the `planners` directory.
  - `prob_list` - list of problems from the `problems` directory to use in the experiment.
- Call `./generate_array_file.sh domainName` to generate the file `array.in`. The file lists one experiment configuration per line.
- Call `./launch_array.sh array.in`. This will submit an array job to the cluster. Results of the experiment will be stored in `results/domainName` and logs will be stored in the `logs` directory.
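As an illustration, the three list files might look like this (the model and problem names below are hypothetical placeholders; only `best_plan` is taken from the example above):

`mod_list`:
```
model_basic.pddl
model_optimized.pddl
```

`pla_list`:
```
best_plan
```

`prob_list`:
```
p01
p02
p03
```

With these lists, 2 models X 1 planner X 3 problems = 6 experiment configurations would be generated.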
1. `launch_array.sh` - this script takes one parameter: the name of the array file with the descriptions of the experiments to run. It submits an array job to the cluster. The batch size can be modified by changing `BATCH_SIZE`.
2. `cplan_array.sh` - this script is called by the `launch_array.sh` script for each experiment configuration that is to be processed. Parameters for SLURM (example):

   ```
   #SBATCH --chdir /path/to/planner_model_testing
   #SBATCH -p cpu
   #SBATCH --mem 1G
   #SBATCH -t 30:00
   ```

   - `--chdir <DIR>` sets the working directory for the given SLURM job
   - `-p <partition>` sets the SLURM partition to use
   - `--mem <memory-limit>` a job that uses more memory than this limit is terminated
   - `-t MM:SS` the job is terminated if it does not finish within the given time limit
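   How `cplan_array.sh` maps a SLURM array task to a line of the array file is not shown above; a common pattern, given here only as a hedged sketch, is to index the array file with `SLURM_ARRAY_TASK_ID`:

   ```bash
   # Hypothetical sketch: pick the configuration line for this array task.
   # The file name and the use of SLURM_ARRAY_TASK_ID are assumptions,
   # not the actual contents of cplan_array.sh.
   CONFIG=$(sed -n "${SLURM_ARRAY_TASK_ID}p" array.in)
   echo "Running experiment configuration: $CONFIG"
   ```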
3. `generate_array_file.sh` - this script generates all experiment configurations based on the `mod_list`, `pla_list` and `prob_list` files in each domain from the `domains` directory. By default it generates experiments for all domains in the `domains` directory. You can select the domains for which you want to generate experiments by passing them as arguments, e.g. `./generate_array_file.sh depots nomystery` generates task configurations only for the depots and nomystery domains.
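   The exact format of the generated `array.in` lines is not documented here; conceptually, the script enumerates the cross product of the three lists. A minimal sketch under that assumption (the field order and the `domains/<domain>/` paths are guesses) could look like:

   ```bash
   # Hypothetical sketch of enumerating #models X #planners X #problems for one domain.
   domain=depots
   while read -r model; do
     while read -r planner; do
       while read -r problem; do
         echo "$domain $model $planner $problem"
       done < "domains/$domain/prob_list"
     done < "domains/$domain/pla_list"
   done < "domains/$domain/mod_list" >> array.in
   ```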
4. `prepareDomain.sh` - script to initialize the directory structure for a new domain.
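   The resulting directory layout is not spelled out above; based on the directories mentioned in this README, it is expected to look roughly like this (an assumption, not a guarantee):

   ```
   domains/
     domainName/
       models/      # one .pddl file per domain model
       problems/    # problem descriptions
       mod_list
       pla_list
       prob_list
   planners/        # planner configuration files
   results/
     domainName/    # results of the experiments
   logs/            # job logs
   ```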