Post processing
After a simulation run, it is possible to run a post-processing step that generates additional datasets with different settings, using the `postprocessingsteps.py` helper script in `THOR/tools` in the main THOR repository, pointed at an output folder.
The script finds the last configuration and last output file in the output folder, copies the configuration files, overrides the requested variables, and runs THOR again with those settings, continuing from the last output. Its output goes into a subfolder, and the grid and planet files are copied to that folder so that the plotting scripts can find their data.
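The "continue from the last output" step can be sketched as follows. This is a minimal illustration, not the actual implementation; the output file naming pattern (`esp_output_<planet>_<N>.h5`) is an assumption made for the example.

```python
import os
import re


def find_last_output(data_folder, pattern=r"esp_output_.*_(\d+)\.h5"):
    """Return the output file with the highest step number, or None.

    The file naming pattern is an assumption for this sketch; the real
    script may identify THOR output files differently.
    """
    best, best_num = None, -1
    for name in os.listdir(data_folder):
        m = re.fullmatch(pattern, name)
        if m and int(m.group(1)) > best_num:
            best, best_num = name, int(m.group(1))
    return best
```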
```
THOR $ python tools/postprocessingsteps.py --help
usage: postprocessingsteps.py [-h] [-f POSTFIX] [-H] [--hiresfiles HIRESFILES]
                              [-o] [-b] [-p] [-n NUMSTEP] [-c NUMCOL]
                              data_folder

execute THOR and alfrodull postprocessing step

positional arguments:
  data_folder           folder to look for data

optional arguments:
  -h, --help            show this help message and exit
  -f POSTFIX, --postfix POSTFIX
                        postfix to append to output dir
  -H, --hires           use hires spectrum
  --hiresfiles HIRESFILES
                        pathes to files used for hires spectrum, separated by
                        commas
  -o, --opacities       output g0 and w0
  -b, --beam            output directional beam spectrum
  -p, --noplanck        set planck function to zero
  -n NUMSTEP, --numstep NUMSTEP
                        number of postprocessing steps to run
  -c NUMCOL, --numcol NUMCOL
                        number of parallel columns to run
```
It can create:

- a higher resolution spectrum with the `-H` flag, using the default input files for hires, or other files specified with `--hiresfiles "opacityfile,spectrumfile,cloudsfile"`, e.g. `--hiresfiles "./Alfrodull/input/wasp43b/opac_sample_SI_r500.h5,./Alfrodull/input/stellar_spectrum_wasp43_r500.h5,./Alfrodull/input/clouds_enstatite_r500.h5"`
- `g0` and `w0` tables added to the output with `-o`; these are usually not included, for space reasons
- the directional beam per column, layer and wavelength with `-b`
- the Planck function set to zero in the equations with `-p`
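The flags above map directly onto an argparse interface. A minimal sketch of how they could be declared, reconstructed from the `--help` output rather than from the script's actual source (default values are assumptions):

```python
import argparse


def build_parser():
    # Reconstructed from the --help output; defaults are assumptions.
    p = argparse.ArgumentParser(
        description="execute THOR and alfrodull postprocessing step")
    p.add_argument("data_folder", help="folder to look for data")
    p.add_argument("-f", "--postfix", default="",
                   help="postfix to append to output dir")
    p.add_argument("-H", "--hires", action="store_true",
                   help="use hires spectrum")
    p.add_argument("--hiresfiles",
                   help="paths to files used for hires spectrum, comma separated")
    p.add_argument("-o", "--opacities", action="store_true",
                   help="output g0 and w0")
    p.add_argument("-b", "--beam", action="store_true",
                   help="output directional beam spectrum")
    p.add_argument("-p", "--noplanck", action="store_true",
                   help="set planck function to zero")
    p.add_argument("-n", "--numstep", type=int, default=1,
                   help="number of postprocessing steps to run")
    p.add_argument("-c", "--numcol", type=int,
                   help="number of parallel columns to run")
    return p
```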
It needs the positional argument `data_folder` to know which folder to run on.
Additional optional parameters:

- the number of additional steps to run, with `-n`
- the postfix for the output, with `-f`; output goes to a `result_postprocessing_POSTFIX` folder inside the data folder it runs on
- the number of columns to run in parallel, with `-c`; as the post-processing options enable features that use more memory, it is often necessary to lower the number of columns the simulation runs on in parallel so that it fits in memory
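The output location naming rule described above can be illustrated with a small sketch (an illustration of the rule, not the script's actual code):

```python
import os


def postprocessing_output_dir(data_folder, postfix):
    # Output goes to result_postprocessing_POSTFIX inside the data folder.
    return os.path.join(data_folder, "result_postprocessing_" + postfix)
```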
All the options that are settable with this script can also be set manually in the configuration file.
The Slurm helper script `slurm_batch_run.py` can also launch post-processing steps on a cluster managed by Slurm.
For example, to run only post-processing with the job name `thor_postproc`, point it to the output folder `../thor-data/wasp43b/` and run it with 0 THOR jobs and a dummy THOR config name, as the post-processing will find its own config. Use the `--pp` option with the list of post-processing options for each post-processing run. Here it launches two post-processing runs, adding all the desired output data and turning off the Planck function for one of the outputs.
```
THOR $ python3 tools/slurm_batch_run.py -jn thor_postproc -n 0 -o ../thor-data/wasp43b/ dummy.thr --pp " -b -o -H -c 20 -n 5 -f dump_hires" --pp " -p -b -o -H -c 20 -n 5 -f noplank_hires"
```
Use a real config and non-null job numbers to run it after a simulation run.
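Each `--pp` string presumably becomes the argument list of one post-processing job. Splitting it with Python's `shlex` illustrates how the quoted option string decomposes (an illustration, not `slurm_batch_run.py`'s actual parsing):

```python
import shlex


def split_pp_options(pp_string):
    """Split one --pp option string into individual arguments."""
    return shlex.split(pp_string)
```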