This guide provides instructions on how to execute and evaluate a model. The process involves creating a script and a launching file in a dedicated directory, creating an evaluation configuration file, and then executing the main.jl file to compare the models.
- Create a directory named after your model and write your script in it; the output of your script should follow the same format as the csv file model_output_example (generated for 12 steps ahead and 10 scenarios). Within this directory, create a launching file (a .bat file) that will execute your script. Examples of how to write these files are given in the test/execution_example directory.
- Create an evaluation_config.toml file in the same directory. This file contains the parameters of your model, as well as the paths of your model and of the time series data; all of the parameters are described below.
- To compare models, execute the main.jl file. This file uses two functions (a sketch of such a main.jl is given after this list):
- The first function, CompareScenariosGenerators.main_evaluation_loop, evaluates the models with various metrics. Pass it a list with the paths of the configuration files of the models you want to evaluate and compare.
- The second function, CompareScenariosGenerators.main_report_metrics, plots the metrics of the different models to allow for comparison. It takes as an argument a list with the paths of the metrics.json files of each model, which are generated by the first function.
- Now just open the html files generated in the Compare_Scenarios_Reports directory.
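As a reference, here is a minimal sketch of what a main.jl could look like, assuming only the two functions described above; the model directories and the metrics.json locations are hypothetical placeholders to replace with your own paths.

```julia
# Minimal sketch of a main.jl file. The paths below are hypothetical
# placeholders; replace them with the directories of your own models.
using CompareScenariosGenerators

# Configuration files of the models to evaluate and compare.
config_paths = [
    "models/model_name1/evaluation_config.toml",
    "models/model_name2/evaluation_config.toml",
]

# Step 1: evaluate every model with the various metrics.
CompareScenariosGenerators.main_evaluation_loop(config_paths)

# Step 2: plot the metrics of the different models for comparison.
# The metrics.json files are produced by the first function; adjust these
# paths to wherever they were written for your models.
metrics_paths = [
    "models/model_name1/metrics.json",
    "models/model_name2/metrics.json",
]
CompareScenariosGenerators.main_report_metrics(metrics_paths)

# The html reports are then available in the Compare_Scenarios_Reports directory.
```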
Your model should follow this structure:
```
models
├── model_name1
│   ├── model_script_name1
│   ├── model_launching_file1.bat
│   └── evaluation_config.toml
└── model_name2
    ├── model_script_name2
    ├── model_launching_file2.bat
    └── evaluation_config.toml
```
The evaluation_config.toml file stores all the parameters and paths necessary for the model. These are all of the available options in evaluation_config.toml:
- time_series_path: The path of the time series csv file (an example, enas.csv, is provided in the test/execution_example directory).
- num_steps_ahead: The number of steps ahead to forecast.
- num_scenarios: The number of scenarios to forecast.
- num_windows: The number of windows to forecast with.
- model_executable_path: The path of the model launching file (the .bat that executes your script).
- model_name: The model name.
- plot_all_scenarios_windows: Set to true to plot all scenario windows.
Example of evaluation_config.toml:
time_series_path = ".//CompareScenariosGenerators.jl//test//execution_example//enas.csv"
num_steps_ahead = 12
num_scenarios = 10
num_windows = 2
model_executable_path = ".//CompareScenariosGenerators.jl//test//execution_example//model_launching_files//random_forecast.bat"
model_name = "random_forecast"
plot_all_scenarios_windows = true
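As an optional check, the snippet below parses a configuration file with Julia's TOML standard library and prints the expected keys, which makes missing or misspelled parameters easy to spot before launching an evaluation. The path is a hypothetical placeholder, and this step is not required: the package reads the file itself.

```julia
# Optional sanity check of an evaluation_config.toml file using the TOML
# standard library. The path below is a hypothetical placeholder.
using TOML

config = TOML.parsefile("models/model_name1/evaluation_config.toml")

# Print every expected key so missing or misspelled parameters stand out.
for key in ("time_series_path", "num_steps_ahead", "num_scenarios",
            "num_windows", "model_executable_path", "model_name",
            "plot_all_scenarios_windows")
    println(key, " = ", get(config, key, "<missing>"))
end
```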
After running the metrics calculation and model comparison functions, html files are generated in the Compare_Scenarios_Reports folder. For each region, one graph is generated per metric. There is also one graph per metric that aggregates the results of all regions for a more global comparison. Here is an example of a graph representing the bias of 5 different models for the NORTE region:
The following documents can help you better understand how forecasting works, how to evaluate a forecast, and thus how this repository can be useful: