
psrenergy/CompareScenariosGenerators.jl


Executing and Evaluating a Model

Overview

This guide explains how to execute and evaluate a model. The process involves writing a script and a launching file in a dedicated directory, creating an evaluation configuration file, and then running main.jl to compare the models.

Instructions

  1. Create a directory named after your model and write your script inside it. The output of your script must be in the same format as the model_output_example CSV file (generated for 12 steps ahead and 10 scenarios). In the same directory, create a launching file (a .bat file) that executes your script. The test/execution_example directory contains examples of how to write these files.
  2. Create an evaluation_config.toml file in the same directory. This file contains the parameters of your model as well as the paths to your model and to the time series data; all of the parameters are described below.
  3. To compare models, run the main.jl file. It uses two functions (a minimal sketch is given after this list):
    • The first function, CompareScenariosGenerators.main_evaluation_loop, evaluates the models with several metrics. Pass it a list with the paths of the configuration files of the models you want to evaluate and compare.
    • The second function, CompareScenariosGenerators.main_report_metrics, plots the metrics of the different models so they can be compared. It takes as an argument a list of the paths of the metrics.json files of each model, which are generated by the first function.
  4. Open the HTML files generated in the Compare_Scenarios_Reports directory.
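
For reference, a minimal driver script combining the two calls from step 3 could look like the sketch below. It assumes the directory layout shown in the next section and that each model writes its metrics.json next to its evaluation_config.toml; check the repository's main.jl for the canonical paths and call sequence.

```julia
# Minimal sketch of a driver script (assumed layout; not the canonical main.jl).
using CompareScenariosGenerators

# Paths to the evaluation_config.toml of every model to compare.
config_paths = [
    "models/model_name1/evaluation_config.toml",
    "models/model_name2/evaluation_config.toml",
]

# Evaluate every model over the forecast windows and compute its metrics.
CompareScenariosGenerators.main_evaluation_loop(config_paths)

# Plot the metrics of all models side by side. The metrics.json locations
# below assume each model writes its metrics next to its configuration file
# (hypothetical paths).
metrics_paths = [
    "models/model_name1/metrics.json",
    "models/model_name2/metrics.json",
]
CompareScenariosGenerators.main_report_metrics(metrics_paths)
```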

Directory structure

Your models should follow this directory structure:

└── models
    ├── model_name1
    │   ├── model_script_name1
    │   ├── model_launching_file1.bat
    │   └── evaluation_config.toml
    └── model_name2
        ├── model_script_name2
        ├── model_launching_file2.bat
        └── evaluation_config.toml

Configuration files

evaluation_config.toml

The file evaluation_config.toml stores all the parameters and paths the model needs. The available options are:

  • time_series_path - The path of the time series CSV file (see the example configuration below)
  • num_steps_ahead - The number of steps ahead to forecast
  • num_scenarios - The number of scenarios to forecast
  • num_windows - The number of windows to forecast with
  • model_executable_path - The path of the model script
  • model_name - The model name
  • plot_all_scenarios_windows - Set to true to plot all scenarios of all windows

Example of evaluation_config.toml:

time_series_path = ".//CompareScenariosGenerators.jl//test//execution_example//enas.csv"
num_steps_ahead = 12
num_scenarios = 10
num_windows = 2
model_executable_path = ".//CompareScenariosGenerators.jl//test//execution_example//model_launching_files//random_forecast.bat"
model_name = "random_forecast"
plot_all_scenarios_windows = true

Results

After running the metrics calculation and model comparison functions, HTML files are generated in the Compare_Scenarios_Reports folder. For each region, one graph is generated per metric. For each metric there is also a graph that aggregates the results of all regions, for a more global comparison. Below is an example of a graph showing the bias of 5 different models for the NORTE region:

(Figure: bias of 5 models for the NORTE region.)
