[WIP] Signal processing refactor #17

Open · wants to merge 25 commits into base: master
14 changes: 10 additions & 4 deletions .github/workflows/dev-ci.yml
@@ -6,11 +6,13 @@ on:
jobs:
build:

runs-on: ubuntu-latest
name: Build OS ${{ matrix.os }} with Python ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10"]
os: [ubuntu-latest, windows-latest, macos-latest]

steps:
- uses: actions/checkout@v4
@@ -29,16 +31,20 @@ jobs:
run: python -m build --wheel

- name: Install wheel
if: matrix.os != 'windows-latest'
run: pip install dist/*.whl

# Pending to be added in the near future:
# * flake8 src --count --exit-zero --max-complexity=10
- name: Install wheel
if: matrix.os == 'windows-latest'
shell: bash
run: python -m pip install dist/*.whl

- name: Linting with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 src --count --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 99 chars wide
flake8 src --count --exit-zero --statistics
flake8 src --count --exit-zero --statistics --max-complexity=10

# - name: Run tests
# run: pytest
100 changes: 100 additions & 0 deletions docs/design/Signal_processing_workflow.md
@@ -0,0 +1,100 @@
===========================

SIGNAL PROCESSING WORKFLOW

(c) 2024 RTE
Developed by Grupo AIA

===========================

# Model Validation

## Get the curves to process

Read the file containing the **calculated curves**. The curves in this
file come from a dynamic simulation performed with Dynawo when the user
has provided a model of their network. If the user has instead entered
2 sets of curves as inputs to the tool, this file contains the
producer's curves.

Read the file containing the **reference curves**.

## First resampling: Ensure a constant time-step signal.

This section consists of 3 steps:

- Convert the EMT signals to RMS, if necessary. Currently this step only
applies to the **reference curves**. (The user can supply a set of curves as
input to the tool instead of a dynamic model; in that case, should this step
also be applied to the **calculated curves**?)

- Resample the curves to a common sampling frequency.

- Pass the curves through a low-pass filter, using common cutoff and
sampling-frequency values.
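The resampling and filtering steps above can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy; the function name and the default `fs` and `cutoff` values are hypothetical placeholders, not the tool's actual API or configured values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def resample_and_filter(t, y, fs=1000.0, cutoff=15.0):
    """Resample a curve onto a constant time step, then low-pass filter it.

    Hypothetical helper: fs (common sampling frequency, Hz) and cutoff
    (low-pass cutoff, Hz) stand in for the tool's configured values.
    """
    # Constant-step time grid spanning the original time range
    t_new = np.arange(t[0], t[-1], 1.0 / fs)
    y_new = np.interp(t_new, t, y)
    # 2nd-order Butterworth low-pass; filtfilt is zero-phase,
    # so the filtering introduces no time shift
    b, a = butter(2, cutoff, btype="low", fs=fs)
    return t_new, filtfilt(b, a, y_new)
```

The same `fs` and `cutoff` would be applied to both curve sets so that the validation tests compare like with like.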

## Second resampling: Ensure the same time grid for both signals.

The tool shortens the **calculated and reference curves** to ensure that both
sets of curves have exactly the same time range.
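That trimming step can be sketched as below, under the assumption that each curve set carries its own time vector (the helper name is hypothetical):

```python
import numpy as np

def trim_to_common_range(t_calc, y_calc, t_ref, y_ref):
    """Cut both curve sets down to the time span they share (hypothetical helper)."""
    # The shared range starts at the later of the two start times
    # and ends at the earlier of the two end times
    start = max(t_calc[0], t_ref[0])
    end = min(t_calc[-1], t_ref[-1])
    keep_c = (t_calc >= start) & (t_calc <= end)
    keep_r = (t_ref >= start) & (t_ref <= end)
    return (t_calc[keep_c], y_calc[keep_c]), (t_ref[keep_r], y_ref[keep_r])
```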

## Third: Calculate signal windows.

The pre, during and post windows are calculated (only pre and post if the
event is not temporary), taking into account the exclusion ranges for each
window as well as the maximum length defined in the standards.

From each set of curves, a sub-set is then extracted for each window obtained
in the previous step.
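The window computation can be sketched as boolean masks over the common time grid. This is an illustrative sketch only: the exclusion margin and the event times are placeholders for the values the standards define, and the function name is hypothetical:

```python
import numpy as np

def windows(t, t_event, t_end_event=None, excl=0.1):
    """Split a time vector into pre/during/post window masks.

    excl is an exclusion margin (s) applied around each window boundary;
    t_end_event is None for a non-temporary event, which yields only
    the pre and post windows.
    """
    if t_end_event is None:
        return {"pre": t < t_event - excl, "post": t > t_event + excl}
    return {
        "pre": t < t_event - excl,
        "during": (t > t_event + excl) & (t < t_end_event - excl),
        "post": t > t_end_event + excl,
    }
```

Each mask would then be used to slice every curve in a set, producing the per-window curve sets the validation tests consume.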

## Validation tests

The validation tests use every curve set obtained: the pre, during (if it
exists) and post windows, plus the complete curve, for both tool sets
(calculated and reference).

## Report

Only the complete curves from the **calculated curve** set and the
**reference curve** set are used to create the final report.

## Roadmap

- Provide a time shift option when comparing two curve sets, to synchronize
the time at which the **reference curve** set event is triggered with that of
the **calculated curve** set.


# Performance Verifications

## Get the curves to process

Read the file containing the **calculated curves**. The curves in this
file come from a dynamic simulation performed with Dynawo when the user
has provided a model of their network. If the user has instead entered a
set of curves as input to the tool, this file contains the producer's
curves.

Read the file containing the **complementary curves**. This second set of
curves is only shown in the final report alongside the calculated curves;
it is never validated.

## Validation tests

To run the validation tests, only the **calculated curves** set is used.

## Report

The curves from the **calculated curves** set and the
**complementary curves** set are used to create the final report.

## Roadmap

- Ensure that the time step is constant across all sets of curves,
if **complementary curves** exist.

- Ensure the same time grid for both curve sets, if **complementary curves**
exist.

- Provide a time shift option when there are two curve sets, to synchronize
the time at which the **complementary curves** set event is triggered with
that of the **calculated curves** set.
1 change: 1 addition & 0 deletions docs/tutorial/general_usage.md
@@ -648,6 +648,7 @@ The *operating condition* directory is structured in:
(dgcv_venv) user@dynawo:~/work/MyTests/Results$ tree PCS_RTE-I10/Islanding/DeltaP10DeltaQ4 -L 1
PCS_RTE-I10/Islanding/DeltaP10DeltaQ4
├── curves_calculated.csv
├── curves_reference.csv
├── Omega.dyd
├── Omega.par
├── outputs
12 changes: 7 additions & 5 deletions src/dgcv/core/execution_parameters.py
@@ -28,7 +28,7 @@ class Parameters:
Dynawo launcher
producer_model: Path
Producer Model directory
producer_curves: Path
producer_curves_path: Path
Producer curves directory
selected_pcs: str
Individual PCS to validate
@@ -46,8 +46,8 @@ def __init__(
self,
launcher_dwo: Path,
producer_model: Path,
producer_curves: Path,
reference_curves: Path,
producer_curves_path: Path,
reference_curves_path: Path,
selected_pcs: str,
output_dir: Path,
only_dtr: bool,
@@ -60,7 +60,9 @@ def __init__(
self._only_dtr = only_dtr

# Read producer inputs
self._producer = Producer(producer_model, producer_curves, reference_curves, sim_type)
self._producer = Producer(
producer_model, producer_curves_path, reference_curves_path, sim_type
)

tmp_path = config.get_value("Global", "temporal_path")
username = getpass.getuser()
@@ -170,4 +172,4 @@ def is_complete(self):
bool
True if it is a complete execution, False otherwise
"""
return self.is_valid() and self._producer.has_reference_curves()
return self.is_valid() and self._producer.has_reference_curves_path()
10 changes: 5 additions & 5 deletions src/dgcv/core/initialization.py
@@ -4,7 +4,7 @@
from pathlib import Path

from dgcv.configuration.cfg import config
from dgcv.core.model_validation import ModelValidation
from dgcv.core.validation import Validation
from dgcv.dynawo.prepare_tool import precompile
from dgcv.files import manage_files
from dgcv.logging.logging import dgcv_logging
@@ -190,24 +190,24 @@ def init(launcher_dwo: Path, debug: bool) -> None:
manage_files.create_dir(config.get_config_dir())

manage_files.create_config_file(
ModelValidation.get_project_path() / "configuration" / "config.ini",
Validation.get_project_path() / "configuration" / "config.ini",
config.get_config_dir() / "config.ini_BASIC",
)

manage_files.create_config_file(
ModelValidation.get_project_path() / "configuration" / "defaultConfig.ini",
Validation.get_project_path() / "configuration" / "defaultConfig.ini",
config.get_config_dir() / "config.ini_ADVANCED",
)

if not _is_valid_config_file(config.get_config_dir() / "config.ini"):
if not (config.get_config_dir() / "config.ini").is_file():
manage_files.create_config_file(
ModelValidation.get_project_path() / "configuration" / "config.ini",
Validation.get_project_path() / "configuration" / "config.ini",
config.get_config_dir() / "config.ini",
)
else:
_check_config_file(
ModelValidation.get_project_path() / "configuration" / "defaultConfig.ini",
Validation.get_project_path() / "configuration" / "defaultConfig.ini",
config.get_config_dir() / "config.ini",
)

30 changes: 15 additions & 15 deletions src/dgcv/core/model_validation.py → src/dgcv/core/validation.py
@@ -35,17 +35,17 @@

def _open_document(file: Path):
if os.name == "nt":
dgcv_logging.get_logger("ModelValidation").info(f"Opening the report: {file}")
dgcv_logging.get_logger("Validation").info(f"Opening the report: {file}")
subprocess.run(["start", file], shell=True)
else:
if shutil.which("open") and os.environ.get("DISPLAY"):
dgcv_logging.get_logger("ModelValidation").info(f"Opening the report: {file}")
dgcv_logging.get_logger("Validation").info(f"Opening the report: {file}")
subprocess.run(["open", file], check=True)
else:
dgcv_logging.get_logger("ModelValidation").info(f"Report saved in: {file}")
dgcv_logging.get_logger("Validation").info(f"Report saved in: {file}")


class ModelValidation:
class Validation:
"""Validation of producer inputs.
There are two types of validations, electrical performance and model validation.
Additionally, the electrical performance differs between the synchronous generator-type
@@ -75,37 +75,37 @@ def __init__(
validation_pcs.add(parameters.get_selected_pcs())

if parameters.get_sim_type() == ELECTRIC_PERFORMANCE_SM:
dgcv_logging.get_logger("ModelValidation").info(
dgcv_logging.get_logger("Validation").info(
"Electric Performance Verification for Synchronous Machines"
)
self.__get_validation_pcs(
validation_pcs, "electric_performance_verification_pcs", "performance/SM"
)

elif parameters.get_sim_type() == ELECTRIC_PERFORMANCE_PPM:
dgcv_logging.get_logger("ModelValidation").info(
dgcv_logging.get_logger("Validation").info(
"Electric Performance Verification for Power Park Modules"
)
self.__get_validation_pcs(
validation_pcs, "electric_performance_ppm_verification_pcs", "performance/PPM"
)

elif parameters.get_sim_type() == ELECTRIC_PERFORMANCE_BESS:
dgcv_logging.get_logger("ModelValidation").info(
dgcv_logging.get_logger("Validation").info(
"Electric Performance Verification for Storage"
)
self.__get_validation_pcs(
validation_pcs, "electric_performance_bess_verification_pcs", "performance/BESS"
)

elif parameters.get_sim_type() == MODEL_VALIDATION_PPM:
dgcv_logging.get_logger("ModelValidation").info(
dgcv_logging.get_logger("Validation").info(
"DGCV Model Validation for Power Park Modules"
)
self.__get_validation_pcs(validation_pcs, "model_ppm_validation_pcs", "model/PPM")

elif parameters.get_sim_type() == MODEL_VALIDATION_BESS:
dgcv_logging.get_logger("ModelValidation").info("DGCV Model Validation for Storage")
dgcv_logging.get_logger("Validation").info("DGCV Model Validation for Storage")
self.__get_validation_pcs(validation_pcs, "model_bess_validation_pcs", "model/BESS")

self._validation_pcs = validation_pcs
@@ -151,7 +151,7 @@ def __initialize_working_environment(self) -> None:
def __create_report(self, summary_list: list, report_results: dict) -> None:
"""Create the full report."""
sorted_summary_list = sorted(summary_list, key=attrgetter("id", "zone"))
dgcv_logging.get_logger("ModelValidation").debug(f"Sorted summary {sorted_summary_list}")
dgcv_logging.get_logger("Validation").debug(f"Sorted summary {sorted_summary_list}")
try:
report.create_pdf(
sorted_summary_list,
@@ -161,9 +161,9 @@ def __create_report(self, summary_list: list, report_results: dict) -> None:
)
except (LatexReportException, FileNotFoundError, IOError, ValueError) as e:
if dgcv_logging.getEffectiveLevel() == logging.DEBUG:
dgcv_logging.get_logger("ModelValidation").exception(f"Aborted execution. {e}")
dgcv_logging.get_logger("Validation").exception(f"Aborted execution. {e}")
else:
dgcv_logging.get_logger("ModelValidation").error(f"Aborted execution. {e}")
dgcv_logging.get_logger("Validation").error(f"Aborted execution. {e}")
exit(1)

for pcs_results in report_results.values():
@@ -211,7 +211,7 @@ def validate(self, is_test_validation: bool = False) -> list:
for pcs in self._pcs_list:
try:
if not pcs.is_valid():
dgcv_logging.get_logger("ModelValidation").error(
dgcv_logging.get_logger("Validation").error(
f"{pcs.get_name()} is not a valid PCS"
)
continue
@@ -226,9 +226,9 @@
report_results[pcs.get_name()] = pcs_results
except (LatexReportException, FileNotFoundError, IOError, ValueError) as e:
if dgcv_logging.getEffectiveLevel() == logging.DEBUG:
dgcv_logging.get_logger("ModelValidation").exception(f"Aborted execution. {e}")
dgcv_logging.get_logger("Validation").exception(f"Aborted execution. {e}")
else:
dgcv_logging.get_logger("ModelValidation").error(f"Aborted execution. {e}")
dgcv_logging.get_logger("Validation").error(f"Aborted execution. {e}")
exit(1)

# Create the pcs report
1 change: 1 addition & 0 deletions src/dgcv/core/validator.py
@@ -106,6 +106,7 @@ def validate(
sim_output_path: str,
event_params: dict,
fs: float,
curves: dict,
) -> dict:
"""Virtual method"""
pass