Load and Generation Shedding #655

Open · wants to merge 21 commits into base: dev_turnoff_gen_load from dev_shedding

Conversation

DEUCE1957 (Contributor) commented Nov 4, 2024

Based on Issue #623, this PR aims to allow loads, generators, and energy storage to be disconnected from the grid by an agent without triggering an immediate game over.

The current implementation makes use of the existing 'set_bus' action. If an element's bus is set to -1, it is considered disconnected: it has been shed from the grid. For this to work, the backend must support allow_shedding (currently only supported by the PandaPowerBackend; see LightSim2Grid issue 92 for the request to add support to LightSim2Grid).
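The set_bus = -1 convention can be sketched in plain Python. This is a minimal illustration only, not the actual grid2op API; the function name `is_detached` and the toy data layout are assumptions:

```python
# Illustrative sketch of the topology-vector convention described above
# (not the actual grid2op API).
def is_detached(topo_vect, positions):
    """True if any element at the given topo_vect positions is on bus -1."""
    return any(topo_vect[p] == -1 for p in positions)

# A toy topology vector: the element at index 3 has been shed (bus -1).
topo_vect = [1, 2, 1, -1, 1]
load_pos_topo_vect = [0, 3]   # where the loads live in topo_vect
gen_pos_topo_vect = [1, 4]    # where the generators live

print(is_detached(topo_vect, load_pos_topo_vect))  # True: a load is shed
print(is_detached(topo_vect, gen_pos_topo_vect))   # False
```

In the real code the positions come from the class attributes `load_pos_topo_vect` and `gen_pos_topo_vect` discussed later in this thread.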

  • Experiments / Notebook
  • Action Support (already included in set_bus)
  • Parameter support (only allow shedding if explicitly enabled)
  • Environment Support
  • Backend Support
  • PandaPowerBackend Support
  • Observation Support (visible in topology vector)
  • Backward compatibility
  • Support for critical elements (elements that cannot be disconnected, e.g. to represent a hospital)
  • Unit Tests
  • Documentation

@DEUCE1957 DEUCE1957 changed the base branch from master to dev_1.11.0 November 4, 2024 10:46
@DEUCE1957 DEUCE1957 force-pushed the dev_shedding branch 2 times, most recently from 204afe5 to 5c0ba18 Compare November 5, 2024 07:22
@DEUCE1957 DEUCE1957 marked this pull request as ready for review November 5, 2024 08:16
@BDonnot (Collaborator) left a comment

Hello,

I know you have not finished, but I had some time and did some early review on this PR.

First, thanks a lot for everything here. I would say it's almost done: missing deeper tests for the core environment feature, plus the doc (including a description of the Markov Decision Process).

Globally, I would slightly modify the way you handle things in the Backend (see related comments)

I would also add (but you can merge this PR and we'll do that later) dedicated observation and action attributes. By doing so, you will probably also need to add some environment attributes: a mask for the loads and one for the generators saying whether or not each has been "switched off".

And (I don't blame you at all on this; I never documented it, and no one except you has gone that far in coding a new grid2op feature) you are missing the implementation of a few key methods of the Environment (and related classes) and GridObjects, which I tried to list in some of my comments :-)

Anyway thanks a lot, super nice job (and fast! ) :-)

@@ -179,6 +179,8 @@ def __init__(self,
#: There is a difference between this and the class attribute.
#: You should not worry about the class attribute of the backend in :func:`Backend.apply_action`
self.n_busbar_per_sub: int = DEFAULT_N_BUSBAR_PER_SUB

self.set_shedding(allow_shedding)
BDonnot (Collaborator):

Instead of using this kind of method to tell the backend to support "shedding" (which I would not call shedding, but rather "disconnecting" or something else; I'm not a fan of "disconnecting" either), can you do the same thing as I did for support_multiple_busbar_per_substation?

That is: have a default attribute (in my case _missing_two_busbars_support_info, so in your case something like _missing_disconnecting_info) that the creator of the backend can either activate in its backend (by calling, in my case, can_handle_more_than_2_busbar) or deactivate (by calling explicitly cannot_handle_more_than_2_busbar). A call to either of these functions toggles the _missing_two_busbars_support_info flag to False. Then have another attribute for the setting itself (in my example n_busbar_per_sub, in your example the allow_shedding flag, which in my opinion should be private).
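The capability-flag pattern being suggested can be sketched as follows. Names like `can_handle_detachment` are placeholders mirroring `can_handle_more_than_2_busbar`, not the final API:

```python
class SketchBackend:
    """Toy illustration of the capability-flag pattern described above."""

    def __init__(self):
        # Stays True until the backend author explicitly declares support
        # (or lack of it); this lets grid2op detect older backends whose
        # authors never made a choice and fall back to safe behaviour.
        self._missing_detachment_support_info = True
        self._allow_detachment = False  # the setting itself, kept private

    def can_handle_detachment(self):
        """Called by a backend implementation that supports detachment."""
        self._missing_detachment_support_info = False
        self._allow_detachment = True

    def cannot_handle_detachment(self):
        """Called explicitly by a backend that does not support it."""
        self._missing_detachment_support_info = False
        self._allow_detachment = False
```

Either call flips the "missing info" flag to False, so the framework can distinguish "explicitly unsupported" from "author never said".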

DEUCE1957 (Contributor, Author):

I have changed the interface for this now to match the missing-busbars approach. Since n_busbars is not private, should I not keep that the same too if I make the _missing_detachment_support attribute private?

if allow_shedding:
raise BackendError("Backend does not support shedding")
else:
self.allow_shedding = allow_shedding
BDonnot (Collaborator):

Can you make this attribute "private"? By convention in Python, the name of a "private" attribute should start with _.
So this would be self._allow_shedding = ...

Comment on lines 1029 to 1039
if not self.allow_shedding and (~self._grid.load["in_service"]).any():
# TODO see if there is a better way here -> do not handle this here, but rather in Backend._next_grid_state
raise pp.powerflow.LoadflowNotConverged("Disconnected load: for now grid2op cannot handle properly"
" disconnected load. If you want to disconnect one, say it"
" consumes 0. instead. Please check loads: "
f"{(~self._grid.load['in_service'].values).nonzero()[0]}"
)
if (~self._grid.gen["in_service"]).any():
if not self.allow_shedding and (~self._grid.gen["in_service"]).any():
# TODO see if there is a better way here -> do not handle this here, but rather in Backend._next_grid_state
raise pp.powerflow.LoadflowNotConverged("Disconnected gen: for now grid2op cannot handle properly"
" disconnected generators. If you want to disconnect one, say it"
BDonnot (Collaborator):

This should rather be done in the function used by the Environment, that is next_grid_state, which in turn calls the _runpf_with_diverging_exception function.

So I think all these checks (same for the checks in DC, by the way) should be done within the _runpf_with_diverging_exception function, and not here :)
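The suggested relocation can be sketched like this. These are simplified stand-ins for the real classes; only the method names follow the comment, everything else is assumed:

```python
class BaseBackend:
    """Simplified stand-in for grid2op's Backend base class."""
    _allow_detachment = False
    load_pos_topo_vect = [0]      # toy positions, for illustration only
    gen_pos_topo_vect = [1]

    def get_topo_vect(self):      # implemented by each concrete backend
        raise NotImplementedError

    def runpf(self, is_dc=False): # implemented by each concrete backend
        raise NotImplementedError

    def _runpf_with_diverging_exception(self, is_dc=False):
        # Backend-agnostic check, done once here instead of being
        # duplicated inside every concrete backend (e.g. PandaPowerBackend).
        topo = self.get_topo_vect()
        for pos in list(self.load_pos_topo_vect) + list(self.gen_pos_topo_vect):
            if topo[pos] == -1 and not self._allow_detachment:
                raise RuntimeError("element detached but detachment not allowed")
        return self.runpf(is_dc=is_dc)

class ToyBackend(BaseBackend):
    def get_topo_vect(self):
        return [1, -1]            # the generator is detached
    def runpf(self, is_dc=False):
        return True
```

With the defaults, `ToyBackend()._runpf_with_diverging_exception()` raises; after setting `_allow_detachment = True`, the powerflow runs normally.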

BDonnot (Collaborator):

(To be more explicit: this should be in the Backend, made by reading data from the backend (instead of using self._grid), and removed from the PandaPowerBackend.)

DEUCE1957 (Contributor, Author):

Do you mean I should also get rid of the original if (~self._grid.gen["in_service"]).any(): check within the PandaPowerBackend (i.e. not something I wrote), and move all of these checks into the Backend class?

BDonnot (Collaborator):

Yes, exactly: all these checks have nothing to do in a specific backend implementation (in this case PandaPowerBackend) and should be moved to the base backend class (Backend).
And when doing this, the checks should be made independent of the backend implementation, so use only functions of the public Backend API.

DEUCE1957 (Contributor, Author):

I had to think a bit about how to check this in a backend-agnostic way, since other backends have no 'in_service' attribute. My solution was to use the topo_vect and check whether any bus is -1 (detached). The check now lives in the _runpf_with_diverging_exception method.

@@ -334,6 +334,7 @@ def __init__(
highres_sim_counter=None,
update_obs_after_reward=False,
n_busbar=2,
BDonnot (Collaborator):

n_busbar=2 should be n_busbar = DEFAULT_N_BUSBAR_PER_SUB (I know it's not your mistake, it's mine, but I just spotted it :-)

DEUCE1957 (Contributor, Author):

Fixed

@@ -334,6 +334,7 @@ def __init__(
highres_sim_counter=None,
update_obs_after_reward=False,
n_busbar=2,
allow_shedding:bool=False,
BDonnot (Collaborator):

allow_shedding=False should rather be allow_shedding = DEFAULT_ALLOW_SHEDDING (plus see the first comment I made about the name "shedding").

@@ -86,6 +86,7 @@ def __init__(
parameters,
name="unknown",
n_busbar : N_BUSBAR_PER_SUB_TYPING=DEFAULT_N_BUSBAR_PER_SUB,
allow_shedding:bool=DEFAULT_ALLOW_SHEDDING,
BDonnot (Collaborator):

When you have another attribute like this one in the constructor, you also need to implement:

  • get_kwargs
  • get_params_for_runner, which in turn requires you to change the Runner class (its __init__)
  • init_obj_from_kwargs, and to modify the MaskedEnvironment and TimedOutEnvironment classes
  • the ObsEnv and the ObservationSpace too, to propagate this feature to obs.simulate and co
  • maybe in this case the Simulator too
  • you might also need to adapt _custom_deepcopy_for_copy, depending on whether or not you added an attribute to the Environment class
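The first item on that list (echoing every constructor argument back out of get_kwargs) can be illustrated with a toy example; the class and attribute names here are assumptions, not the real grid2op classes:

```python
class ToyEnvironment:
    """Toy illustration: a new constructor argument must be echoed back
    by get_kwargs, or copies, Runner workers and obs.simulate will
    silently rebuild the environment with the default value."""

    def __init__(self, n_busbar=2, allow_detachment=False):
        self._n_busbar = n_busbar
        self._allow_detachment = allow_detachment

    def get_kwargs(self):
        return {
            "n_busbar": self._n_busbar,
            "allow_detachment": self._allow_detachment,  # easy to forget
        }

env = ToyEnvironment(allow_detachment=True)
clone = ToyEnvironment(**env.get_kwargs())  # how a Runner rebuilds the env
print(clone._allow_detachment)              # True: the setting survived
```

If `allow_detachment` were missing from the returned dict, `clone` would quietly come back with the default `False`, which is exactly the class of bug the checklist above guards against.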

@@ -127,6 +127,7 @@ def make_from_dataset_path(
logger=None,
experimental_read_from_local_dir=False,
n_busbar=2,
BDonnot (Collaborator):

n_busbar = DEFAULT_N_BUSBAR_PER_SUB would be better :-)

@@ -127,6 +127,7 @@ def make_from_dataset_path(
logger=None,
experimental_read_from_local_dir=False,
n_busbar=2,
allow_shedding=False,
BDonnot (Collaborator):

allow_shedding = DEFAULT_ALLOW_SHEDDING would be better here ;-)

@@ -513,6 +514,7 @@ class GridObjects:

sub_info : ClassVar[np.ndarray] = None
dim_topo : ClassVar[np.ndarray] = -1
allow_shedding : ClassVar[bool] = DEFAULT_ALLOW_SHEDDING
BDonnot (Collaborator):

If you add a class attribute here, you probably need to implement:

  • make_cls_dict (not sure you did)
  • from_dict (which I believe you did, but I'm not sure)
  • _get_full_cls_str (almost certain you did not do it)

Yeah, I know, adding stuff in grid2op can be a pain :-( lots of glue code everywhere :-(
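The serialization round trip those methods implement can be sketched as follows. This is heavily simplified; the real make_cls_dict/from_dict handle many more attributes, and the names below are only stand-ins:

```python
class ToyGridObjects:
    """Simplified stand-in for GridObjects class-level serialization."""
    dim_topo = -1
    allow_detachment = False          # the new class attribute

    @classmethod
    def make_cls_dict(cls):
        # Every class attribute must be exported here...
        return {"dim_topo": cls.dim_topo,
                "allow_detachment": cls.allow_detachment}

    @classmethod
    def from_dict(cls, dct):
        # ...and re-imported here, otherwise the value is lost whenever
        # classes are rebuilt (e.g. in subprocesses or from saved files).
        return type("ToyGridObjects_local", (cls,), dict(dct))

ToyGridObjects.allow_detachment = True
rebuilt = ToyGridObjects.from_dict(ToyGridObjects.make_cls_dict())
print(rebuilt.allow_detachment)       # True: survived the round trip
```

Forgetting the new key in either method would make the rebuilt class silently fall back to the class default, which is why the stored-dict unit tests mentioned below are meant to break.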

DEUCE1957 (Contributor, Author) commented Nov 6, 2024:

My ethos was to change as few things as possible, since this will break most unit tests as well (for instance, if I include allow_shedding in the dict representation, it will break a bunch of unit tests that check against a stored dict which does not include that attribute yet). Is there some way we could make those unit tests more dynamic (i.e. not hard-code the dictionary, string, etc. to compare against)?

BDonnot (Collaborator):

I specifically want these unit tests to break when variables are added to the GridObjects, so that I remember to change a few things here and there.

Yes, there would be ways, but I prefer not to, because adding this kind of attribute should be done with extreme caution :-)
(You probably need to change something in test_Action.py and test_Observation.py, and that's it, if I remember correctly.)

DEUCE1957 (Contributor, Author):

I see; if it is intentional for those to break, then they are successfully serving their function. I will try to address this soon!

_ = self.env.step(act)
obs, _, done, _ = self.env.step(self.env.action_space({}))
assert not done
assert obs.topo_vect[load_pos] == -1
BDonnot (Collaborator):

You also need to test:

  • Runner
  • environment copied
  • MultiMix environment
  • TimedOutEnvironment
  • MaskedEnvironment

(yes at this stage grid2op is far from "efficient"...)
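That checklist boils down to running the same behavioural test over every environment flavour, including copies. A minimal pattern, with a dummy class standing in for the real Environment / MaskedEnvironment / TimedOutEnvironment classes (all names here are illustrative):

```python
import copy

class DummyEnv:
    """Stands in for the various grid2op environment flavours."""
    def __init__(self, allow_detachment=True):
        self._allow_detachment = allow_detachment

def check_detachment(env):
    # The real test would step the env with a set_bus -1 action and
    # assert the episode does not end; here we only check the flag.
    assert env._allow_detachment

base = DummyEnv()
variants = [base, copy.deepcopy(base)]   # copies must keep the feature
for env in variants:
    check_detachment(env)
print("all variants passed")
```

In the real test suite the `variants` list would be built from the Runner, the MultiMix environment, and the masked/timed-out wrappers, so one behavioural check covers them all.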

@BDonnot BDonnot changed the base branch from dev_1.11.0 to dev_turnoff_gen_load November 5, 2024 15:50
@BDonnot (Collaborator) commented Nov 5, 2024

(Oh and also I changed it to the ad hoc branch I made)

@DEUCE1957 (Contributor, Author):

Hi Benjamin,

There seems to be an issue with the validity of get_topo_vect() inside the Backend.

This is how I currently check whether the topology has disconnected a load or generator; the _get_topo_vect call is a hack I do not want to keep, but in some cases it seems to be more up-to-date than get_topo_vect().

            # Check if loads/gens have been detached and if this is allowed, otherwise raise an error
            # .. versionadded:: 1.11.0
            if hasattr(self, "_get_topo_vect"):
                topo_vect = self._get_topo_vect()
            else:
                topo_vect = self.get_topo_vect()
            load_buses = topo_vect[self.load_pos_topo_vect]
            if not self._allow_detachment and (load_buses == -1).any():
                raise Grid2OpException(f"One or more loads were detached before powerflow in Backend {type(self).__name__}"
                                        "but this is not allowed or not supported (Game Over)")

            gen_buses = topo_vect[self.gen_pos_topo_vect]
            if not self._allow_detachment and (gen_buses == -1).any():
                raise Grid2OpException(f"One or more generators were detached before powerflow in Backend {type(self).__name__}"
                                        "but this is not allowed or not supported (Game Over)")
            
            conv, exc_me = self.runpf(is_dc=is_dc)  # run powerflow

For instance, in test_AlarmFeature.py I was failing on the assert not done:

        act_ko1 = self.env.action_space()
        act_ko1.line_set_status = [(56, -1)]
        obs, reward, done, info = self.env.step(act_ko1)
        assert not done

Which I could resolve by using the private _get_topo_vect() instead of get_topo_vect().

However, the private version does not always exist, for instance in test_issue_125:

    def test_issue_125(self):
        # https://github.com/Grid2Op/grid2op/issues/125
        self.skip_if_needed()
        backend = self.make_backend_with_glue_code()
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore")
            env = grid2op.make("rte_case14_realistic", test=True, backend=backend, _add_to_name=type(self).__name__)
        action = env.action_space({"set_bus": {"loads_id": [(1, -1)]}})
        obs, reward, am_i_done, info = env.step(action)
        assert info["is_illegal"] is False
        assert info["is_ambiguous"] is False
        assert len(info["exception"])

The assert len(info["exception"]) fails because no exception is thrown: _get_topo_vect is not available inside the Backend, and get_topo_vect() gives the wrong topology (all 1s, even though the action clearly should result in that element going to bus -1).

It is possible the error in test_issue_125 is related to this ERR_INIT_POWERFLOW, but then I'd expect that to also be an issue without allow_detachment being implemented:

ERR_INIT_POWERFLOW = "Power cannot be computed on the first time step, please check your data."

Signed-off-by: Xavier Weiss <[email protected]>
sonarcloud bot commented Nov 24, 2024
