Release 1.6.4
Some quality of life features and minor speed improvements
Breaking changes
- the names of the python files for the "agent" module are now lowercase (compliant with PEP 8). If you did things like "from grid2op.Agent.BaseAgent import BaseAgent" you need to change it to "from grid2op.Agent.baseAgent import BaseAgent" or, even better (this is the preferred way to import them), "from grid2op.Agent import BaseAgent", as illustrated below. It should not affect a lot of code.
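A short illustration of the renaming described above (this only restates the example given in the item, it is not an exhaustive list of affected imports):

```python
# before this release (module file with an uppercase first letter)
# from grid2op.Agent.BaseAgent import BaseAgent

# after this release (lowercase module file name)
from grid2op.Agent.baseAgent import BaseAgent

# preferred way, unaffected by file renaming: import from the package directly
from grid2op.Agent import BaseAgent
```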
Fixed issues
- a bug where a shunt had a voltage when disconnected (when using the pandapower backend)
- a bug that prevented printing the action space if some "part" of it had no size (empty action space)
- a bug that prevented copying an action properly (especially for the alarm)
- a bug that did not "close" the backend of the observation space when the environment was closed. This might be related to #255
New features
- serialization of "current_iter" and "max_iter" in the observation
- the possibility to use the runner only on certain episode ids (see "runner.run(..., episode_id=[xxx, yyy, ...])"); a sketch is given after this list
- a function that returns whether an action has any chance to modify the grid (see "act.can_affect_something()"), also shown in the sketch below
- a type of agent that performs predefined actions from a given list
- basic support for logging in environment and runner (more coming soon)
- possibility to make an environment with an implementation of a reward, instead of relying on a reward class.
- a possible implementation of an N-1 reward
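A minimal sketch of two of the features above ("l2rpn_case14_sandbox" is only an example environment name, any installed environment works; the exact output may differ):

```python
import grid2op
from grid2op.Runner import Runner

env = grid2op.make("l2rpn_case14_sandbox")  # example environment name

# a "do nothing" action has no chance to modify the grid
do_nothing = env.action_space()
print(do_nothing.can_affect_something())  # expected: False

# run the runner only on the episodes whose ids are given
runner = Runner(**env.get_params_for_runner())
res = runner.run(nb_episode=2, episode_id=[0, 1])
```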
Improvements
- the right time stamp is now set in the observation after a game over.
- the current number of steps is now correct when the observation is set to a game over state.
- the documentation now clearly states that the action_class should not be modified.
- the possibility to tell which chronics to use with the result of "env.chronics_handler.get_id()" (this is also compatible with the runner); a sketch is given after this list
- it is no longer possible to call "env.reset()" or "env.step()" after an environment has been closed: a clean error is raised in this case.
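A minimal sketch of reusing a chronic id (this assumes, as described in the item above, that the value returned by "env.chronics_handler.get_id()" can be fed back to "env.set_id(...)"; the environment name is again only an example):

```python
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")  # example environment name
obs = env.reset()

# remember which scenario (chronic) was just loaded
chron_id = env.chronics_handler.get_id()

# ... interact with the environment ...

# replay the exact same scenario later on
env.set_id(chron_id)
obs = env.reset()
```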