Update docstring of normalize reward #1136

Merged
gymnasium/wrappers/stateful_reward.py (6 changes: 5 additions & 1 deletion)
```diff
@@ -19,7 +19,7 @@
 class NormalizeReward(
     gym.Wrapper[ObsType, ActType, ObsType, ActType], gym.utils.RecordConstructorArgs
 ):
-    r"""Normalizes immediate rewards such that their exponential moving average has a fixed variance.
+    r"""Scales immediate rewards such that their exponential moving average has a fixed variance.
 
     The exponential moving average will have variance :math:`(1 - \gamma)^2`.
 
@@ -29,6 +29,10 @@ class NormalizeReward(
 
     A vector version of the wrapper exists :class:`gymnasium.wrappers.vector.NormalizeReward`.
 
+    Important note:
+        Contrary to what the name suggests, this wrapper does not normalize the rewards to have a mean of 0 and a standard
+        deviation of 1. Instead, it scales the rewards such that their exponential moving average has a fixed variance.
+
     Note:
         In v0.27, NormalizeReward was updated as the forward discounted reward estimate was incorrectly computed in Gym v0.25+.
         For more detail, read [#3154](https://github.com/openai/gym/pull/3152).
```
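To make the point of the reworded docstring concrete, here is a minimal sketch of the scheme it describes: each reward is divided by the running standard deviation of an exponentially discounted return estimate, and the mean is never subtracted. The class below is illustrative only; `RewardScaler` and `scale` are not Gymnasium's API, and the real wrapper keeps its statistics in `RunningMeanStd`.

```python
import numpy as np


class RewardScaler:
    """Illustrative sketch of NormalizeReward's scaling scheme.

    Rewards are divided by the running standard deviation of a
    discounted return estimate; the mean is never subtracted, so the
    output is scaled, not normalized to zero mean / unit variance.
    """

    def __init__(self, gamma: float = 0.99, epsilon: float = 1e-8):
        self.gamma = gamma
        self.epsilon = epsilon
        self.discounted_return = 0.0
        # Simplified stand-in for Gymnasium's RunningMeanStd.
        self.mean, self.var, self.count = 0.0, 1.0, 1e-4

    def scale(self, reward: float, terminated: bool) -> float:
        # Exponential moving estimate of the discounted return,
        # reset when the episode terminates.
        self.discounted_return = (
            self.discounted_return * self.gamma * (1.0 - float(terminated)) + reward
        )
        # Welford-style online update of the return variance.
        self.count += 1
        delta = self.discounted_return - self.mean
        self.mean += delta / self.count
        self.var += (delta * (self.discounted_return - self.mean) - self.var) / self.count
        # Scale (do not center) the immediate reward.
        return reward / np.sqrt(self.var + self.epsilon)
```

Because the reward is only divided by a standard deviation, a stream of constant positive rewards stays positive after wrapping rather than converging to zero mean, which is exactly the behavior the renamed docstring warns about.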
gymnasium/wrappers/vector/stateful_reward.py (6 changes: 5 additions & 1 deletion)
```diff
@@ -18,14 +18,18 @@
 
 
 class NormalizeReward(VectorWrapper, gym.utils.RecordConstructorArgs):
-    r"""This wrapper will normalize immediate rewards s.t. their exponential moving average has a fixed variance.
+    r"""This wrapper will scale immediate rewards s.t. their exponential moving average has a fixed variance.
 
     The exponential moving average will have variance :math:`(1 - \gamma)^2`.
 
     The property `_update_running_mean` allows to freeze/continue the running mean calculation of the reward
     statistics. If `True` (default), the `RunningMeanStd` will get updated every time `self.normalize()` is called.
     If False, the calculated statistics are used but not updated anymore; this may be used during evaluation.
 
+    Important note:
+        Contrary to what the name suggests, this wrapper does not normalize the rewards to have a mean of 0 and a standard
+        deviation of 1. Instead, it scales the rewards such that their exponential moving average has a fixed variance.
+
     Note:
         The scaling depends on past trajectories and rewards will not be scaled correctly if the wrapper was newly
         instantiated or the policy was changed recently.
```
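A short usage sketch of the freeze/continue behavior this docstring describes, assuming the public `update_running_mean` property that Gymnasium exposes over the `_update_running_mean` attribute; the environment id and step counts are arbitrary examples.

```python
import gymnasium as gym
from gymnasium.wrappers.vector import NormalizeReward

# Environment id and step counts are arbitrary examples.
envs = gym.make_vec("CartPole-v1", num_envs=4)
envs = NormalizeReward(envs, gamma=0.99)

# Training: reward statistics are updated on every step.
obs, info = envs.reset(seed=0)
for _ in range(1000):
    obs, rewards, terms, truncs, infos = envs.step(envs.action_space.sample())

# Evaluation: freeze the statistics so rewards are scaled with the
# values gathered during training but no longer updated.
# (Assumes the public `update_running_mean` property backed by the
# `_update_running_mean` attribute mentioned in the docstring.)
envs.update_running_mean = False
obs, info = envs.reset(seed=1)
for _ in range(100):
    obs, rewards, terms, truncs, infos = envs.step(envs.action_space.sample())
```

Freezing matters during evaluation because the scaled rewards are only comparable across runs when the statistics stop drifting between policy snapshots.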