Update docstring of normalize reward #1136

Merged
gymnasium/wrappers/stateful_reward.py (8 changes: 7 additions & 1 deletion)
@@ -19,8 +19,9 @@
 class NormalizeReward(
     gym.Wrapper[ObsType, ActType, ObsType, ActType], gym.utils.RecordConstructorArgs
 ):
-    r"""Normalizes immediate rewards such that their exponential moving average has a fixed variance.
+    r"""This wrapper will scale rewards s.t. the discounted returns have a mean of 0 and std of 1.
 
+    In a nutshell, the rewards are divided through by the standard deviation of a rolling discounted sum of the reward.
     The exponential moving average will have variance :math:`(1 - \gamma)^2`.
 
     The property `_update_running_mean` allows to freeze/continue the running mean calculation of the reward
@@ -29,6 +30,11 @@ class NormalizeReward(
 
     A vector version of the wrapper exists :class:`gymnasium.wrappers.vector.NormalizeReward`.
 
+    Important note:
+        Contrary to what the name suggests, this wrapper does not normalize the rewards to have a mean of 0 and a standard
+        deviation of 1. Instead, it scales the rewards such that **discounted returns** have approximately unit variance.
+        See [Engstrom et al.](https://openreview.net/forum?id=r1etN1rtPB) on "reward scaling" for more information.
+
     Note:
         In v0.27, NormalizeReward was updated as the forward discounted reward estimate was incorrectly computed in Gym v0.25+.
         For more detail, read [#3154](https://github.com/openai/gym/pull/3152).
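The "In a nutshell" line added above summarizes the mechanism: keep a rolling discounted sum of the rewards, estimate that sum's variance online, and divide each incoming reward by its standard deviation (the "reward scaling" discussed by Engstrom et al.). The sketch below is an illustrative re-implementation of that idea, not the wrapper's actual code; `RunningVariance`, `RewardScaler`, and their defaults are stand-ins that only mirror the wrapper's `gamma` and `epsilon` arguments.

```python
import numpy as np


class RunningVariance:
    """Simplified stand-in for gymnasium's RunningMeanStd: an online mean/variance estimate."""

    def __init__(self):
        self.mean, self.var, self.count = 0.0, 1.0, 1e-4

    def update(self, x: float) -> None:
        # Incremental (Welford-style) single-sample update of the running mean and variance.
        self.count += 1.0
        delta = x - self.mean
        self.mean += delta / self.count
        self.var += (delta * (x - self.mean) - self.var) / self.count


class RewardScaler:
    """Scales each reward by the std of a rolling discounted sum of rewards."""

    def __init__(self, gamma: float = 0.99, epsilon: float = 1e-8):
        self.gamma = gamma          # discount factor, as in the wrapper's constructor
        self.epsilon = epsilon      # numerical floor, as in the wrapper's constructor
        self.discounted_return = 0.0
        self.stats = RunningVariance()
        self.update_running_mean = True  # set False to freeze the statistics (e.g. during evaluation)

    def scale(self, reward: float) -> float:
        # Rolling discounted sum of the rewards seen so far.
        self.discounted_return = self.gamma * self.discounted_return + reward
        if self.update_running_mean:
            self.stats.update(self.discounted_return)
        # The reward itself is divided by the standard deviation of that rolling sum.
        return reward / float(np.sqrt(self.stats.var + self.epsilon))
```

The actual wrapper additionally resets the rolling sum at episode boundaries and, in the vector case, keeps one rolling sum per sub-environment; only the core scaling step is shown here.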
gymnasium/wrappers/vector/stateful_reward.py (8 changes: 7 additions & 1 deletion)
@@ -18,14 +18,20 @@
 
 
 class NormalizeReward(VectorWrapper, gym.utils.RecordConstructorArgs):
-    r"""This wrapper will normalize immediate rewards s.t. their exponential moving average has a fixed variance.
+    r"""This wrapper will scale rewards s.t. the discounted returns have a mean of 0 and std of 1.
 
+    In a nutshell, the rewards are divided through by the standard deviation of a rolling discounted sum of the reward.
     The exponential moving average will have variance :math:`(1 - \gamma)^2`.
 
     The property `_update_running_mean` allows to freeze/continue the running mean calculation of the reward
     statistics. If `True` (default), the `RunningMeanStd` will get updated every time `self.normalize()` is called.
     If False, the calculated statistics are used but not updated anymore; this may be used during evaluation.
 
+    Important note:
+        Contrary to what the name suggests, this wrapper does not normalize the rewards to have a mean of 0 and a standard
+        deviation of 1. Instead, it scales the rewards such that **discounted returns** have approximately unit variance.
+        See [Engstrom et al.](https://openreview.net/forum?id=r1etN1rtPB) on "reward scaling" for more information.
+
     Note:
        The scaling depends on past trajectories and rewards will not be scaled correctly if the wrapper was newly
        instantiated or the policy was changed recently.
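For context on the `_update_running_mean` flag that both docstrings mention, a typical usage pattern might look like the following hypothetical sketch; the environment id, seed, and step counts are placeholders rather than anything from this PR.

```python
import gymnasium as gym
from gymnasium.wrappers import NormalizeReward

env = NormalizeReward(gym.make("CartPole-v1"), gamma=0.99)

# Training-time interaction: the running reward statistics update on every step.
obs, info = env.reset(seed=0)
for _ in range(1_000):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()

# Evaluation: freeze the statistics so rewards keep being scaled with the current
# estimate, but the estimate itself stops changing (the flag named in the docstring).
env._update_running_mean = False
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()
```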
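The vector variant touched by the second file would be used the same way on a batched environment; again, an illustrative sketch rather than code from the PR, with `make_vec` arguments chosen arbitrarily.

```python
import gymnasium as gym
from gymnasium.wrappers.vector import NormalizeReward

envs = NormalizeReward(gym.make_vec("CartPole-v1", num_envs=4), gamma=0.99)

obs, info = envs.reset(seed=0)
for _ in range(250):
    actions = envs.action_space.sample()  # one action per sub-environment
    obs, rewards, terminations, truncations, infos = envs.step(actions)
envs.close()
```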