Commit

Rename to gymnasium
pseudo-rnd-thoughts committed Sep 8, 2022
1 parent 316f616 commit 640c509
Showing 289 changed files with 1,442 additions and 1,448 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -14,4 +14,4 @@ jobs:
--build-arg PYTHON_VERSION=${{ matrix.python-version }} \
--tag gym-docker .
- name: Run tests
run: docker run gym-docker pytest
run: docker run gymnasium-docker pytest
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -15,7 +15,7 @@ repos:
hooks:
- id: flake8
args:
- '--per-file-ignores=*/__init__.py:F401 gym/envs/registration.py:E704'
- '--per-file-ignores=*/__init__.py:F401 gymnasium/envs/registration.py:E704'
- --ignore=E203,W503,E741
- --max-complexity=30
- --max-line-length=456
@@ -30,7 +30,7 @@ repos:
rev: 6.1.1 # pick a git hash / tag to point to
hooks:
- id: pydocstyle
exclude: ^(gym/version.py)|(gym/envs/)|(tests/)
exclude: ^(gymnasium/version.py)|(gymnasium/envs/)|(tests/)
args:
- --source
- --explain
10 changes: 2 additions & 8 deletions CONTRIBUTING.md
@@ -1,21 +1,15 @@
# Gym Contribution Guidelines
# Gymnasium Contribution Guidelines

At this time we are accepting the following forms of contributions:

- Bug reports (keep in mind that changing environment behavior should be minimized as that requires releasing a new version of the environment and makes results hard to compare across versions)
- Pull requests for bug fixes
- Documentation improvements
- Features

Notably, we are not accepting these forms of contributions:

- New environments
- New features

This may change in the future.
If you wish to make a Gym environment, follow the instructions in [Creating Environments](https://github.com/openai/gym/blob/master/docs/creating_environments.md). When your environment works, you can make a PR to add it to the bottom of the [List of Environments](https://github.com/openai/gym/blob/master/docs/third_party_environments.md).


Edit July 27, 2021: Please see https://github.com/openai/gym/issues/2259 for new contributing standards

# Development
This section contains technical instructions & hints for the contributors.
8 changes: 4 additions & 4 deletions README.md
@@ -25,13 +25,13 @@ We support Python 3.7, 3.8, 3.9 and 3.10 on Linux and macOS. We will accept PRs
The Gym API models environments as simple Python `env` classes. Creating environment instances and interacting with them is very simple; here's an example using the "CartPole-v1" environment:

```python
import gym
import gymnasium as gym
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

for _ in range(1000):
action = env.action_space.sample()
observation, reward, terminated, truncarted, info = env.step(action)
observation, reward, terminated, truncated, info = env.step(action)

if terminated or truncated:
observation, info = env.reset()
@@ -43,7 +43,7 @@ env.close()
* [Stable Baselines 3](https://github.com/DLR-RM/stable-baselines3) is a learning library based on the Gym API. It is designed to cater to complete beginners in the field who want to start learning things quickly.
* [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo) builds upon SB3, containing optimal hyperparameters for Gym environments as well as code to easily find new ones.
* [Tianshou](https://github.com/thu-ml/tianshou) is a learning library that's geared towards very experienced users and is designed to allow for ease in complex algorithm modifications.
* [RLlib](https://docs.ray.io/en/latest/rllib/index.html) is a learning library that allows for distributed training and inferencing and supports an extraordinarily large number of features throughout the reinforcement learning space.
* [RLlib](https://docs.ray.io/en/latest/rllib/index.html) is a learning library that allows for distributed training and inference and supports an extraordinarily large number of features throughout the reinforcement learning space.
* [PettingZoo](https://github.com/Farama-Foundation/PettingZoo) is like Gym, but for environments with multiple agents.

## Environment Versioning
@@ -70,4 +70,4 @@ A whitepaper from when Gym just came out is available https://arxiv.org/pdf/1606

## Release Notes

There used to be release notes for all the new Gym versions here. New release notes are being moved to [releases page](https://github.com/openai/gym/releases) on GitHub, like most other libraries do. Old notes can be viewed [here](https://github.com/openai/gym/blob/31be35ecd460f670f0c4b653a14c9996b7facc6c/README.rst).
There used to be release notes for all the new Gym versions here. New release notes are being moved to [releases page](https://github.com/Farama-Foundation/Gymnasium/releases) on GitHub, like most other libraries do.
3 changes: 0 additions & 3 deletions gym/envs/box2d/__init__.py

This file was deleted.

5 changes: 0 additions & 5 deletions gym/envs/classic_control/__init__.py

This file was deleted.

19 changes: 0 additions & 19 deletions gym/envs/mujoco/__init__.py

This file was deleted.

4 changes: 0 additions & 4 deletions gym/envs/toy_text/__init__.py

This file was deleted.

23 changes: 0 additions & 23 deletions gym/vector/utils/__init__.py

This file was deleted.

23 changes: 0 additions & 23 deletions gym/wrappers/__init__.py

This file was deleted.

18 changes: 9 additions & 9 deletions gym/__init__.py → gymnasium/__init__.py
@@ -1,21 +1,21 @@
"""Root __init__ of the gym module setting the __all__ of gym modules."""
"""Root __init__ of the gymnasium module setting the __all__ of gymnasium modules."""
# isort: skip_file

from gym import error
from gym.version import VERSION as __version__
from gymnasium import error
from gymnasium.version import VERSION as __version__

from gym.core import (
from gymnasium.core import (
Env,
Wrapper,
ObservationWrapper,
ActionWrapper,
RewardWrapper,
)
from gym.spaces import Space
from gym.envs import make, spec, register
from gym import logger
from gym import vector
from gym import wrappers
from gymnasium.spaces import Space
from gymnasium.envs import make, spec, register
from gymnasium import logger
from gymnasium import vector
from gymnasium import wrappers
import os
import sys

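The hunk above only retargets the imports; the names re-exported at the package root stay the same. As a quick illustration, a minimal sketch (assuming `gymnasium` is installed and `CartPole-v1` is registered) of using those re-exports:

```python
import gymnasium

# make, Env and Space are re-exported at the package root, per the diff above.
env = gymnasium.make("CartPole-v1")
assert isinstance(env, gymnasium.Env)                 # wrappers returned by make subclass Env
assert isinstance(env.action_space, gymnasium.Space)  # spaces.Space, re-exported as gymnasium.Space
print(gymnasium.__version__)                          # VERSION re-exported as __version__
env.close()
```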
22 changes: 11 additions & 11 deletions gym/core.py → gymnasium/core.py
@@ -15,12 +15,12 @@

import numpy as np

from gym import spaces
from gym.logger import warn
from gym.utils import seeding
from gymnasium import spaces
from gymnasium.logger import warn
from gymnasium.utils import seeding

if TYPE_CHECKING:
from gym.envs.registration import EnvSpec
from gymnasium.envs.registration import EnvSpec

if sys.version_info[0:2] == (3, 6):
warn(
@@ -51,7 +51,7 @@ class Env(Generic[ObsType, ActType]):
- :attr:`action_space` - The Space object corresponding to valid actions
- :attr:`observation_space` - The Space object corresponding to valid observations
- :attr:`reward_range` - A tuple corresponding to the minimum and maximum possible rewards
- :attr:`spec` - An environment spec that contains the information used to initialise the environment from `gym.make`
- :attr:`spec` - An environment spec that contains the information used to initialise the environment from `gymnasium.make`
- :attr:`metadata` - The metadata of the environment, i.e. render modes
- :attr:`np_random` - The random number generator for the environment
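The attribute list above is the contract a custom environment fills in. As a rough illustration only (this environment is hypothetical, and the five-tuple `step` return follows the README example earlier in this commit), a minimal subclass sketch:

```python
import numpy as np
import gymnasium
from gymnasium import spaces

class ConstantEnv(gymnasium.Env):
    """A made-up environment that terminates after a single step."""

    metadata = {"render_modes": []}

    def __init__(self):
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        return np.zeros(1, dtype=np.float32), {}

    def step(self, action):
        # observation, reward, terminated, truncated, info
        return np.zeros(1, dtype=np.float32), 0.0, True, False, {}
```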
@@ -188,7 +188,7 @@ def unwrapped(self) -> "Env":
"""Returns the base non-wrapped environment.
Returns:
Env: The base non-wrapped gym.Env instance
Env: The base non-wrapped gymnasium.Env instance
"""
return self
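`unwrapped` strips any wrappers applied on top of the base environment. A short sketch (assuming the default wrappers that `gymnasium.make` applies, such as a time limit):

```python
import gymnasium

env = gymnasium.make("CartPole-v1")   # typically returns a wrapped env (e.g. TimeLimit)
base = env.unwrapped                  # the raw environment instance, with no wrappers
print(type(env).__name__, "->", type(base).__name__)
env.close()
```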

@@ -362,7 +362,7 @@ class ObservationWrapper(Wrapper):
``observation["target_position"] - observation["agent_position"]``. For this, you could implement an
observation wrapper like this::
class RelativePosition(gym.ObservationWrapper):
class RelativePosition(gymnasium.ObservationWrapper):
def __init__(self, env):
super().__init__(env)
self.observation_space = Box(shape=(2,), low=-np.inf, high=np.inf)
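The docstring example is cut off by the collapsed hunk. A complete sketch of such a wrapper, assuming the wrapped environment returns a dict observation with `target_position` and `agent_position` keys as described above:

```python
import numpy as np
import gymnasium
from gymnasium.spaces import Box

class RelativePosition(gymnasium.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        self.observation_space = Box(shape=(2,), low=-np.inf, high=np.inf)

    def observation(self, obs):
        # Replace the dict observation with the vector from agent to target.
        return obs["target_position"] - obs["agent_position"]
```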
@@ -402,7 +402,7 @@ class RewardWrapper(Wrapper):
because it is intrinsic), we want to clip the reward to a range to gain some numerical stability.
To do that, we could, for instance, implement the following wrapper::
class ClipReward(gym.RewardWrapper):
class ClipReward(gymnasium.RewardWrapper):
def __init__(self, env, min_reward, max_reward):
super().__init__(env)
self.min_reward = min_reward
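The `ClipReward` example is likewise truncated. A complete sketch consistent with the visible lines; the `max_reward` assignment and the `reward` method are the assumed missing pieces filled in here:

```python
import numpy as np
import gymnasium

class ClipReward(gymnasium.RewardWrapper):
    def __init__(self, env, min_reward, max_reward):
        super().__init__(env)
        self.min_reward = min_reward
        self.max_reward = max_reward
        self.reward_range = (min_reward, max_reward)

    def reward(self, reward):
        # Clip every reward into [min_reward, max_reward] for numerical stability.
        return np.clip(reward, self.min_reward, self.max_reward)
```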
@@ -433,10 +433,10 @@ class ActionWrapper(Wrapper):
In that case, you need to specify the new action space of the wrapper by setting :attr:`self.action_space` in
the :meth:`__init__` method of your wrapper.
Let’s say you have an environment with action space of type :class:`gym.spaces.Box`, but you would only like
Let’s say you have an environment with action space of type :class:`gymnasium.spaces.Box`, but you would only like
to use a finite subset of actions. Then, you might want to implement the following wrapper::
class DiscreteActions(gym.ActionWrapper):
class DiscreteActions(gymnasium.ActionWrapper):
def __init__(self, env, disc_to_cont):
super().__init__(env)
self.disc_to_cont = disc_to_cont
@@ -446,7 +446,7 @@ def action(self, act):
return self.disc_to_cont[act]
if __name__ == "__main__":
env = gym.make("LunarLanderContinuous-v2")
env = gymnasium.make("LunarLanderContinuous-v2")
wrapped_env = DiscreteActions(env, [np.array([1,0]), np.array([-1,0]),
np.array([0,1]), np.array([0,-1])])
print(wrapped_env.action_space) #Discrete(4)
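Joining the two visible hunks of the `DiscreteActions` example gives a self-contained sketch; the `Discrete(len(disc_to_cont))` action-space line is an assumption (it sits in the collapsed part of the docstring), and `LunarLanderContinuous-v2` requires the box2d extras to be installed:

```python
import numpy as np
import gymnasium
from gymnasium.spaces import Discrete

class DiscreteActions(gymnasium.ActionWrapper):
    def __init__(self, env, disc_to_cont):
        super().__init__(env)
        self.disc_to_cont = disc_to_cont
        self.action_space = Discrete(len(disc_to_cont))  # assumed; not shown in the collapsed hunk

    def action(self, act):
        # Map the discrete index back to the wrapped env's continuous action.
        return self.disc_to_cont[act]

if __name__ == "__main__":
    env = gymnasium.make("LunarLanderContinuous-v2")
    wrapped_env = DiscreteActions(
        env,
        [np.array([1, 0]), np.array([-1, 0]), np.array([0, 1]), np.array([0, -1])],
    )
    print(wrapped_env.action_space)  # Discrete(4)
```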
