wrap_env for custom tensor-based environment #97
-
Hi @Toni-SM , Thank you for the great library. I'm encountering some difficulties when trying to wrap a custom vectorized torch-based environment.
-
Hi @khanhphan1311

It is not necessary to wrap an environment as long as it returns the variables and types required by the skrl trainers (as shown in the figure on the Wrapping page in skrl's docs). For example (for PyTorch): if your environment uses gym/gymnasium, and the class properties and the returns (for `.step()` and `.reset()`) are as follows, you can use the environment directly without wrapping it:

```python
import gymnasium as gym

class CustomEnv(gym.Env):
    def __init__(self):
        self.observation_space = ...  # gym space
        self.action_space = ...  # gym space
        self.num_envs = ...  # int
        self.device = ...  # torch.device or str

    def step(self, action):
        ...
        # observation: tensor with shape (self.num_envs, OBSERVATION_SPACE_SIZE)
        # reward: tensor with shape (self.num_envs, 1)
        # terminated: tensor with shape (self.num_envs, 1)
        # truncated: tensor with shape (self.num_envs, 1)
        # info: dict
        return observation, reward, terminated, truncated, info

    def reset(self):
        ...
        # observation: tensor with shape (self.num_envs, OBSERVATION_SPACE_SIZE)
        # info: dict
        return observation, info
```