How to render observations generated from a learned transition model #356
-
I am learning a transition model that predicts future observations given actions in a model-based RL setting. I am wondering if it's possible to render these predicted observations? NB I am only learning to predict the observations, not the full `pipeline_state`. In OpenAI Gym MuJoCo environments, it's possible to set the environment state by calling `env.set_state(qpos, qvel)` (and this new state can subsequently be rendered). Is there a similar way to set the state (using observations) in Brax? From what I can tell, Brax uses `state.pipeline_state.x.pos` and `state.pipeline_state.x.rot` to render images, so there are two parts to my question:
Replies: 1 comment 2 replies
-
You should try

```python
x, _ = kinematics.forward(sys, qpos, qvel)
```

`qvel` is not really required, as you don't care for the returned `Motion` vector. You must use

```python
state.pipeline_state._replace(x=x)
```
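To clarify the `_replace` step above: it does not mutate the state in place, it returns a *new* state object with the given field swapped, which you then pass to the renderer. Below is a minimal, self-contained sketch of that pattern using a plain `namedtuple` as a stand-in for Brax's pipeline state (the field names here are illustrative, not Brax's real schema):

```python
from collections import namedtuple

# Stand-in for a Brax-style immutable pipeline state.
# Real Brax states carry Transform/Motion pytrees, but the
# replace-and-return-new-object pattern is the same.
PipelineState = namedtuple("PipelineState", ["x", "xd"])

old_state = PipelineState(x="old_transform", xd="old_motion")

# Pretend this came from kinematics.forward(sys, qpos, qvel).
predicted_x = "predicted_transform"

# _replace returns a NEW state with x swapped; old_state is untouched.
new_state = old_state._replace(x=predicted_x)

print(new_state.x)   # the predicted transform, ready to render
print(old_state.x)   # original state is unchanged
```

The key design point is immutability: because JAX-based libraries like Brax keep states as immutable pytrees, "setting the state" always means constructing a new state from the old one, never assigning into it.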