This is the implementation for the paper "Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation" (ICML 2019). This repo is part of the software offered by the Personal Robotics Lab @ Imperial.
RED leverages the Trust Region Policy Optimization (TRPO) implementation from OpenAI's baselines. Please refer to the baselines repo for installation prerequisites and instructions.
We provide implementations of three models in the rnd_gail/ folder. They correspond to the command-line argument --reward=0, 1, and 2:
- Random Expert Distillation (RED): reward function from expert support estimation with random prediction problems.
- AutoEncoder (AE): reward function from expert support estimation with autoencoder prediction.
- Generative Moment Matching Imitation Learning (GMMIL): benchmark method compared against in this work.
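As a rough illustration of the RED reward (a sketch, not the repo's actual code), the snippet below fits a simple predictor to a fixed random network on expert data and scores new state-action pairs by prediction error: inputs on the expert support get rewards near 1, while off-support inputs score lower. All dimensions, the linear least-squares predictor, and the sigma value are hypothetical stand-ins; the paper trains neural networks by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_act_dim, feat_dim, sigma = 8, 4, 1.0  # hypothetical sizes for illustration

# Frozen random target network f_hat (a random tanh layer here).
W_target = rng.normal(scale=1.0 / np.sqrt(obs_act_dim), size=(obs_act_dim, feat_dim))
target = lambda x: np.tanh(x @ W_target)

# Expert state-action pairs (random placeholders for real demonstrations).
expert = rng.normal(size=(256, obs_act_dim))

# Fit a linear predictor f to the target's outputs on expert data only.
W_pred, *_ = np.linalg.lstsq(expert, target(expert), rcond=None)
predictor = lambda x: x @ W_pred

def red_reward(x):
    """r(s, a) = exp(-sigma * ||f(x) - f_hat(x)||^2); near 1 on the expert support."""
    err = np.sum((predictor(x) - target(x)) ** 2, axis=-1)
    return np.exp(-sigma * err)

# On expert-like inputs the predictor matches the target well, so rewards are
# high; far off-support the fit degrades and rewards shrink toward zero.
print(red_reward(expert[:3]))        # expert-like inputs
print(red_reward(expert[:3] + 5.0))  # shifted, off-support inputs
```

The AE variant replaces the random-network prediction problem with an autoencoder reconstruction error, but the reward shape is analogous.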
To train a model:
$ python rnd_gail/mujoco_main.py --env_id=<environment_id> --reward=<reward_model> [additional arguments]
We have provided a working configuration of hyperparameters in rnd_gail/mujoco_main.py for MuJoCo tasks. To override them from the command line, please disable the defaults in the script first.
For instance, to train MuJoCo Hopper using RED for 2M timesteps:
$ python rnd_gail/mujoco_main.py --env_id=Hopper-v2 --reward=0 --num_timesteps=2e6
Models are saved at <user_home>/workspace/checkpoint/mujoco/.
To run a saved model:
$ python rnd_gail/run_expert.py --env_id=<environment_id> --pi=<model_filename>
To cite this work, please use:
@inproceedings{wang2019random,
author = {Wang, Ruohan and Ciliberto, Carlo and Amadori, Pierluigi and Demiris, Yiannis},
title = {Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation},
year = {2019},
booktitle = {Proceedings of the International Conference on Machine Learning},
organization = {ACM},
}