Official implementation of the paper "PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios" (CVPR 2024), adapted for the Unitree H1 robot.
[March 8, 2024] Training code released. This project is still under development.
- [ ] Release the inference environment and our web interface.
- [ ] Release the training code for the CNN model.
- [x] Release our checkpoints.
- [x] Release the modified version for the Unitree H1 robot in the unitree_h1 branch.
- [x] Release an example city environment.
- [x] Release the training code for the basic model.
To create the environment, follow these instructions:
- Create a new conda environment and install PyTorch:
```
conda create -n isaac python=3.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirement.txt
```
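As an optional sanity check (not part of the original instructions), you can confirm that the new environment sees a CUDA-enabled PyTorch build before continuing:

```python
# Optional sanity check: confirm the conda environment has a CUDA-enabled PyTorch build.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
```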
- Download and set up Isaac Gym.
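A minimal, optional check that Isaac Gym is importable (this is not part of the original instructions and assumes a standard Isaac Gym Preview installation). Note that `isaacgym` generally has to be imported before `torch` in the same Python process:

```python
# Optional check that Isaac Gym is installed correctly.
# isaacgym must be imported before torch in the same Python process.
from isaacgym import gymapi
import torch  # noqa: F401  (imported after isaacgym on purpose)

gym = gymapi.acquire_gym()
print("Isaac Gym acquired:", gym is not None)
```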
- Follow PHC to download the SMPL parameters (please use the v1.1.0 version, which contains the neutral humanoid) and unzip them into the `data/smpl` folder. Rename the files `basicmodel_neutral_lbs_10_207_0_v1.1.0`, `basicmodel_m_lbs_10_207_0_v1.1.0.pkl`, and `basicmodel_f_lbs_10_207_0_v1.1.0.pkl` to `SMPL_NEUTRAL.pkl`, `SMPL_MALE.pkl`, and `SMPL_FEMALE.pkl`. The file structure should look like this:
```
|-- data
    |-- smpl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_NEUTRAL.pkl
        |-- SMPL_MALE.pkl
```
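To make sure the renaming went through, a small hypothetical helper (the paths mirror the tree above) could look like:

```python
# Hypothetical helper: verify the renamed SMPL model files are in place.
from pathlib import Path

smpl_dir = Path("data/smpl")
for name in ["SMPL_NEUTRAL.pkl", "SMPL_MALE.pkl", "SMPL_FEMALE.pkl"]:
    path = smpl_dir / name
    status = "found" if path.is_file() else "MISSING"
    print(f"{path}: {status}")
```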
- Prepare data for training. You can follow PHC to obtain the full AMASS dataset for the imitation task and PACER to obtain the selected motions for AMP. The file structure should look like this:
```
|-- data
    |-- amass
        |-- pkls
            |-- amass_isaac_gender_betas_unique.pkl
            |-- amass_isaac_run_upright_slim.pkl
            |-- amass_isaac_im_train_take6_upright_slim.pkl
```
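A hypothetical inspection sketch (not part of the original instructions) to confirm the processed files load, assuming they are plain pickle/joblib dumps as produced by the PHC/PACER preprocessing scripts and that joblib is available:

```python
# Hypothetical inspection: confirm the processed AMASS files load.
# Assumes the .pkl files are standard joblib/pickle dumps; the exact
# content layout depends on the PHC / PACER preprocessing.
import joblib

for name in [
    "amass_isaac_gender_betas_unique.pkl",
    "amass_isaac_run_upright_slim.pkl",
    "amass_isaac_im_train_take6_upright_slim.pkl",
]:
    data = joblib.load(f"data/amass/pkls/{name}")
    size = len(data) if hasattr(data, "__len__") else "n/a"
    print(name, type(data).__name__, size)
```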
- City Environment: We provide an example virtual city environment, built on top of Matrix City. You can download the environment from here.
- Checkpoint: The checkpoint for the MLP policy is available on Google Drive. Download the checkpoint and put it in the `output` folder. The file structure should look like this:
```
|-- output
    |-- mlp_slim_exp_v2
        |-- Humanoid.pth
```
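Optionally, you can peek at the downloaded checkpoint to confirm it loads; this is a hypothetical check that assumes the file is a standard `torch.save` archive:

```python
# Hypothetical check: confirm the checkpoint loads as a standard torch.save archive.
import torch

ckpt = torch.load("output/mlp_slim_exp_v2/Humanoid.pth", map_location="cpu")
print(type(ckpt).__name__)
if isinstance(ckpt, dict):
    print("Top-level keys:", list(ckpt.keys()))
```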
We train our policy on a single A100 GPU. We do not use Residual Force Control (RFC) or residual PD for this work. We have tested our code on A100, RTX 3090, and RTX 4090 GPUs; training takes roughly 3 days in our experiments. To train the policy, run:
```
export OMP_NUM_THREADS=1
python -u amp/run.py --task HumanoidPedestrianTerrainIm --cfg_env amp/data/cfg/release/mlp_slim_exp_v2.yaml --network_path output/mlp_slim_exp_v2 --headless --num_envs 2048 --num_threads 4
```
You can run inference with this policy using the following command:
```
export OMP_NUM_THREADS=1
python -u amp/run.py --task HumanoidPedestrianTerrainIm --cfg_env amp/data/cfg/release/mlp_slim_exp_v2.yaml --network_path output/mlp_slim_exp_v2 --rl_device cuda:0 --num_envs 1 --test --follow --epoch -1
```
If you find this work useful for your research, please cite our paper:
```bibtex
@inproceedings{Wang2024PacerPlus,
  author    = {Jingbo Wang and Zhengyi Luo and Ye Yuan and Yixuan Li and Bo Dai},
  title     = {PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```
Also consider citing these prior works that are used in this project:
```bibtex
@inproceedings{Luo2023PerpetualHC,
  author    = {Zhengyi Luo and Jinkun Cao and Alexander W. Winkler and Kris Kitani and Weipeng Xu},
  title     = {Perpetual Humanoid Control for Real-time Simulated Avatars},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2023}
}

@inproceedings{rempeluo2023tracepace,
  author    = {Rempe, Davis and Luo, Zhengyi and Peng, Xue Bin and Yuan, Ye and Kitani, Kris and Kreis, Karsten and Fidler, Sanja and Litany, Or},
  title     = {Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}

@inproceedings{wang2023learning,
  title     = {Learning Human Dynamics in Autonomous Driving Scenarios},
  author    = {Wang, Jingbo and Yuan, Ye and Luo, Zhengyi and Xie, Kevin and Lin, Dahua and Iqbal, Umar and Fidler, Sanja and Khamis, Sameh},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year      = {2023}
}

@inproceedings{Luo2022EmbodiedSH,
  title     = {Embodied Scene-aware Human Pose Estimation},
  author    = {Zhengyi Luo and Shun Iwase and Ye Yuan and Kris Kitani},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2022}
}

@inproceedings{Luo2021DynamicsRegulatedKP,
  title     = {Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation},
  author    = {Zhengyi Luo and Ryo Hachiuma and Ye Yuan and Kris Kitani},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2021}
}
```
This repository is built on top of the following amazing repositories:
- The main code framework is from: IsaacGymEnvs
- Part of the SMPL_robot code is from: UHC
- The SMPL models and layer are from: SMPL-X model
- Code for imitation learning is from: PHC
- Code for pedestrian simulation is from: PACER
- Code for motion tracking on the Waymo Open Dataset is from: LDA
- Dataset tools for the virtual city environment are from: Matrix City
Please follow the licenses of the above repositories for usage.