Dual-clip Proximal Policy Optimization (PPO)

An implementation of a baseline PPO, benchmarked against dual-clip PPO [1], an extension that additionally clips the policy objective when advantages are negative, bounding how far a single update can move the policy on those samples.
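
For reference, here is a minimal sketch of the dual-clip policy loss in PyTorch, assuming a PyTorch-based implementation; the function and parameter names (`dual_clip_ppo_loss`, `clip_eps`, `dual_clip_c`) are illustrative and not taken from this repository:

```python
import torch


def dual_clip_ppo_loss(log_probs, old_log_probs, advantages,
                       clip_eps=0.2, dual_clip_c=3.0):
    """Dual-clip PPO policy loss in the spirit of Ye et al. (2020).

    For positive advantages this reduces to the standard PPO clipped
    surrogate; for negative advantages the surrogate is additionally
    bounded below by dual_clip_c * advantage (dual_clip_c > 1).
    """
    ratio = torch.exp(log_probs - old_log_probs)
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    clipped = torch.min(surr1, surr2)                    # standard PPO surrogate
    dual_clipped = torch.max(clipped, dual_clip_c * advantages)
    # Apply the extra lower bound only where the advantage is negative.
    surrogate = torch.where(advantages < 0, dual_clipped, clipped)
    return -surrogate.mean()  # negate: optimizers minimize, PPO maximizes
```

With a sufficiently large `dual_clip_c` the extra bound never binds and this reduces to the standard PPO clipped objective, which is what makes the two variants directly comparable in a benchmark.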

References

[1]: Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. "Mastering Complex Control in MOBA Games with Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34 (2020), pp. 6672-6679. Also available on arXiv.
