## Description

A comparison of DQN (Deep Q-Network) and its variants.
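
To make the word "variants" concrete, the snippet below sketches how the bootstrap target differs between vanilla DQN and Double DQN, one of the variants typically included in such comparisons. It is an illustration only and is not taken from this repository's code.

```python
import torch

def dqn_target(reward, next_q_target, gamma, done):
    # Vanilla DQN: the target network both selects and evaluates the next action.
    max_next_q = next_q_target.max(dim=1).values
    return reward + gamma * (1.0 - done) * max_next_q

def double_dqn_target(reward, next_q_online, next_q_target, gamma, done):
    # Double DQN: the online network selects the action,
    # the target network evaluates it, which reduces overestimation bias.
    best_action = next_q_online.argmax(dim=1, keepdim=True)
    chosen_q = next_q_target.gather(1, best_action).squeeze(1)
    return reward + gamma * (1.0 - done) * chosen_q
```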

## Usage

1. Specify the hyperparameters in `config.py` (a sketch of typical fields follows this list).
2. Run `./run.sh $device $algorithm` to train with three random seeds.
3. Visualize the results with `python plot.py *log*`.
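
Step 1 points at `config.py`, whose exact contents are not shown in this README. The following is only a minimal sketch of the hyperparameters such a file commonly holds; every name and value here is an assumption, not the repository's actual settings.

```python
# Hypothetical config.py sketch -- field names and values are assumptions.
env_name = "BreakoutNoFrameskip-v4"     # Atari environment to train on (assumed)
algorithm = "dqn"                        # which DQN variant to run (assumed)
lr = 1e-4                                # optimizer learning rate
gamma = 0.99                             # discount factor
buffer_size = 100_000                    # replay buffer capacity
batch_size = 32                          # minibatch size sampled from the buffer
target_update_freq = 1_000               # steps between target-network syncs
epsilon_start, epsilon_end = 1.0, 0.05   # epsilon-greedy exploration schedule
total_steps = 1_000_000                  # environment steps per seed
```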

## Result on Breakout

## Cite

```bibtex
@book{deepRL-2020,
  title={Deep Reinforcement Learning: Fundamentals, Research, and Applications},
  editor={Hao Dong, Zihan Ding, Shanghang Zhang},
  author={Hao Dong, Zihan Ding, Shanghang Zhang, Hang Yuan, Hongming Zhang, Jingqing Zhang, Yanhua Huang, Tianyang Yu, Huaqing Zhang, Ruitong Huang},
  publisher={Springer Nature},
  note={\url{http://www.deepreinforcementlearningbook.org}},
  year={2020}
}
```