This repository currently contains only code for distributional reinforcement learning.
The implementations of C51, QR-DQN, and IQN are slightly modified from sungyubkim's code.
QUOTA is implemented based on the work of the algorithm's author, Shangtong Zhang.
Always up for a chat -- shoot me an email ([email protected]) if you'd like to discuss anything.
- pytorch(>=1.0.0)
- gym(=0.10.9)
- numpy
- matplotlib
Before running the code, create two subdirectories under the project root: ./data/model/ and ./data/plots/. These directories store the generated data (saved models and plots).
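The setup step above can be done in one command (paths taken directly from the text):

```shell
# Create the output directories the scripts expect;
# -p creates parent directories and is a no-op if they already exist.
mkdir -p ./data/model/ ./data/plots/
```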
Once your Python environment satisfies the dependencies above, you can run the code. For example, enter:

```
python 3_iqn.py Breakout
```

on the command line to run IQN in the Atari Breakout environment. Algorithm-specific parameters can be changed inside the code files.
After training, you can plot the results by running result_show.py with appropriate parameters.
- Human-level control through deep reinforcement learning (DQN) [Paper] [Code]
- A Distributional Perspective on Reinforcement Learning (C51) [Paper] [Code]
- Distributional Reinforcement Learning with Quantile Regression (QR-DQN) [Paper] [Code]
- Implicit Quantile Networks for Distributional Reinforcement Learning (IQN) [Paper] [Code]
- QUOTA: The Quantile Option Architecture for Reinforcement Learning (QUOTA) [Paper] [Code]