PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using PyTorch.
PFRL is tested with Python 3.5.1+. For other requirements, see requirements.txt.
PFRL can be installed via PyPI:
pip install pfrl
It can also be installed from the source code:
python setup.py install
or
pip install .
Refer to Installation for more information.
You can try the PFRL Quickstart Guide first, or check out the examples for Atari 2600 and OpenAI Gym.
For more information, you can refer to PFRL's documentation.
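As a taste of the API, here is a minimal sketch, loosely following the Quickstart Guide, that builds a DoubleDQN agent for CartPole and runs a short training loop. The module paths and arguments are written from the documentation as I recall it and may differ between PFRL versions, so treat this as an illustration rather than a copy-paste recipe.

```python
import gym
import numpy as np
import torch
from torch import nn

import pfrl

env = gym.make("CartPole-v0")
obs_size = env.observation_space.low.size
n_actions = env.action_space.n

# A simple fully connected Q-function with a discrete-action head.
q_func = nn.Sequential(
    nn.Linear(obs_size, 50),
    nn.ReLU(),
    nn.Linear(50, 50),
    nn.ReLU(),
    nn.Linear(50, n_actions),
    pfrl.q_functions.DiscreteActionValueHead(),
)

agent = pfrl.agents.DoubleDQN(
    q_func,
    torch.optim.Adam(q_func.parameters(), eps=1e-2),
    pfrl.replay_buffers.ReplayBuffer(capacity=10 ** 6),
    gamma=0.99,
    explorer=pfrl.explorers.ConstantEpsilonGreedy(
        epsilon=0.3, random_action_func=env.action_space.sample
    ),
    replay_start_size=500,
    target_update_interval=100,
    # Cast observations to float32 before feeding them to the network.
    phi=lambda x: x.astype(np.float32, copy=False),
    gpu=-1,  # -1 means CPU
)

# Minimal interaction loop: act() picks an action, observe() stores the
# transition and triggers learning updates.
for episode in range(50):
    obs = env.reset()
    done = False
    while not done:
        action = agent.act(obs)
        obs, reward, done, info = env.step(action)
        # reset should be True when the episode ends by truncation
        # (e.g. a time limit) rather than a terminal state.
        agent.observe(obs, reward, done, reset=False)
```

The table below summarizes which features each implemented algorithm supports.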
Algorithm | Discrete Action | Continuous Action | Recurrent Model | Batch Training | CPU Async Training |
---|---|---|---|---|---|
DQN (including DoubleDQN etc.) | ✓ | ✓ (NAF) | ✓ | ✓ | x |
Categorical DQN | ✓ | x | ✓ | ✓ | x |
Rainbow | ✓ | x | ✓ | ✓ | x |
IQN | ✓ | x | ✓ | ✓ | x |
DDPG | x | ✓ | ✓ | ✓ | x |
A3C | ✓ | ✓ | ✓ | ✓ (A2C) | ✓ |
ACER | ✓ | ✓ | ✓ | x | ✓ |
PPO | ✓ | ✓ | ✓ | ✓ | x |
TRPO | ✓ | ✓ | ✓ | ✓ | x |
TD3 | x | ✓ | x | ✓ | x |
SAC | x | ✓ | x | ✓ | x |
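Batch Training refers to training against a batch of vectorized environments stepped in lockstep. A rough sketch of what that looks like is shown below; MultiprocessVectorEnv and train_agent_batch_with_evaluation are the helpers PFRL provides for this, but check the documentation for exact signatures and defaults.

```python
import functools

import gym

import pfrl


def make_env(idx):
    # Each subprocess builds its own copy of the environment.
    return gym.make("CartPole-v0")


# Run several environment copies in subprocesses and step them together.
vec_env = pfrl.envs.MultiprocessVectorEnv(
    [functools.partial(make_env, idx) for idx in range(8)]
)

obs_batch = vec_env.reset()  # one observation per environment copy
actions = [vec_env.action_space.sample() for _ in range(vec_env.num_envs)]
obs_batch, rewards, dones, infos = vec_env.step(actions)

# A batch-capable agent (e.g. the DoubleDQN built in the sketch above) can
# then be trained against vec_env with
# pfrl.experiments.train_agent_batch_with_evaluation.
```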
The following algorithms have been implemented in PFRL:
- A2C (Synchronous variant of A3C)
- examples: [atari (batched)]
- A3C (Asynchronous Advantage Actor-Critic)
- examples: [atari reproduction] [atari]
- ACER (Actor-Critic with Experience Replay)
- examples: [atari]
- Categorical DQN
- examples: [atari] [general gym]
- DQN (Deep Q-Network) (including Double DQN, Persistent Advantage Learning (PAL), Double PAL, Dynamic Policy Programming (DPP))
- DDPG (Deep Deterministic Policy Gradients) (including SVG(0))
- examples: [mujoco reproduction]
- IQN (Implicit Quantile Networks)
- examples: [atari reproduction]
- PPO (Proximal Policy Optimization)
- examples: [mujoco reproduction] [atari]
- Rainbow
- examples: [atari reproduction]
- REINFORCE
- examples: [general gym]
- SAC (Soft Actor-Critic)
- examples: [mujoco reproduction]
- TRPO (Trust Region Policy Optimization) with GAE (Generalized Advantage Estimation)
- examples: [mujoco reproduction]
- TD3 (Twin Delayed Deep Deterministic policy gradient algorithm)
- examples: [mujoco reproduction]
- HIRO (Data-Efficient Hierarchical Reinforcement Learning with Off-Policy Correction)
- examples: [ant env in Mujoco]
The following useful techniques have also been implemented in PFRL. Several of them are enabled simply by swapping a component when the agent is constructed, as shown in the sketch after this list:
- NoisyNet
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Prioritized Experience Replay
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Dueling Network
- examples: [Rainbow] [DQN/DoubleDQN/PAL]
- Normalized Advantage Function
- examples: [DQN] (for continuous-action envs only)
- Deep Recurrent Q-Network
- examples: [DQN]
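The sketch below shows prioritized experience replay and NoisyNet as examples; the class and function names follow pfrl.replay_buffers, pfrl.q_functions, and pfrl.nn, but exact arguments should be checked against the documentation.

```python
import torch.nn as nn

import pfrl

# Prioritized experience replay: pass PrioritizedReplayBuffer instead of the
# uniform ReplayBuffer when constructing a DQN-family agent.
replay_buffer = pfrl.replay_buffers.PrioritizedReplayBuffer(
    capacity=10 ** 6,
    alpha=0.6,   # how strongly TD-error priorities bias sampling
    beta0=0.4,   # initial strength of the importance-sampling correction
)

# NoisyNet: replace the Linear layers of an existing Q-function with
# factorized noisy layers, so exploration comes from parameter noise.
q_func = nn.Sequential(
    nn.Linear(4, 50),
    nn.ReLU(),
    nn.Linear(50, 2),
    pfrl.q_functions.DiscreteActionValueHead(),
)
pfrl.nn.to_factorized_noisy(q_func, sigma_scale=0.5)
```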
Environments that support the subset of OpenAI Gym's interface (the reset and step methods) can be used.
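In other words, a full Gym environment is not required: a minimal custom environment only has to return an observation from reset and an (observation, reward, done, info) tuple from step. The toy environment below is a hypothetical illustration, not part of PFRL:

```python
import numpy as np


class CountDownEnv:
    """Toy environment exposing only the Gym-style reset/step interface.

    The state starts at `start`; action 0 decrements it, action 1 does
    nothing, and a reward of 1 is given when the counter reaches 0.
    """

    def __init__(self, start=10):
        self.start = start
        self.state = start

    def reset(self):
        self.state = self.start
        return np.array([self.state], dtype=np.float32)

    def step(self, action):
        if action == 0:
            self.state -= 1
        done = self.state <= 0
        reward = 1.0 if done else 0.0
        return np.array([self.state], dtype=np.float32), reward, done, {}


env = CountDownEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
```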
Any kind of contribution to PFRL would be highly appreciated! If you are interested in contributing to PFRL, please read CONTRIBUTING.md.
To cite PFRL in publications, please cite our paper on ChainerRL, the predecessor library on which PFRL is based:
@InProceedings{fujita2019chainerrl,
  author    = {Fujita, Yasuhiro and Kataoka, Toshiki and Nagarajan, Prabhat and Ishikawa, Takahiro},
  title     = {ChainerRL: A Deep Reinforcement Learning Library},
  booktitle = {Workshop on Deep Reinforcement Learning at the 33rd Conference on Neural Information Processing Systems},
  location  = {Vancouver, Canada},
  month     = {December},
  year      = {2019}
}