A list of papers and resources dedicated to deep reinforcement learning.
Please note that this list is currently a work in progress and far from complete.
- Add more papers
- Improve the way of classifying papers (tags may be useful)
- Create a policy for this list: curated or comprehensive, how to define "deep reinforcement learning", etc.
If you want to inform the maintainer of a new paper, feel free to contact @mooopan. Issues and PRs are also welcome.
- Deep Value Function
- Deep Policy
- Deep Actor-Critic
- Deep Model
- Application to Non-RL Tasks
- Unclassified
- S. Lange and M. Riedmiller, Deep Learning of Visual Control Policies, ESANN, 2010. pdf
- Deep Fitted Q-Iteration (DFQ)
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, Playing Atari with Deep Reinforcement Learning, NIPS 2013 Deep Learning Workshop, 2013. pdf
- Deep Q-Network (DQN) with experience replay
- V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, Human-level control through deep reinforcement learning, Nature, 2015. pdf code
- Deep Q-Network (DQN) with experience replay and target network
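
  For reference, a minimal NumPy sketch of the one-step bootstrapped target these DQN papers regress onto from replayed transitions; `q_target` and the toy inputs are illustrative stand-ins for the periodically copied target network, not the papers' code:

  ```python
  import numpy as np

  def dqn_targets(rewards, next_states, dones, q_target, gamma=0.99):
      """One-step targets r + gamma * max_a Q_target(s', a), cut off at episode ends."""
      next_q = np.array([q_target(s) for s in next_states])   # (batch, n_actions)
      return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)

  # Toy usage: a random stand-in for the target network over 4 actions.
  rng = np.random.default_rng(0)
  print(dqn_targets(rewards=np.array([1.0, 0.0]),
                    next_states=[np.zeros(4), np.ones(4)],
                    dones=np.array([0.0, 1.0]),
                    q_target=lambda s: rng.normal(size=4)))
  ```
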
- T. Schaul, D. Horgan, K. Gregor, and D. Silver, Universal Value Function Approximators, ICML, 2015. pdf
- A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, A. De Maria, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, and D. Silver, Massively Parallel Methods for Deep Reinforcement Learning, ICML Deep Learning Workshop, 2015. pdf
- Gorila (General Reinforcement Learning Architecture)
- K. Narasimhan, T. Kulkarni, and R. Barzilay, Language Understanding for Text-based Games Using Deep Reinforcement Learning, EMNLP, 2015. pdf supplementary code
- LSTM-DQN
- M. Hausknecht and P. Stone, Deep Recurrent Q-Learning for Partially Observable MDPs, arXiv, 2015. arXiv code
- M. Lai, Giraffe: Using Deep Reinforcement Learning to Play Chess, arXiv, 2015. arXiv code
- H. van Hasselt, A. Guez, and D. Silver, Deep Reinforcement Learning with Double Q-learning, arXiv, 2015. arXiv
- Double DQN
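
  A small sketch of the Double DQN target described in this paper, which selects the greedy action with the online network and evaluates it with the target network; `q_online` and `q_target` are hypothetical stand-ins, not the authors' implementation:

  ```python
  import numpy as np

  def double_dqn_targets(rewards, next_states, dones, q_online, q_target, gamma=0.99):
      """Decouple action selection (online net) from action evaluation (target net)."""
      next_online = np.array([q_online(s) for s in next_states])   # (batch, n_actions)
      next_target = np.array([q_target(s) for s in next_states])
      a_star = next_online.argmax(axis=1)
      bootstrap = next_target[np.arange(len(a_star)), a_star]
      return rewards + gamma * (1.0 - dones) * bootstrap

  # Toy usage with random stand-ins for both networks (3 actions).
  rng = np.random.default_rng(1)
  print(double_dqn_targets(np.array([0.0, 1.0]), [np.zeros(2), np.zeros(2)], np.array([0.0, 0.0]),
                           q_online=lambda s: rng.normal(size=3),
                           q_target=lambda s: rng.normal(size=3)))
  ```
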
- F. Zhang, J. Leitner, M. Milford, B. Upcroft, and P. Corke, Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control, ACRA, 2015. pdf
- T. Schaul, J. Quan, I. Antonoglou, and D. Silver, Prioritized Experience Replay, arXiv, 2015. arXiv
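
  A rough sketch of the proportional prioritization scheme from this paper, written as an O(N) NumPy version rather than the sum-tree the authors use for efficiency; `sample_prioritized` and its default hyperparameters are illustrative:

  ```python
  import numpy as np

  def sample_prioritized(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6, seed=None):
      """Sample indices with probability proportional to |TD error|^alpha and return
      the normalized importance-sampling weights (N * P(i))^(-beta)."""
      rng = np.random.default_rng(seed)
      priorities = (np.abs(td_errors) + eps) ** alpha
      probs = priorities / priorities.sum()
      idx = rng.choice(len(td_errors), size=batch_size, p=probs)
      weights = (len(td_errors) * probs[idx]) ** (-beta)
      return idx, weights / weights.max()

  idx, w = sample_prioritized(np.array([0.1, 2.0, 0.5, 0.05]), batch_size=3, seed=0)
  print(idx, w)
  ```
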
- Z. Wang, N. de Freitas, and M. Lanctot, Dueling Network Architectures for Deep Reinforcement Learning, arXiv, 2015. arXiv
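
  A minimal sketch of the dueling aggregation step, Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), applied to toy value and advantage streams; the convolutional network producing those streams is omitted:

  ```python
  import numpy as np

  def dueling_aggregate(value, advantages):
      """Combine the value and advantage streams; subtracting the mean advantage
      keeps the decomposition identifiable."""
      return value + advantages - advantages.mean(axis=-1, keepdims=True)

  # A batch of 2 states with 3 actions each.
  print(dueling_aggregate(np.array([[1.0], [0.5]]),
                          np.array([[0.2, -0.1, 0.5], [0.0, 0.3, -0.3]])))
  ```
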
- V. François-Lavet, R. Fonteneau, and D. Ernst, How to Discount Deep Reinforcement Learning: Towards New Dynamic Strategies, NIPS Deep Reinforcement Learning Workshop, 2015. arXiv
- I. Sorokin, A. Seleznev, M. Pavlov, A. Fedorov, and A. Ignateva, Deep Attention Recurrent Q-Network, NIPS Deep Reinforcement Learning Workshop, 2015. arXiv
- A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell, Policy Distillation, arXiv, 2015. arXiv
- M. G. Bellemare, G. Ostrovski, A. Guez, P. S. Thomas, and R. Munos, Increasing the Action Gap: New Operators for Reinforcement Learning, AAAI, 2016. arXiv
- D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, Mastering the game of Go with deep neural networks and tree search, Nature, 2016. pdf
- T. Zahavy, N. Ben Zrihem, and S. Mannor, Graying the black box: Understanding DQNs, arXiv, 2016. arXiv
- J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson, Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks, arXiv, 2016. arXiv
- I. Osband, C. Blundell, A. Pritzel, and B. Van Roy, Deep Exploration via Bootstrapped DQN, arXiv, 2016. arXiv
- T. Salimans and D. P. Kingma, Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, arXiv, 2016. arXiv
- S. Gu, T. Lillicrap, I. Sutskever, and S. Levine, Continuous Deep Q-Learning with Model-based Acceleration, arXiv, 2016. arXiv
- J. Heinrich and D. Silver, Deep Reinforcement Learning from Self-Play in Imperfect-Information Games, arXiv, 2016. arXiv
- S. Levine, C. Finn, T. Darrell, and P. Abbeel, End-to-End Training of Deep Visuomotor Policies, arXiv, 2015. arXiv
- partially observed guided policy search
- J. Schulman, S. Levine, P. Moritz, M. Jordan, and P. Abbeel, Trust Region Policy Optimization, ICML, 2015. pdf
- J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel, High-Dimensional Continuous Control Using Generalized Advantage Estimation, arXiv, 2015. arXiv
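
  A minimal sketch of generalized advantage estimation as defined in this paper, assuming a single trajectory segment with one bootstrap value appended to `values` (use 0 for a terminal state); the helper name and defaults are illustrative:

  ```python
  import numpy as np

  def gae(rewards, values, gamma=0.99, lam=0.95):
      """Generalized advantage estimates for one trajectory. `values` holds one extra
      entry: the value of the state reached after the last reward."""
      deltas = rewards + gamma * values[1:] - values[:-1]
      advantages = np.zeros_like(deltas)
      running = 0.0
      for t in reversed(range(len(deltas))):
          running = deltas[t] + gamma * lam * running
          advantages[t] = running
      return advantages

  print(gae(np.array([1.0, 0.0, 1.0]), np.array([0.5, 0.4, 0.6, 0.0])))
  ```
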
- T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, Continuous control with deep reinforcement learning, arXiv, 2015. arXiv
- D. Balduzzi and M. Ghifary, Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies, arXiv, 2015. arXiv
- M. Hausknecht and P. Stone, Deep Reinforcement Learning in Parameterized Action Space, arXiv, 2015. arXiv
- N. Heess, J. J. Hunt, T. P. Lillicrap, and D. Silver, Memory-based control with recurrent neural networks, NIPS Deep Reinforcement Learning Workshop, 2015. arXiv
- V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, Asynchronous Methods for Deep Reinforcement Learning, arXiv, 2016. arXiv
- B. C. Stadie, S. Levine, and P. Abbeel, Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models, arXiv, 2015. arXiv
- J. Oh, X. Guo, H. Lee, R. Lewis, and S. Singh, Action-Conditional Video Prediction using Deep Networks in Atari Games, NIPS, 2015. arXiv
- J. M. Assael, N. Wahlström, T. B. Schön, and M. P. Deisenroth, Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models, arXiv, 2015. arXiv
- N. Heess, G. Wayne, D. Silver, T. Lillicrap, Y. Tassa, and T. Erez, Learning Continuous Control Policies by Stochastic Value Gradients, NIPS, 2015. arXiv video
- J. Schmidhuber, On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models, arXiv, 2015. arXiv
- K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik, Learning Visual Predictive Models of Physics for Playing Billiards, ICLR, 2016. arXiv
- J. C. Caicedo and S. Lazebnik, Active Object Localization with Deep Reinforcement Learning, ICCV, 2015. pdf
- H. Guo, Generating Text with Deep Reinforcement Learning, arXiv, 2015. arXiv
- S. Hansen, Using Deep Q-Learning to Control Optimization Hyperparameters, arXiv, 2016. arXiv
- X. Guo, S. Singh, H. Lee, R. Lewis, and X. Wang, Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning, NIPS, 2014. pdf video
- S. Mohamed and D. J. Rezende, Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning, arXiv, 2015. arXiv
- Y. Liang, M. C. Machado, E. Talvitie, and M. Bowling, State of the Art Control of Atari Games Using Shallow Reinforcement Learning, arXiv, 2015. arXiv
- A. Tamar, S. Levine, and P. Abbeel, Value Iteration Networks, arXiv, 2016. arXiv