The Effect of Different Reinforcement Learning Algorithms on the Performance of AI for Lunar Lander
TITLE
The Effect of Different Reinforcement Learning Algorithms on the Performance of AI for Survival Game in MineCraft.
GAME
The game is found in mob_fun.py, a demo of the mob_spawner block: it creates an arena, lines it with mob spawners of a given type, and then tries to keep an agent alive.
Mobs will continue to spawn from the spawners.
The agent loses health when it is hit by a mob; if its health drops to 0, the game ends. Apples are distributed randomly in the arena, and the agent scores points by eating them. The goal of our agent is to survive and score as many points as possible. We may change some of the game's rules to better suit our experiments.
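The scoring rules above can be illustrated with a toy stand-in. This is not mob_fun.py (the real environment is driven by Minecraft); `ToySurvivalEnv` and its hit/apple probabilities are hypothetical, and only mirror the rules described: lose health when hit, score one point per apple eaten, end when health reaches 0.

```python
import random

class ToySurvivalEnv:
    """Hypothetical, simplified stand-in for the mob_fun.py survival game."""

    def __init__(self, apples=5, health=20, seed=0):
        self.rng = random.Random(seed)  # fixed seed for reproducible trials
        self.apples = apples
        self.health = health
        self.score = 0

    def step(self):
        # Each tick the agent may be hit by a mob or find an apple
        # (the 0.3 / 0.2 probabilities are illustrative assumptions).
        if self.rng.random() < 0.3:
            self.health -= 1      # hit by a mob
        if self.apples > 0 and self.rng.random() < 0.2:
            self.apples -= 1
            self.score += 1       # ate an apple: +1 point
        done = self.health <= 0   # game ends when health is exhausted
        return self.score, done

# Run one episode: survive as long as possible, eat apples along the way.
env = ToySurvivalEnv()
done = False
while not done:
    score, done = env.step()
print(score)
```

An RL agent would additionally observe the arena state and choose actions each tick; here the episode simply plays out to show how score and health interact.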
HYPOTHESIS
Using Double DQN, Prioritized Replay, and Dueling DQN can significantly improve scores and shorten agent training time compared with natural DQN.
If we combine a value-based reinforcement learning algorithm with a policy-based reinforcement learning algorithm for our AI agent, it can achieve higher scores in less time than either algorithm alone.
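To make the first hypothesis concrete, here is a minimal sketch (not the project's code) of how Double DQN changes the bootstrap target relative to natural DQN: natural DQN both selects and evaluates the next action with the target network, which tends to overestimate values, while Double DQN selects the action with the online network and evaluates it with the target network. The Q-value lists below are made-up numbers for illustration.

```python
def dqn_target(reward, q_target_next, gamma=0.99):
    # Natural DQN: max over the target network's own next-state estimates.
    return reward + gamma * max(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: online network picks the action, target network scores it.
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

# Example: the online net prefers action 1, whose target-net value (2.0) is
# lower than the target net's own maximum (2.5), so the Double DQN target
# is less inflated.
q_online = [1.0, 3.0]
q_target = [2.5, 2.0]
print(dqn_target(1.0, q_target))                   # 1 + 0.99 * 2.5 = 3.475
print(double_dqn_target(1.0, q_online, q_target))  # 1 + 0.99 * 2.0 = 2.98
```

Prioritized Replay and Dueling DQN modify other parts of the pipeline (sampling of transitions and the network architecture, respectively), so the three improvements can be combined with either target above.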
INDEPENDENT VARIABLE
Reinforcement learning algorithms used to train the agent
LEVELS OF INDEPENDENT VARIABLE AND NUMBERS OF REPEATED TRIALS
Simple Rules (Control): 3 trials
DQN: 3 trials
Double DQN: 3 trials
Prioritized Replay: 3 trials
Dueling DQN: 3 trials
Policy Gradient: 3 trials
Actor-Critic: 3 trials
DEPENDENT VARIABLE AND HOW MEASURED
The score the agent achieves by the end of the game, measured as the number of apples eaten.
Agent training time, measured in minutes.
CONSTANTS
All agents are trained in arenas of the same size
All agents are trained in games with the same rules and scoring conditions
Game states are fully observable for all agents
All agents are trained and compared using the same computing resources
Reference
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
Osband, I., Blundell, C., Pritzel, A., & Van Roy, B. (2016). Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems (pp. 4026-4034).
Ontañón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., & Preuss, M. (2013). A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4), 293-311.
Sutton, R. S., McAllester, D. A., Singh, S. P., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (pp. 1057-1063).