Creating and Training an AI to Play No Limit Holdem Heads-Up Poker through Deep Reinforcement Self-Play Learning with Tensorflow/Keras
I am building the poker game from scratch as an environment in which an agent can later train. I am also building a user interface so that my friend (and I) can play against the AI. The model itself is a neural network built with TensorFlow/Keras. It learns through deep reinforcement learning by playing against itself. I start with a deterministic approach (deep Q-learning). Since the poker environment is probabilistic, I will later try to implement a stochastic approach; state-of-the-art poker theory suggests that probabilistic (mixed) strategies are superior to deterministic ones. I also plan to host a video of my friend playing against the AI.

This folder contains all the Python files necessary to run the machine-learning code, along with a few Jupyter notebooks that are hopefully self-explanatory. The actual deep-learning process can be found in the notebook "PokerAI10BB.ipynb". The trained models folder contains the current trained neural network. The files necessary to run the poker game are:
- Agent2: Class that interacts with the poker environment
- PokerGame: Poker Environment to train on
- Poker Hand Strengths: Contains the compare function, which evaluates two poker hands on the river
- StrengthEvaluator2: Evaluator class that can evaluate hand strengths on any board and return the player's winning and losing probabilities against random hands, as well as the winning probabilities and standard deviations of average hands
- StrengthVillainFlop / StrangthVillainRiver: Jupyter notebooks in which the StrengthEvaluator is trained
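As a concrete illustration of what the compare function and StrengthEvaluator2 do, here is a minimal, self-contained sketch of river hand comparison and Monte Carlo equity against a random hand. This is stdlib-only illustration code: the card encoding (rank index 0-12, suit 0-3) and all function names are my own choices here, not the repository's actual API.

```python
import random
from itertools import combinations
from collections import Counter

def hand_rank(cards):
    """Score a 5-card hand as a tuple; higher tuples beat lower ones.
    cards: iterable of (rank, suit) with rank 0 (deuce) .. 12 (ace)."""
    ranks = sorted((r for r, _ in cards), reverse=True)
    counts = Counter(ranks)
    # Order ranks by (multiplicity, rank) so pairs/trips dominate kickers.
    by_count = sorted(counts, key=lambda r: (counts[r], r), reverse=True)
    flush = len({s for _, s in cards}) == 1
    # Straight check, including the wheel (A-2-3-4-5).
    unique = sorted(set(ranks), reverse=True)
    straight_high = None
    if len(unique) == 5:
        if unique[0] - unique[4] == 4:
            straight_high = unique[0]
        elif unique == [12, 3, 2, 1, 0]:  # ace plays low
            straight_high = 3
    shape = sorted(counts.values(), reverse=True)
    if straight_high is not None and flush:
        return (8, straight_high)
    if shape == [4, 1]:
        return (7, *by_count)
    if shape == [3, 2]:
        return (6, *by_count)
    if flush:
        return (5, *ranks)
    if straight_high is not None:
        return (4, straight_high)
    if shape == [3, 1, 1]:
        return (3, *by_count)
    if shape == [2, 2, 1]:
        return (2, *by_count)
    if shape == [2, 1, 1, 1]:
        return (1, *by_count)
    return (0, *ranks)

def best7(cards):
    """Best 5-card rank out of 7 cards (2 hole cards + full board)."""
    return max(hand_rank(c) for c in combinations(cards, 5))

def equity_vs_random(hole, board, n_trials=2000, seed=0):
    """Monte Carlo estimate of (P(win), P(tie)) against one random hand,
    dealing out the remaining board cards each trial."""
    rng = random.Random(seed)
    deck = [(r, s) for r in range(13) for s in range(4)]
    dead = set(hole) | set(board)
    deck = [c for c in deck if c not in dead]
    wins = ties = 0
    need = 5 - len(board)
    for _ in range(n_trials):
        draw = rng.sample(deck, 2 + need)
        villain, extra = draw[:2], draw[2:]
        full = board + extra
        hero, vill = best7(hole + full), best7(villain + full)
        if hero > vill:
            wins += 1
        elif hero == vill:
            ties += 1
    return wins / n_trials, ties / n_trials
```

For example, `equity_vs_random([(12, 0), (12, 1)], [])` estimates the preflop equity of pocket aces against a random hand, which should come out well above 70%.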
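The deterministic deep-Q approach described above boils down to two pieces: regressing the network's Q-values toward Bellman targets, and acting ε-greedily during self-play. A minimal numpy sketch of both, under my own naming assumptions (in the real training loop the Keras network would supply `q_next`; none of these names are the repository's API):

```python
import numpy as np
import random

def q_targets(q_next, rewards, dones, gamma=0.99):
    """Bellman targets for a DQN batch.
    q_next:  (batch, n_actions) Q-values of next states (from the target net).
    rewards: (batch,) immediate rewards, e.g. chips won or lost.
    dones:   (batch,) 1.0 where the hand ended, else 0.0."""
    # Terminal states contribute only the reward; otherwise bootstrap
    # with the best next-state action value.
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return int(np.argmax(q_values))
```

In self-play, both seats can share the same network: each experience tuple (state, action, reward, next state, done) goes into a replay buffer, and the Keras model is then fitted so its predicted Q-value for the taken action moves toward the target computed above.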