Welcome to the Deep Convolutional Q-Learning for Pac-Man project! This project implements a deep reinforcement learning agent that uses a Convolutional Neural Network (CNN) and Deep Q-Learning (DQN) to play the classic Atari game Pac-Man. The solution is built on the Gymnasium (formerly OpenAI Gym) Atari environments.
- Project Overview
- Getting Started
- Requirements
- Installation
- How It Works
- Training the Agent
- Results
- Contributing
- License
Pac-Man is one of the most popular arcade games, and this project aims to train an AI agent to master the game using a combination of Convolutional Neural Networks and Q-Learning. Deep Q-Learning allows the agent to learn from game states and rewards, optimizing its decision-making over time.
The key objectives of this project are:
- To implement a reinforcement learning framework to play Pac-Man.
- To train a CNN-based Q-learning agent using the Pac-Man Atari environment.
- To achieve a balance between exploration and exploitation for effective learning.
To get started, you will need to install the required libraries, including `gymnasium`, `atari-py`, and other dependencies that allow interaction with the Pac-Man environment. Follow the installation instructions below to set up your environment.
Ensure you have the following dependencies installed:
- Python 3.10+
- Gymnasium
- TensorFlow or PyTorch (for implementing the neural network)
- Numpy
- Gymnasium's Atari environments (ALE)
Additional Libraries:
- `gymnasium[accept-rom-license, atari]` (for the Atari game environments)
- `pygame` (for rendering the game)
- `box2d-py` (for physics simulation)
To set up and run the project locally, follow these steps:
- Clone the Repository:
  ```bash
  git clone https://github.com/yourusername/pacman-dqn.git
  cd pacman-dqn
  ```
- Install Dependencies:
  ```bash
  pip install gymnasium
  pip install "gymnasium[atari, accept-rom-license]"
  apt-get install -y swig
  pip install "gymnasium[box2d]"
  ```
- Download Atari ROMs (only needed if the ROMs were not already installed by the `accept-rom-license` extra):
  ```bash
  python -m atari_py.import_roms /path/to/your/roms
  ```
- Verify Installation:
  ```python
  import gymnasium as gym

  env = gym.make("ALE/Pacman-v5")
  env.reset()
  ```
Deep Q-Learning (DQN) is a reinforcement learning algorithm that uses a neural network to approximate the Q-value function. The Q-value function predicts the expected future reward for taking a particular action in a given state. The neural network is trained on game frames (states) to minimize the difference between predicted Q-values and target Q-values, learned through gameplay.
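Concretely, for a stored transition (state s, action a, reward r, next state s′), the target is the bootstrapped estimate r + γ · max_a′ Q(s′, a′), where γ is the discount factor, and the loss is the squared (or Huber) difference between this target and the network's prediction Q(s, a).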
The CNN extracts features from raw pixel input (frames) of the Pac-Man game. These features represent game states, which the Q-network uses to decide on actions.
We use the `Gymnasium` library, which provides access to the Pac-Man Atari environment (a minimal interaction loop is sketched after the list below):
- State: The screen pixel data of the game.
- Action Space: Possible movements of Pac-Man (up, down, left, right).
- Rewards: Positive rewards for collecting points, negative rewards for losing a life.
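As a minimal sketch of how the environment is driven through Gymnasium's standard `reset`/`step` API, with random actions standing in for a trained agent:

```python
import gymnasium as gym

# Minimal interaction loop: random actions stand in for the agent's policy.
env = gym.make("ALE/Pacman-v5")
state, info = env.reset()

for _ in range(1000):
    action = env.action_space.sample()                            # random action
    state, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:                                   # game over or time limit
        state, info = env.reset()

env.close()
```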
To train the DQN agent, follow these steps:
```python
import gymnasium as gym

# Step 1: create the Pac-Man environment.
env = gym.make("ALE/Pacman-v5")
state, info = env.reset()   # Gymnasium's reset() returns (observation, info)
```
Create a CNN-based Q-network to approximate the Q-value function. This network takes in the game state (a series of frames) and outputs Q-values for each possible action.
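A minimal sketch of such a network in PyTorch (the project could equally use TensorFlow), assuming the common Atari preprocessing of four stacked 84x84 grayscale frames; the `QNetwork` name and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Convolutional Q-network mapping stacked game frames to one Q-value per action."""

    def __init__(self, n_actions: int, in_channels: int = 4):
        super().__init__()
        # DQN-style feature extractor for 84x84 inputs.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),          # one Q-value per possible action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale raw pixel values to [0, 1] before the convolutions.
        return self.head(self.features(x / 255.0))
```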
- Initialize replay memory and store the agent’s experiences.
- For each episode:
- Get the current state.
- Choose an action based on the epsilon-greedy policy.
- Execute the action, receive a reward, and observe the next state.
- Store the transition in replay memory.
- Update the Q-network by sampling mini-batches from the replay memory (a condensed sketch of this loop appears below).
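The sketch below ties these steps together. It assumes the illustrative `QNetwork` class above, uses placeholder hyperparameters, a simple resize-and-stack preprocessing, and omits the separate target network that full DQN normally maintains, so treat it as a starting point rather than the project's exact training script:

```python
import random
from collections import deque

import gymnasium as gym
import torch
import torch.nn.functional as F

# Placeholder hyperparameters, not tuned values from this project.
GAMMA, EPSILON, BATCH_SIZE, MEMORY_SIZE, NUM_STEPS = 0.99, 0.1, 32, 50_000, 10_000

env = gym.make("ALE/Pacman-v5", obs_type="grayscale")
n_actions = env.action_space.n
q_net = QNetwork(n_actions)                                   # the CNN sketched above
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
memory = deque(maxlen=MEMORY_SIZE)                            # replay memory of transitions

def preprocess(frame):
    # Downscale a (210, 160) grayscale frame to 84x84; pixel scaling happens in the network.
    t = torch.as_tensor(frame, dtype=torch.float32).view(1, 1, *frame.shape)
    return F.interpolate(t, size=(84, 84), mode="area").squeeze(0)   # shape (1, 84, 84)

def reset_stack():
    frame, _ = env.reset()
    frames = deque([preprocess(frame)] * 4, maxlen=4)         # stack of the last 4 frames
    return frames, torch.cat(list(frames))                    # state shape (4, 84, 84)

frames, state = reset_stack()
for step in range(NUM_STEPS):
    # Epsilon-greedy policy: explore with probability EPSILON, otherwise exploit.
    if random.random() < EPSILON:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            action = int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

    # Execute the action, observe the outcome, and store the transition.
    frame, reward, terminated, truncated, _ = env.step(action)
    frames.append(preprocess(frame))
    next_state = torch.cat(list(frames))
    memory.append((state, action, float(reward), next_state, float(terminated)))
    if terminated or truncated:
        frames, state = reset_stack()
    else:
        state = next_state

    # Update the Q-network from a random mini-batch of stored transitions.
    if len(memory) >= BATCH_SIZE:
        states, actions, rewards, next_states, dones = zip(*random.sample(memory, BATCH_SIZE))
        states_t = torch.stack(states)
        next_t = torch.stack(next_states)
        actions_t = torch.as_tensor(actions, dtype=torch.int64).unsqueeze(1)
        rewards_t = torch.as_tensor(rewards)
        dones_t = torch.as_tensor(dones)

        q_pred = q_net(states_t).gather(1, actions_t).squeeze(1)
        with torch.no_grad():
            q_target = rewards_t + GAMMA * (1.0 - dones_t) * q_net(next_t).max(dim=1).values
        loss = F.smooth_l1_loss(q_pred, q_target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

env.close()
```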
After training, evaluate the agent’s performance by letting it play several episodes and observe its score.
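As a rough sketch, evaluation can run the greedy (argmax) policy for a handful of episodes and average the scores; this reuses the `env`, `q_net`, `preprocess`, and `reset_stack` names introduced in the training sketch above, which are illustrative rather than part of the project's codebase:

```python
def evaluate(q_net, n_episodes: int = 5) -> float:
    """Run the greedy (argmax) policy for a few episodes and return the mean score."""
    scores = []
    for _ in range(n_episodes):
        frames, state = reset_stack()                 # helpers from the training sketch
        done, total = False, 0.0
        while not done:
            with torch.no_grad():
                action = int(q_net(state.unsqueeze(0)).argmax(dim=1).item())
            frame, reward, terminated, truncated, _ = env.step(action)
            frames.append(preprocess(frame))
            state = torch.cat(list(frames))
            total += float(reward)
            done = terminated or truncated
        scores.append(total)
    return sum(scores) / len(scores)

print(f"Average score over 5 episodes: {evaluate(q_net):.1f}")
```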
The training process produces a well-optimized agent that can play Pac-Man efficiently. The DQN agent learns to:
- Avoid ghosts.
- Collect pellets and power-ups.
- Maximize score through strategic movement.
Results of the trained agent, including performance graphs and gameplay videos, can be found in the `results` directory.
Contributions are welcome! If you'd like to contribute to the project, please follow these steps:
- Fork the repository.
- Create a new branch: `git checkout -b feature-branch`.
- Make your changes.
- Submit a pull request.
Feel free to reach out if you have any questions or suggestions for improvements. Happy coding and happy gaming!
This project is licensed under the MIT License - see the LICENSE file for details.