This project trains a single agent in the Reacher environment (version 1), which simulates controlling a double-jointed arm that must reach target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location, so the goal of the agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocity of the arm. Each action is a vector of four numbers, corresponding to the torques applicable to the two joints. Every entry in the action vector must be a number between -1 and 1.
The task is episodic; in order to solve the environment, the agent must achieve an average score of +30 over 100 consecutive episodes.
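As a quick illustration of this criterion, the running average can be tracked with a 100-episode window. This is a minimal sketch, where `run_episode()` is a hypothetical helper standing in for one training episode, not a function from the project code:

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)  # keeps only the last 100 episode scores

for episode in range(1, 1501):
    score = run_episode()          # hypothetical helper: runs one episode, returns its score
    scores_window.append(score)
    # Solved once the average over 100 consecutive episodes reaches +30
    if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
        print(f'Environment solved in {episode} episodes!')
        break
```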
Follow the instructions in the DRLND GitHub repository to set up your Python environment. These instructions can be found in README.md at the root of the repository. By following these instructions, you will install PyTorch, the ML-Agents toolkit, and a few more Python packages required to complete the project.
(For Windows users) The ML-Agents toolkit supports Windows 10. While it might be possible to run the ML-Agents toolkit using other versions of Windows, it has not been tested on other versions. Furthermore, the ML-Agents toolkit has not been tested on a Windows VM such as Bootcamp or Parallels.
For this project, you will not need to install Unity - this is because the environment is already built for you, and you can download it from one of the links below. You need only select the environment that matches your operating system:
Version 1: One (1) Agent
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here. Note: the agent was implemented and trained on this version!
Version 2 is not currently implemented (it will not work right now, but the links are kept for future implementations):
Version 2: Twenty (20) Agents
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
Then, place the file in the `p2_continuous-control/` folder in the DRLND GitHub repository, and unzip (or decompress) the file.
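Once unzipped, the environment can be loaded from Python through the `unityagents` package installed during setup. A minimal sketch, assuming the Windows (64-bit) build with its default file name (adjust the path to your OS and folder layout):

```python
from unityagents import UnityEnvironment

# Path to the unzipped environment file (assumed name; adjust for your OS)
env = UnityEnvironment(file_name='Reacher_Windows_x86_64/Reacher.exe')

# DRLND environments expose a "brain" through which the agent acts
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```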
(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
Follow the instructions in `Continuous_Control_PPO_win.ipynb` or `Continuous_Control_PPO_Linux.ipynb` to train your own agent or test the already trained agent:
1. Start the Environment (currently works only with version 1, the single-agent environment)
2. Examine the State and Action Spaces
3. Take Random Actions in the Environment (optional; a sketch of this step follows the list)
4. Train the Agent (check the Further Modifications section, or skip to step 5 to test the already trained agent)
5. Test the trained Agent
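For step 3, taking random actions typically looks like the following. This is a minimal sketch using the standard `unityagents` API, assuming `env`, `brain_name`, and `action_size` have been set up in steps 1-2:

```python
import numpy as np

env_info = env.reset(train_mode=False)[brain_name]  # reset the environment
state = env_info.vector_observations[0]             # current state (33 values)
score = 0
while True:
    action = np.clip(np.random.randn(action_size), -1, 1)  # random action in [-1, 1]
    env_info = env.step(action)[brain_name]                # send the action to the environment
    state = env_info.vector_observations[0]                # next state
    score += env_info.rewards[0]                           # accumulate the reward
    if env_info.local_done[0]:                             # episode finished
        break
print(f'Score: {score}')
```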
In searching for better performance you can modify:
- The training process, by tuning the hyperparameters (check Report.md for my hyperparameter search history): `'episode_count': 1500, 'discount_rate': 0.95, 'gradient_clip': 15, 'buffer_size': 3072, 'optimization_epochs': 2, 'ppo_clip': 0.2, 'batch_size': 512, 'adam_learning_rate': 3e-4, 'adam_epsilon': 1e-4`
- Any part of the agent's neural network policy architecture in section 4.1 (see the sketch after this list): Actor = `[state_size, 256] -> ReLU -> [64] -> ReLU -> [action_size] -> tanh`; Critic = `[state_size, 512] -> ReLU -> [256] -> ReLU -> [64] -> [1]`; the PPO policy samples from a Normal distribution with a standard deviation of 1
- Implementing a different policy search algorithm (such as DDPG, A2C, and others)
- Implementing a multi-agent version
- Trying an attention mechanism, which would be interesting for this kind of problem
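To make the first two items concrete, here is a minimal PyTorch sketch of the hyperparameter dictionary and of actor/critic networks matching the layer sizes described above. The class name `ActorCriticPolicy` is illustrative, not necessarily the name used in the project code:

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

# Hyperparameters as listed above (values from the search documented in Report.md)
config = {
    'episode_count': 1500, 'discount_rate': 0.95, 'gradient_clip': 15,
    'buffer_size': 3072, 'optimization_epochs': 2, 'ppo_clip': 0.2,
    'batch_size': 512, 'adam_learning_rate': 3e-4, 'adam_epsilon': 1e-4,
}

class ActorCriticPolicy(nn.Module):
    """Actor-critic with the layer sizes described in the list above."""

    def __init__(self, state_size=33, action_size=4):
        super().__init__()
        # Actor: [state_size, 256] -> ReLU -> [64] -> ReLU -> [action_size] -> tanh
        self.actor = nn.Sequential(
            nn.Linear(state_size, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, action_size), nn.Tanh(),
        )
        # Critic: [state_size, 512] -> ReLU -> [256] -> ReLU -> [64] -> [1]
        self.critic = nn.Sequential(
            nn.Linear(state_size, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64),
            nn.Linear(64, 1),
        )
        # PPO policy: Normal distribution with a fixed standard deviation of 1
        self.std = torch.ones(action_size)

    def forward(self, state):
        mean = self.actor(state)       # action mean, squashed into [-1, 1] by tanh
        dist = Normal(mean, self.std)  # Gaussian policy centered on the mean
        value = self.critic(state)     # state-value estimate
        return dist, value
```

The optimizer would then be built with the listed Adam settings, e.g. `torch.optim.Adam(policy.parameters(), lr=config['adam_learning_rate'], eps=config['adam_epsilon'])`.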
For more information, check Report.md.