
# UNCERT: Semi-Model-Based RL with Uncertainty


This project was developed by Simon Lund, Sophia Sigethy, Georg Staber, and Malte Wilhelm for the Applied Reinforcement Learning SS 21 course at LMU.


## 📝 Deliverables

As part of the course we created an extensive report as well as a final presentation of the project.

## 📹 Videos

The RL agent swings up on either side.

cartpole_75k_cos.mp4

The RL agent avoids the noisy section on the left and swings up on the right side.

cartpole_75k_cos_uncert.mp4
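
The "75k" in the file names presumably refers to the number of training steps. For orientation only, here is a minimal sketch of a 75k-step PPO training run. It uses stable-baselines3 and the standard CartPole-v1 environment as stand-ins; this README does not name the project's training library, and the project's custom swing-up environment differs from CartPole-v1.

```python
# Hedged sketch: a generic 75k-step PPO training run.
# Assumptions (not confirmed by this README): stable-baselines3 as the RL
# library, and gymnasium's CartPole-v1 as a stand-in for the project's
# custom swing-up cartpole environment.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")          # stand-in environment
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=75_000)    # matches the 75k in the video names
model.save("ppo_cartpole_75k")
```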

## ⚙️ Installation

### Open in Gitpod

Start a development environment in your browser by opening the repository in Gitpod. This gets you going quickly, but does not include graphical output from the gym environment.

### Local Installation

```bash
git clone https://github.com/github-throwaway/ARL-Model-RL-Unsicherheit.git
cd ARL-Model-RL-Unsicherheit/
pip install -r requirements.txt  # or: python setup.py install
```

## How to run

### 🙂 Simple

Runs the preconfigured system with a trained model and an agent using the default configuration.

```bash
cd src/
python main.py
```

### 🏆 Advanced

For the sake of usability, we implemented an argument parser. By passing predefined arguments to the Python program call, it is possible to start different routines and to change the hyperparameters used by the algorithms. This lets the user run multiple experiments with different values without altering the code, which is especially helpful when fine-tuning hyperparameters for reinforcement learning algorithms such as PPO. For an overview of all possible arguments and how to use them, call `python main.py --help`.
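
As an illustration of the pattern only, such a parser might look like the sketch below. The flag names are hypothetical and the defaults are taken from the configuration table further down; consult `python main.py --help` for the real interface.

```python
# Hypothetical argparse sketch -- flag names are illustrative, not the
# project's actual interface; see `python main.py --help` for that.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="UNCERT training routines")
    parser.add_argument("--epochs", type=int, default=100,
                        help="training epochs for the dynamics model")
    parser.add_argument("--reward", default="cos",
                        choices=["centered", "right", "boundaries",
                                 "best", "cos", "xpos_theta_uncert"],
                        help="reward function to use")
    parser.add_argument("--noise-offset", type=float, default=0.3,
                        help="noise offset applied in the noisy sector")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())
```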

## 🛠️ Configuration

The project was evaluated using the following parameters.

| Parameter          | Value                                                                     |
| ------------------ | ------------------------------------------------------------------------- |
| noisy sector       | 0 – π (left half of unit circle)                                          |
| noise offset       | 0.3                                                                       |
| observation space  | continuous, 5-dimensional (x pos, x dot, theta dot, theta sin, theta cos) |
| action space       | discrete, 10 actions                                                      |
| NN epochs          | 100                                                                       |
| time series length | 4                                                                         |
| reward function    | [centered, right, boundaries, best, cos, xpos_theta_uncert]               |
| RL algorithm       | PPO                                                                       |
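
To make the table concrete, the sketch below reconstructs how the 5-dimensional observation and the noisy sector might fit together. It is an illustrative reading of the parameters above, not the project's actual code; in particular, the choice of uniform noise is an assumption.

```python
import numpy as np

NOISY_SECTOR = (0.0, np.pi)  # left half of the unit circle (per the table)
NOISE_OFFSET = 0.3           # noise offset (per the table)

def observe(x_pos, x_dot, theta, theta_dot, rng):
    """Build the 5-D observation; perturb theta inside the noisy sector.

    Illustrative only -- uniform noise is an assumption, not confirmed.
    """
    theta = theta % (2.0 * np.pi)
    if NOISY_SECTOR[0] <= theta <= NOISY_SECTOR[1]:
        theta += rng.uniform(-NOISE_OFFSET, NOISE_OFFSET)
    # Order per the table: x pos, x dot, theta dot, sin(theta), cos(theta)
    return np.array([x_pos, x_dot, theta_dot, np.sin(theta), np.cos(theta)])

# Example usage:
# rng = np.random.default_rng(0)
# obs = observe(0.0, 0.1, 0.5, -0.2, rng)
```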

