Reinforcement Learning for Quadcopter Control

This repository is a fork of gym-pybullet-drones and implements a reinforcement learning based control policy inspired by Penicka et al. [1].

Documentation

For documentation and a summary of the results, see our four-page whitepaper.

Result

RL Control Result

  • The drone can follow arbitrary trajectories.
  • The observation includes the next two target waypoints. When these waypoints are close together, the drone reaches the target more slowly.
  • The learned policy corresponds to the obtained result after slow-phase training in Penicka et al. [1].
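To make the waypoint observation concrete, here is a minimal sketch of how relative vectors to the next two waypoints could be assembled. The function name and the exact observation layout are assumptions for illustration; the repository's actual observation space may include additional state.

```python
import numpy as np

def waypoint_observation(drone_pos: np.ndarray,
                         waypoints: np.ndarray,
                         idx: int) -> np.ndarray:
    """Relative vectors from the drone to the next two target waypoints
    (hypothetical layout; the real observation likely adds drone state)."""
    w1 = waypoints[idx] - drone_pos
    # Clamp the index so the last waypoint is repeated at the end of the track.
    w2 = waypoints[min(idx + 1, len(waypoints) - 1)] - drone_pos
    return np.concatenate([w1, w2])
```

Because both waypoints are expressed relative to the drone, the policy sees how far apart they are, which is what lets it slow down when they are close together.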

Implemented New Features

  • Reward implementation of RL policy proposed by Penicka et al. [1].
  • Attitude control action type. In gym-pybullet-drones, only motor-level control using PWM signals is implemented. This repository extends the original implementation and adds a wrapper for sending attitude commands (thrust and bodyrates).
  • Random trajectory generation via polynomial minimum-snap trajectory optimization using large_scale_traj_optimizer [2] for training and test set generation. Implementation in the trajectories subfolder.
  • Scripts for benchmarking the policy by computing basic metrics such as mean and max deviation from the target trajectory and time until completion.
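A minimal sketch of the attitude-command action type described above, i.e. collective thrust plus body rates. The class and function names are assumptions for illustration, not the repository's actual API; the real wrapper translates such commands into motor-level PWM signals.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AttitudeCommand:
    """Collective thrust plus body rates (the attitude action format)."""
    thrust: float           # normalized collective thrust in [0, 1]
    body_rates: np.ndarray  # roll/pitch/yaw rates in rad/s

def clip_command(cmd: AttitudeCommand, max_rate: float = 6.0) -> AttitudeCommand:
    """Clamp a raw policy output to plausible actuator limits
    before handing it to the low-level controller."""
    return AttitudeCommand(
        thrust=float(np.clip(cmd.thrust, 0.0, 1.0)),
        body_rates=np.clip(cmd.body_rates, -max_rate, max_rate),
    )
```

Acting on thrust and body rates instead of individual motor PWMs gives the policy a lower-dimensional, more transferable action space, which is the motivation for adding this wrapper on top of gym-pybullet-drones.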

Setup

Tested on Arch Linux and Ubuntu. Note that Eigen must be installed on the system. On Linux, install it via your package manager. E.g. on Ubuntu:

$ sudo apt-get install libeigen3-dev

It is strongly recommended to use a Python virtual environment such as conda or venv.

  1. Clone the repository. It must be pulled recursively:
$ git clone git@github.com:danielbinschmid/RL-pybullets-cf.git
$ cd RL-pybullets-cf
$ git submodule update --init --recursive
  2. Initialise a virtual environment. Tested with Python 3.10.13. E.g.:
$ pyenv install 3.10.13
$ pyenv local 3.10.13
$ python3 -m venv ./venv
$ source ./venv/bin/activate
$ pip3 install --upgrade pip
  3. Install dependencies and build
$ pip3 install -e . # if needed, `sudo apt install build-essential` to install `gcc` and build `pybullet`

Usage

Scripts for training, testing and visualization are provided.

Training

To train the RL policy from scratch with our implementation, run

$ cd runnables
$ ./train_rl.sh

Training produces a folder containing the policy weights; this folder can later be passed to the visualization and testing scripts.

Testing

To run our small benchmark suite, run

$ cd runnables
$ ./test_rl.sh
$ ./test_pid.sh

Out of the box, it will use our pre-trained weights. Each bash script produces a .json file with the benchmarks.
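The benchmark .json files can be post-processed with a few lines of Python. The key names below are hypothetical; check the actual files emitted by test_rl.sh and test_pid.sh for the real schema.

```python
import json

# Hypothetical benchmark report; the real scripts' key names may differ.
raw = '{"mean_deviation": 0.12, "max_deviation": 0.45, "completion_time": 7.8}'
report = json.loads(raw)

def summarize(r: dict) -> str:
    """One-line summary of a benchmark report."""
    return (f"mean dev {r['mean_deviation']:.2f} m, "
            f"max dev {r['max_deviation']:.2f} m, "
            f"completed in {r['completion_time']:.1f} s")
```

This makes it easy to compare the RL policy against the PID baseline by summarizing both reports side by side.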

Visualization

To just visualize the control policy, run

$ cd runnables
$ ./vis_rl.sh

Out of the box, it will use our pre-trained weights and randomly generated trajectories.

Evaluation track generation

To generate a test set with random tracks, run

$ cd runnables/utils
$ python gen_eval_tracks.py
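The core idea behind random track generation can be sketched as sampling waypoints and fitting a trajectory through them. The snippet below only shows a hypothetical sampling step; gen_eval_tracks.py additionally fits a minimum-snap polynomial through its waypoints via large_scale_traj_optimizer [2].

```python
import numpy as np

def random_track(n_waypoints: int = 5, extent: float = 2.0,
                 seed: int = 0) -> np.ndarray:
    """Sample random 3D waypoints inside a cube, keeping them above ground.
    Sketch only; the real script also fits a smooth trajectory through them."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-extent, extent, size=(n_waypoints, 3))
    pts[:, 2] = np.abs(pts[:, 2]) + 0.5  # altitude offset above the floor
    return pts
```

Fixing the seed makes the evaluation tracks reproducible, so the same test set can be reused across benchmark runs.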

Plot generation

To generate the plots used in our whitepaper, run

$ cd runnables
$ ./generate_plots.sh

Dev

  • Autoformatting with black.

Test

Run all tests from the top folder with

pytest tests/

Common Issues

References
