
A neural network that predicts time-optimal dynamical states for motion primitive generation


grkw/endstate_selector_model



Endstate Selector

As an undergraduate research assistant in the UCLA Verifiable & Control-Theoretic Robotics Laboratory (VECTR), I contributed to a novel motion planning strategy that intelligently samples the state space using motion primitives, given a coarse reference path generated by a discrete graph-based search algorithm. While my work did not make it into the final paper, it informed the development of the planner.

By exploiting key information inherent to the reference path, the proposed planner can strategically sample motion primitives in regions known to make progress toward the goal, leading to significant improvements in computation time and robustness in trajectory generation. Additionally, this framework allows for sampling in a higher-dimensional state space, i.e., acceleration, which contributes to smooth kinodynamic path plans. However, the number of primitives the planner must generate grows exponentially with the number of path waypoints and the number of state samples per waypoint, so the planner quickly becomes infeasible for online deployment. The first half of my thesis presents an in-depth analysis of selected primitives and several algorithms for benchmarking performance (greedy, greedy-lookahead, and random). The second half presents initial results using a fully-connected neural network trained in the behavior cloning (supervised learning) paradigm.
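To make the combinatorial blow-up concrete, here is a back-of-the-envelope count (the numbers below are illustrative, not figures from the paper): if the planner draws m candidate states at each of n waypoints and any sample at one waypoint can follow any sample at the previous one, the number of candidate primitive sequences is on the order of m^n.

```python
def num_candidate_sequences(samples_per_waypoint: int, num_waypoints: int) -> int:
    """Worst-case number of motion-primitive sequences when every sampled
    state at each waypoint can follow every sampled state at the previous one."""
    return samples_per_waypoint ** num_waypoints

# Even modest settings explode quickly:
print(num_candidate_sequences(5, 4))   # 625 sequences
print(num_candidate_sequences(5, 10))  # 9,765,625 sequences
```

This is why pruning the candidate set, e.g. by learning which endstates an expert planner would pick, matters for online use.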

You can browse my final report and research update slides.

Contributions

The majority of my code contributions are in a private repo on the VECTR lab account. My capstone thesis makes the following contributions:

  • Implementation and analysis of greedy, greedy-lookahead, and random state sampling strategies
  • An in-depth analysis of selected state samples
  • Infrastructure for learning-enabled state sampling features
  • A simple fully-connected neural network for state sampling, trained using the behavior cloning paradigm of imitation learning (this repo)

Architecture

  • Two fully-connected hidden layers, each with batch normalization, ReLU activation, and dropout regularization
  • Softmax output layer
(Figure: `nn_IO`, network input/output diagram)
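The architecture above can be sketched in PyTorch as follows. This is a minimal illustration, not the repo's actual code: the layer widths, dropout rate, and input dimension are placeholders.

```python
import torch
import torch.nn as nn

class EndstateSelector(nn.Module):
    """Fully-connected classifier over candidate endstates.

    Two hidden layers, each with batch norm, ReLU, and dropout, followed by
    a linear output layer producing logits over the endstate classes.
    Layer sizes here are illustrative placeholders.
    """

    def __init__(self, in_dim: int, hidden_dim: int = 128,
                 num_classes: int = 37, p_drop: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            # Hidden layer 1: linear -> batch norm -> ReLU -> dropout
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            # Hidden layer 2: same structure
            nn.Linear(hidden_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            # Output layer: logits over the endstate classes
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return raw logits; apply softmax only when probabilities are
        # needed, since nn.CrossEntropyLoss expects logits as input.
        return self.net(x)
```

Keeping the softmax out of the module and applying it at inference time is the standard PyTorch pattern when training with cross-entropy.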

Initial results

With 37 classification options, random guessing would yield about 2.7% accuracy (1/37). After training on a small dataset with ~100 training examples per class, I attained a test accuracy of 8.0%, roughly three times the random baseline. Although this number is far below 100%, it shows that learning is possible.
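In the behavior cloning paradigm, this reduces to supervised classification: path-derived features in, the expert planner's chosen endstate index out, trained with cross-entropy. A minimal sketch of one training step follows; the feature dimension, hyperparameters, and random data are placeholders, not the repo's actual setup.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 37  # candidate endstates, matching the 37-way classification

# Stand-in classifier; the real model has two hidden layers with
# batch norm and dropout (see the architecture sketch above).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # expects logits + integer class labels

# One behavior-cloning step on a (features, expert_choice) batch.
features = torch.randn(32, 16)                         # path-derived features (placeholder)
expert_choice = torch.randint(0, NUM_CLASSES, (32,))   # labels from the offline expert planner

optimizer.zero_grad()
loss = loss_fn(model(features), expert_choice)
loss.backward()
optimizer.step()
```

The "expert" here is the offline planner that exhaustively evaluates primitives; the network is trained to imitate its selections so that, at deployment, the expensive search can be skipped.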

(Figure: `train_loss`, training loss curve)

Acknowledgements
