# Latent Motion

One of the first choreographed performances by an A.I. This project explores how a machine learning algorithm represents motion capture data. A variational autoencoder, a type of neural network, is used to generate humanoid poses. The dance is choreographed in real time, and audio-reactive VFX are added in Unity3D.

The dance is choreographed by sampling the latent space with a time-varying Lissajous curve.
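As a minimal sketch of this idea (the actual curve parameters and latent dimensionality are not given in the README, so everything below is illustrative): a Lissajous curve traces sinusoids of different frequencies on each latent axis, and a slow phase drift makes the path time-varying so the dance never exactly repeats.

```python
import numpy as np

def lissajous_latent(t, a=3.0, b=2.0, delta=np.pi / 2, drift=0.05):
    """Sample a point in a 2-D latent space from a Lissajous curve.

    a, b  -- frequency ratio between the two latent axes
    delta -- phase offset between the axes
    drift -- slow phase drift that makes the curve time-varying
    (all values are hypothetical, chosen for illustration)
    """
    phase = delta + drift * t
    return np.array([np.sin(a * t + phase), np.sin(b * t)])

# Walk the curve over time; each latent point would then be fed to the
# VAE decoder to produce one humanoid pose per animation frame.
trajectory = np.stack([lissajous_latent(t) for t in np.linspace(0.0, 10.0, 300)])
print(trajectory.shape)  # (300, 2)
```

Because the curve is continuous, consecutive latent samples are close together, so the decoded poses interpolate smoothly rather than jumping between unrelated positions.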

Videos with audio:

  • https://www.instagram.com/p/CAUDdFuFqVE/

  • https://www.instagram.com/p/CAX0-uxBuW5/

This project is a synthesis of:

  • Animation Autoencoder: a variational autoencoder is trained on motion capture data, and poses are sampled from its latent space. Built with TensorFlow Lite.

  • Smrvfx: a Unity sample project that shows how to use an animated skinned mesh as a particle source in a visual effect graph.

  • WASAPI: audio-reactive visual effects are driven by audio captured through the Windows Audio Session API.
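To make the Animation Autoencoder step concrete, here is a toy sketch of decoding a latent vector into a pose. All names and dimensions are assumptions, and the random linear layer stands in for the trained decoder; in the real project this would be a TensorFlow Lite model invoked per frame.

```python
import numpy as np

# Hypothetical dimensions -- the real model's sizes are not stated in the README.
LATENT_DIM = 8             # size of the VAE latent space
NUM_JOINTS = 21            # joints in the humanoid skeleton
POSE_DIM = NUM_JOINTS * 3  # e.g. one Euler rotation per joint

rng = np.random.default_rng(42)

# Stand-in for the trained decoder weights (a single random linear layer).
W = rng.normal(size=(LATENT_DIM, POSE_DIM))

def decode_pose(z):
    """Map a latent vector to a flat pose vector (toy linear decoder)."""
    return np.tanh(z @ W)  # tanh keeps the joint values in a bounded range

z = rng.normal(size=LATENT_DIM)  # sample a point in the latent space
pose = decode_pose(z)
print(pose.shape)  # (63,)
```

The point of the VAE here is that nearby latent vectors decode to similar poses, which is what lets a smooth curve through the latent space read as a continuous dance.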

Created with Unity 2019.3. Please note that it is not compatible with earlier versions of Unity.