
TensorFlow Lite is used to create new humanoid poses from a variational autoencoder trained on motion capture data. Sequences of poses are generated to choreograph a dance. Audio-reactive VFX are made with Unity3D.


smaerdlatigid/LatentMotion-VFX


Latent Motion

One of the first choreographed performances by an A.I. This project explores the representation of motion capture data through the eyes of a machine learning algorithm. A variational autoencoder, a type of neural network, is used to generate humanoid poses. A dance is choreographed in real time and audio-reactive VFX are added with Unity3D.

A dance is choreographed by sampling the latent space along a time-varying Lissajous curve.
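The idea above can be sketched in a few lines of NumPy: each latent coordinate oscillates sinusoidally at its own frequency, so sweeping time traces a smooth closed curve through the latent space, and decoding each point yields one pose per frame. This is a minimal illustrative sketch, not the repository's code; the latent dimension, frequencies, and function names are assumptions.

```python
import numpy as np

def lissajous_latents(t, dim=8, freqs=None, phases=None):
    """Return one point on a Lissajous curve in a `dim`-dimensional latent space.

    Each coordinate is a sinusoid with its own frequency and phase, so the
    trajectory winds smoothly through the space as t advances.
    """
    freqs = np.arange(1, dim + 1) if freqs is None else np.asarray(freqs)
    phases = np.zeros(dim) if phases is None else np.asarray(phases)
    return np.sin(freqs * t + phases)

# Sweep t over one period to get a sequence of latent vectors,
# one per animation frame; each would then be fed to the VAE decoder.
ts = np.linspace(0.0, 2 * np.pi, 300)
trajectory = np.stack([lissajous_latents(t) for t in ts])  # shape (300, 8)
```

Because every coordinate stays in [-1, 1] and varies continuously, consecutive decoded poses differ only slightly, which is what makes the generated motion read as a dance rather than a jumble of unrelated poses.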

Videos with audio:

  • https://www.instagram.com/p/CAUDdFuFqVE/
  • https://www.instagram.com/p/CAX0-uxBuW5/

This project is a synthesis of:

  • Animation Autoencoder - a variational autoencoder is trained on motion capture data and poses are sampled from its latent space. Built with TensorFlow Lite.

  • Smrvfx - a Unity sample project that shows how to use an animated skinned mesh as a particle source in a visual effect graph.

  • WASAPI - audio-reactive visual effects are driven by the Windows Audio Session API.
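To make the pose-generation step concrete, here is a toy sketch of sampling the VAE's latent prior and decoding it into a pose vector. The decoder is a stand-in random linear map with a tanh squash; in the real project this call would instead run the trained decoder through `tf.lite.Interpreter`. The latent size, pose size (e.g. joints times rotation channels), and all names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, POSE_DIM = 8, 51  # e.g. 17 joints x 3 rotation channels (assumed)

# Stand-in for the trained decoder: a fixed random linear map plus tanh.
# The actual project would call interpreter.invoke() on the .tflite model here.
W = rng.normal(size=(LATENT_DIM, POSE_DIM)) * 0.3

def decode_pose(z):
    """Map one latent vector to a flat pose vector of joint rotations."""
    return np.tanh(z @ W)

z = rng.normal(size=LATENT_DIM)   # sample from the VAE prior N(0, I)
pose = decode_pose(z)             # shape (POSE_DIM,)
```

Sampling from the standard-normal prior is what lets a trained VAE invent poses it never saw: any nearby latent point decodes to a plausible interpolation of the training motion.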

Created with Unity 2019.3. Please note that it is not compatible with earlier versions of Unity.
