forked from msmbuilder/vde

Variational Autoencoder for Dimensionality Reduction of Time-Series

shozebhaider/vde

Variational Dynamical Encoder (VDE)

Often the analysis of time-dependent chemical and biophysical systems produces high-dimensional time-series data for which it can be difficult to interpret which features are most salient in defining the observed dynamics. While recent work from our group and others has demonstrated the utility of time-lagged covariate models to study such systems, linearity assumptions can limit the compression of inherently nonlinear dynamics into just a few characteristic components. Recent work in the field of deep learning has led to the development of variational autoencoders (VAEs), which are able to compress complex datasets into simpler manifolds. We present the use of a time-lagged VAE, or variational dynamics encoder (VDE), to reduce complex, nonlinear processes to a single embedding with high fidelity to the underlying dynamics. We demonstrate how the VDE is able to capture nontrivial dynamics in a variety of examples, including Brownian dynamics and atomistic protein folding. Additionally, we demonstrate a method for analyzing the VDE model, inspired by saliency mapping, to determine what features are selected by the VDE model to describe dynamics. The VDE presents an important step in applying techniques from deep learning to more accurately model and interpret complex biophysics.
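As a rough illustration of the idea above: a VAE encodes each frame into a latent distribution and samples from it via the reparameterization trick; in a time-lagged VAE, the decoder is then trained to predict a *future* frame rather than reconstruct the input. The sketch below shows only the reparameterization step with a toy stand-in for the encoder (the `encode` function and its linear form are hypothetical, not the package's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Hypothetical stand-in for a neural-network encoder: returns a latent
    # mean and log-variance per frame. Illustration only.
    mu = 0.5 * x.sum(axis=-1, keepdims=True)
    log_var = np.zeros_like(mu)
    return mu, log_var

def reparameterize(mu, log_var):
    # Standard VAE reparameterization: z = mu + sigma * eps, eps ~ N(0, 1).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x_t = rng.standard_normal((4, 2))   # 4 frames, 2 input features
mu, log_var = encode(x_t)
z = reparameterize(mu, log_var)     # one latent sample per frame
print(z.shape)                      # (4, 1)
```

In the VDE, a decoder network would map `z` back to the frame one lag time later, which is what pushes the single embedding to capture the slow dynamics rather than just the geometry of individual frames.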

Requirements

  • numpy
  • pytorch
  • msmbuilder

Usage

Using the VDE is as easy as using any msmbuilder model:

from vde import VDE
from msmbuilder.example_datasets import MullerPotential

trajs = MullerPotential().get().trajectories

lag_time = 10
vde_mdl = VDE(2, lag_time=lag_time, hidden_layer_depth=3,
              sliding_window=True, cuda=True, n_epochs=10,
              learning_rate=5E-4)

latent_output = vde_mdl.fit_transform(trajs)
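The `lag_time` and `sliding_window` arguments control how training pairs are drawn from each trajectory: with a sliding window, every frame `x_t` is paired with the frame `lag_time` steps later. A small sketch of that pairing (an illustrative helper, not the package's internal code):

```python
import numpy as np

def lagged_pairs(traj, lag_time):
    # Pair each frame with the frame `lag_time` steps ahead (sliding window).
    # Hypothetical helper for illustration; not part of the vde package.
    return traj[:-lag_time], traj[lag_time:]

traj = np.arange(20, dtype=float).reshape(10, 2)  # 10 frames, 2 features
x_t, x_lag = lagged_pairs(traj, lag_time=3)
print(x_t.shape, x_lag.shape)  # (7, 2) (7, 2)
```

Larger lag times emphasize slower processes at the cost of fewer training pairs per trajectory.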

Cite

If you use this code in your work, please cite:

@article{Hernandez2017,
  author        = {{Hern{\'a}ndez}, C.~X. and {Wayment-Steele}, H.~K. and {Sultan}, M.~M. and {Husic}, B.~E. and {Pande}, V.~S.},
  title         = "{Variational Encoding of Complex Dynamics}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1711.08576},
  primaryClass  = "stat.ML",
  keywords      = {Statistics - Machine Learning, Physics - Biological Physics, Physics - Chemical Physics, Physics - Computational Physics, Quantitative Biology - Biomolecules},
  year          = 2017,
  month         = nov,
  adsurl        = {http://adsabs.harvard.edu/abs/2017arXiv171108576H},
}
