Learning Hamiltonian Dynamics with symplectic time-reversible neural networks
Creator: Riccardo Valperga.
Figure: (a) Simple MLP. (b) Symplectic neural networks. (c) Time-reversible neural networks (ours).
A JAX-based implementation of time-reversible neural networks from Valperga et al. (2022).
Set up an environment with `python>=3.9` (earlier versions have not been tested).

The following packages are installed very differently depending on the system configuration. For this reason, they are not included in the `requirements.txt` file:
- `jax >= 0.4.11`. See the official JAX installation guide for more details.
- `pytorch`. The CPU version is suggested because it is only used for loading the data. See the official PyTorch installation guide.
After having installed the above packages, run:

pip install -r requirements.txt

This installs all the remaining packages needed to run the code.
Consider the time-periodic Hamiltonian of a driven pendulum: a simple pendulum with natural frequency ν driven by a time-periodic forcing (see Valperga et al. 2022 for the explicit form).
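As a rough sketch of how such trajectories could be generated, the snippet below integrates Hamilton's equations for an *assumed* driven-pendulum Hamiltonian H(q, p, t) = p²/2 − ν²(1 + ε cos ωt) cos q (the exact form and parameter values used in this repository may differ) with a semi-implicit symplectic Euler step in JAX:

```python
import jax
import jax.numpy as jnp

# Hypothetical parameters: natural frequency, driving amplitude, driving frequency.
NU, EPS, OMEGA = 1.0, 0.1, 2.0

def force(q, t):
    # -dH/dq for the assumed Hamiltonian H = p**2/2 - NU**2 * (1 + EPS*cos(OMEGA*t)) * cos(q)
    return -NU**2 * (1.0 + EPS * jnp.cos(OMEGA * t)) * jnp.sin(q)

def symplectic_euler_step(state, t, dt=0.01):
    # Semi-implicit (symplectic) Euler: update p first, then q with the new p.
    q, p = state
    p = p + dt * force(q, t)
    q = q + dt * p
    return (q, p), jnp.array([q, p])

def trajectory(q0, p0, n_steps=1000, dt=0.01):
    ts = jnp.arange(n_steps) * dt
    step = lambda s, t: symplectic_euler_step(s, t, dt)
    _, traj = jax.lax.scan(step, (q0, p0), ts)
    return traj  # shape (n_steps, 2): columns are (q, p)

traj = trajectory(jnp.array(0.5), jnp.array(0.0))
```

Semi-implicit Euler preserves the symplectic structure of the flow, which keeps long trajectories qualitatively stable; a higher-order symplectic integrator such as leapfrog would be a drop-in replacement.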
The repository is structured as follows:
- `./config`: configuration files for the tasks.
- `./pendulum_data`: dataset of state-space variables from a simulated driven pendulum.
- `./dataset`: package that generates the dataset.
- `./models`: package containing the models, including Hénon maps and normalizing flows.
- `train.py`: trains time-reversible neural networks.
- `test.py`: tests forecasting.
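To illustrate the kind of building block named above (a sketch under assumptions, not the repository's actual layer), a Hénon-like map (q, p) ↦ (p, −q + V(p)) has a Jacobian with unit determinant for any potential V, so parameterizing V with a small MLP yields a trainable symplectic layer:

```python
import jax
import jax.numpy as jnp

def mlp_potential(params, p):
    # Tiny one-hidden-layer MLP V(p): scalar in, scalar out (hypothetical architecture).
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * p + b1)
    return jnp.sum(w2 * h) + b2

def henon_layer(params, state):
    # Hénon-like map (q, p) -> (p, -q + V(p)).
    # Its Jacobian [[0, 1], [-1, V'(p)]] has determinant 1 for any V,
    # so the layer is symplectic (area-preserving).
    q, p = state
    return jnp.array([p, -q + mlp_potential(params, p)])

params = (jnp.ones(8) * 0.3, jnp.zeros(8), jnp.ones(8) * 0.1, 0.0)
state = jnp.array([0.4, -0.2])
out = henon_layer(params, state)

# Area-preservation check: |det J| should equal 1.
J = jax.jacobian(lambda s: henon_layer(params, s))(state)
det = jnp.linalg.det(J)
```

The inverse map is (q′, p′) ↦ (V(q′) − p′, q′), so a stack of such layers is exactly invertible in closed form, which is what makes them usable as normalizing-flow layers.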
From the root folder run:
python train.py --config=config/config.py:train --config.wandb.wandb_log=False --config.dataset.batch_size=150 --config.dataset.train_lines=150 --config.dataset.num_lines=200 --config.model.num_layers_flow=5 --config.model.num_layers=2 --config.model.num_hidden=32 --config.model.d=1 --config.train.num_epochs=10000 --config.train.lr=0.001 --config.seed=42
In particular:
- `--config.wandb.wandb_log=False`: Wandb logging (disabled here).
- `--config.dataset.train_lines=150`: use the first 150 lines in `./pendulum_data` as training points.
- `--config.dataset.batch_size=150`: the training batch size. In this case it is full-batch.
- `--config.dataset.num_lines=200`: the total number of points in `./pendulum_data`. In this case 50 points will be used for evaluation.
- `--config.model.num_layers_flow=5`: number of layers in the flow.
- `--config.model.num_layers=2`: number of layers in the MLP used to construct flow layers.
To test the trained model:
python test.py --config=config/config.py:test_last --config.model.num_layers_flow=5 --config.model.num_layers=2 --config.model.num_hidden=32 --config.model.d=1
Checkpoints from training are saved in `./checkpoints`, named by date and time. `--config=config/config.py:test_last` runs the most recent one. To run a specific checkpoint, use, for example, `--config=config/config.py:test_last --config.date_and_time="2023-10-10_19-15-40"`.
Distributed under the MIT License. See `LICENSE` for more information.