
Commit cbc2382

cleans up deep arch and 3term loss examples (nu learning will require more training)

alizma committed Dec 2, 2023
1 parent 422b454
Showing 7 changed files with 49 additions and 30 deletions.
1 change: 0 additions & 1 deletion drdmannturb/__init__.py
@@ -39,5 +39,4 @@
     OnePointSpectra,
     OnePointSpectraDataGenerator,
     PowerSpectraRDT,
-    SpectralCoherence,
 )
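
Since ``SpectralCoherence`` is dropped from the package root's re-exports here, importing it from the top level will no longer resolve (assuming it is not re-exported elsewhere in ``__init__.py``). The remaining re-exports are unaffected:

from drdmannturb import (
    OnePointSpectra,
    OnePointSpectraDataGenerator,
    PowerSpectraRDT,
)
# from drdmannturb import SpectralCoherence  # removed by this commit; no longer resolves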
5 changes: 4 additions & 1 deletion drdmannturb/spectra_fitting/calibration.py
@@ -7,7 +7,7 @@
 from collections.abc import Iterable
 from functools import partial
 from pathlib import Path
-from typing import Any, Optional, Union
+from typing import Optional, Union
 
 import matplotlib.pyplot as plt
 import numpy as np
@@ -454,6 +454,9 @@ def closure():
         print("=" * 40)
         print(f"Spectra fitting concluded with final loss: {self.loss.item()}")
 
+        if self.prob_params.learn_nu and hasattr(self.OPS, "tauNet"):
+            print(f"Learned nu value: {self.OPS.tauNet.Ra.nu.item()}")
+
         return self.parameters
 
     # ------------------------------------------------
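
The new report fires only when ``learn_nu`` was requested and the fitted ``OnePointSpectra`` object actually carries a ``tauNet``. A minimal sketch of a run that would trigger it, mirroring the example set-up later in this commit; the ``k1`` grid, the ``DataPoints`` construction, and the physical parameter values below are illustrative assumptions, not values taken from the source:

import torch
import torch.nn as nn

from drdmannturb.parameters import (
    LossParameters,
    NNParameters,
    PhysicalParameters,
    ProblemParameters,
)
from drdmannturb.spectra_fitting import CalibrationProblem, OnePointSpectraDataGenerator

# Assumed coarse k1 grid and placeholder data points (second tuple entry
# as in the examples' DataPoints note).
domain = torch.logspace(-1, 2, 20)
DataPoints = [(k1, 1.0) for k1 in domain]

pb = CalibrationProblem(
    nn_params=NNParameters(
        nlayers=2,
        hidden_layer_sizes=[10, 10],
        activations=[nn.ReLU(), nn.ReLU()],
    ),
    prob_params=ProblemParameters(nepochs=10, learn_nu=True),  # request learning of nu
    loss_params=LossParameters(alpha_pen2=1.0, beta_reg=1.0e-5),
    phys_params=PhysicalParameters(
        L=0.59, Gamma=3.9, sigma=3.2, Uref=21.0, domain=domain  # illustrative values
    ),
)

Data = OnePointSpectraDataGenerator(data_points=DataPoints).Data
optimal_parameters = pb.calibrate(data=Data)
# With learn_nu=True and a tauNet-backed eddy lifetime, the log now ends with:
#   Learned nu value: <learned exponent>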
2 changes: 1 addition & 1 deletion drdmannturb/spectra_fitting/data_generator.py
@@ -1,6 +1,6 @@
 import warnings
 from pathlib import Path
-from typing import Any, Iterable, Optional, Tuple, Union
+from typing import Iterable, Optional, Tuple, Union
 
 import matplotlib.pyplot as plt
 import numpy as np
4 changes: 3 additions & 1 deletion examples/plot_custom_data_fit.py
@@ -96,5 +96,7 @@
 # %%
 pb.plot()
 
+# %%
+# The training logs can be accessed from the logging directory
+# with Tensorboard utilities, but we also provide a simple internal utility for a single
+# training log plot.
 pb.plot_losses(run_number=0)
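
For readers who want the raw series rather than the built-in plot, a hedged sketch using TensorBoard's event reader; the log directory path and the scalar tag are assumptions, so check the run's output directory and ``Tags()`` for the actual values:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/")  # assumed logging directory
ea.Reload()  # load the event files from disk
print(ea.Tags()["scalars"])  # list the scalar series that were logged
for event in ea.Scalars(ea.Tags()["scalars"][0]):  # first logged series
    print(event.step, event.value)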
28 changes: 17 additions & 11 deletions examples/plot_synthetic_3term_loss.py
@@ -4,8 +4,10 @@
 ==================================================
 This example is nearly identical to the Synthetic Data fit, however we
-use a more sophisticated loss function, introducing now a regularization
-term.
+use a more sophisticated loss function, introducing an additional first-order
+penalty term. The previous synthetic fit relied only on MSE loss and a second-order penalty.
+All other models remain the same: Mann turbulence under the Kaimal spectra.
 
 See again the `original DRD paper <https://arxiv.org/abs/2107.11046>`_.
 """
@@ -17,7 +19,6 @@
 # First, we import the packages we need for this example. Additionally, we choose to use
 # CUDA if it is available.
 
-import numpy as np
 import torch
 import torch.nn as nn
 
@@ -27,7 +28,6 @@
     PhysicalParameters,
     ProblemParameters,
 )
-
 from drdmannturb.spectra_fitting import CalibrationProblem, OnePointSpectraDataGenerator
 
 device = "cuda" if torch.cuda.is_available() else "cpu"
@@ -68,7 +68,6 @@
 )
 
 ##############################################################################
-# %%
 # In the following cell, we construct our :math:`k_1` data points grid and
 # generate the values. ``Data`` will be a tuple ``(<data points>, <data values>)``.
 # It is worth noting that the second element of each tuple in ``DataPoints`` is the
@@ -79,18 +78,25 @@
 Data = OnePointSpectraDataGenerator(data_points=DataPoints).Data
 
 ##############################################################################
-# %%
-# Now, we fit our model. ``CalibrationProblem.calibrate()`` takes the tuple ``Data``
+# Calibration
+# -----------
+# Now, we fit our model. ``CalibrationProblem.calibrate`` takes the tuple ``Data``
 # which we just constructed and performs a typical training loop.
 optimal_parameters = pb.calibrate(data=Data)
 
 ##############################################################################
-# %%
-# Lastly, we'll used built-in plotting utilities to see the fit result.
+# Plotting
+# --------
+# ``DRDMannTurb`` offers built-in plotting utilities and Tensorboard integration
+# which make visualizing results and various aspects of training performance
+# very simple. The training logs can be accessed from the logging directory
+# with Tensorboard utilities, but we also provide a simple internal utility for a single
+# training log plot.
+#
+# The following will plot our fit.
 pb.plot()
 
 ##############################################################################
-# %%
-# This plots the loss function terms as specified, each multiplied by the
+# This plots out the loss function terms as specified, each multiplied by the
 # respective coefficient hyperparameter.
 pb.plot_losses(run_number=0)
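
Schematically, the loss used here is the MSE term plus first- and second-order penalty terms and a regularization term, each scaled by its coefficient hyperparameter. A hedged sketch of the corresponding ``LossParameters``; ``alpha_pen2`` and ``beta_reg`` appear verbatim in this commit's deep-architecture example, while ``alpha_pen1`` is an assumed name for the new first-order penalty coefficient:

from drdmannturb.parameters import LossParameters

loss_params = LossParameters(
    alpha_pen1=1.0,   # assumed name: first-order penalty coefficient
    alpha_pen2=1.0,   # second-order penalty coefficient
    beta_reg=1.0e-5,  # regularization coefficient
)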
32 changes: 19 additions & 13 deletions examples/plot_synthetic_deep_arch.py
@@ -4,7 +4,12 @@
 =====================================
 This example is nearly identical to the Synthetic Data fit, however we use
-a more complicated neural network architecture.
+a different neural network architecture in hopes of obtaining a better spectra fitting.
+The same set-up using the Mann model under the Kaimal spectra is used here as in other synthetic
+data fitting examples. The only difference here is in the neural network architecture.
+Although certain combinations of activation functions, such as ``GELU``, result in considerably
+improved spectra fitting and terminal loss values, the resulting eddy lifetime functions are
+usually non-physical.
 
 See again the `original DRD paper <https://arxiv.org/abs/2107.11046>`_.
 """
@@ -16,11 +21,9 @@
 #
 # First, we import the packages we need for this example. Additionally, we choose to use
 # CUDA if it is available.
-import numpy as np
 import torch
 import torch.nn as nn
 
-from drdmannturb.enums import DataType, EddyLifetimeType, PowerSpectraType
 from drdmannturb.parameters import (
     LossParameters,
     NNParameters,
@@ -29,7 +32,6 @@
 )
 from drdmannturb.spectra_fitting import CalibrationProblem, OnePointSpectraDataGenerator
 
-
 device = "cuda" if torch.cuda.is_available() else "cpu"
 
 if torch.cuda.is_available():
@@ -57,17 +59,21 @@
 #
 # Compared to the first Synthetic Fit example, as noted already, we are using
 # a more complicated neural network architecture. This time, specifically, our
-# network will have 5 layers of width 5, 10, 20, 10, 5 respectively, and we
-# alternate between ``GELU`` and ``RELU`` activations. Additionally, we have
+# network will have 4 hidden layers, each of width 20, and we
+# mix ``ReLU`` and ``GELU`` activations. We have
 # prescribed more Wolfe iterations.
+# Finally, this task is considerably more difficult than before since the exponent of
+# the eddy lifetime function :math:`\nu` is to be learned. Much more training
+# may be necessary to obtain a close fit approximating :math:`\nu = -1/3`.
 
 pb = CalibrationProblem(
     nn_params=NNParameters(
-        nlayers=5,
-        hidden_layer_sizes=[5, 10, 20, 10, 5],
-        activations=[nn.GELU(), nn.ReLU(), nn.GELU(), nn.ReLU(), nn.GELU()],
+        nlayers=4,
+        # Specifying the activations is done similarly.
+        hidden_layer_sizes=[20, 20, 20, 20],
+        activations=[nn.ReLU(), nn.GELU(), nn.GELU(), nn.ReLU()],
     ),
-    prob_params=ProblemParameters(nepochs=5, wolfe_iter_count=30, learn_nu=True),
+    prob_params=ProblemParameters(nepochs=20, wolfe_iter_count=50, learn_nu=True),
     loss_params=LossParameters(alpha_pen2=1.0, beta_reg=1.0e-5),
     phys_params=PhysicalParameters(
         L=L, Gamma=Gamma, sigma=sigma, Uref=Uref, domain=domain
@@ -103,7 +109,7 @@

 ##############################################################################
 # This plots the loss function terms as specified, each multiplied by the
-# respective coefficient hyperparameter.
-
-
+# respective coefficient hyperparameter. The training logs can be accessed from the logging directory
+# with Tensorboard utilities, but we also provide a simple internal utility for a single
+# training log plot.
 pb.plot_losses(run_number=0)
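
Because ``learn_nu=True`` here, the ``calibration.py`` change earlier in this commit prints the learned exponent when training concludes. It can also be read directly off the fitted model via the same attribute path, assuming ``OPS`` is reachable from the problem object as it is inside ``calibrate``:

# Attribute path taken from the calibration.py change in this commit.
if hasattr(pb.OPS, "tauNet"):
    print(f"Learned nu value: {pb.OPS.tauNet.Ra.nu.item()}")  # roughly -1/3 with enough training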
7 changes: 5 additions & 2 deletions examples/plot_synthetic_fit.py
@@ -69,7 +69,8 @@
 #
 # Using the ``ProblemParameters`` dataclass, we indicate the eddy lifetime function
 # :math:`\tau` substitution, that we do not intend to learn the exponent :math:`\nu`,
-# and that we would like to train for 10 epochs.
+# and that we would like to train for 10 epochs, or until the loss tolerance ``tol``
+# (0.001 by default) is reached, whichever comes first.
 #
 # Having set our physical parameters above, we need only pass these to the
 # ``PhysicalParameters`` dataclass just as is done below.
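
A short sketch making that stopping behavior explicit; the ``tol`` keyword and its 0.001 default come from the updated comment above, and passing it alongside the other fields is an assumption:

from drdmannturb.parameters import ProblemParameters

# At most 10 epochs, stopping early once the loss falls below tol.
prob_params = ProblemParameters(nepochs=10, learn_nu=False, tol=1e-3)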
@@ -131,7 +132,9 @@
 
 ##############################################################################
 # This plots out the loss function terms as specified, each multiplied by the
-# respective coefficient hyperparameter.
+# respective coefficient hyperparameter. The training logs can be accessed from the logging directory
+# with Tensorboard utilities, but we also provide a simple internal utility for a single
+# training log plot.
 pb.plot_losses(run_number=0)
 ##############################################################################
 # Save Model with Problem Metadata
