Hello, thanks for your amazing work!
I am wondering why the neural network output is scaled by the standard deviation and the sign is flipped in the VP score function, but NOT in the VE score function?
Thanks a lot!
def score_fn(x, t):
  # Scale neural network output by standard deviation and flip sign
  if continuous or isinstance(sde, sde_lib.subVPSDE):
    # For VP-trained models, t=0 corresponds to the lowest noise level
    # The maximum value of time embedding is assumed to 999 for
    # continuously-trained models.
    labels = t * 999
    score = model_fn(x, labels)
    std = sde.marginal_prob(torch.zeros_like(x), t)[1]
  else:
    # For VP-trained models, t=0 corresponds to the lowest noise level
    labels = t * (sde.N - 1)
    score = model_fn(x, labels)
    std = sde.sqrt_1m_alphas_cumprod.to(labels.device)[labels.long()]

  score = -score / std[:, None, None, None]
  return score
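For reference, the last line corresponds to the score of the VP perturbation kernel. This is a sketch of that relation, assuming the VP network s_θ is trained to predict the noise ε (DDPM-style) and σ_t is the marginal standard deviation returned by sde.marginal_prob:

p_t(x \mid x_0) = \mathcal{N}\!\left(x;\ \sqrt{\bar\alpha_t}\, x_0,\ (1-\bar\alpha_t)\, I\right),
\qquad
\nabla_x \log p_t(x \mid x_0) = -\frac{x - \sqrt{\bar\alpha_t}\, x_0}{1-\bar\alpha_t} = -\frac{\epsilon}{\sigma_t},
\qquad
\sigma_t = \sqrt{1-\bar\alpha_t}

so the score is -s_θ(x, t)/σ_t, which is what `score = -score / std[:, None, None, None]` computes in the snippet above.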
Implementation of VP:
Implementation of VE:
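For comparison, here is a minimal sketch of what the VE branch of the same get_score_fn looks like (reconstructed from memory, so treat the exact handling of labels, sde.T, and the rounding as assumptions rather than a verbatim quote). The point of contrast is that the network output is returned as the score directly, without dividing by the standard deviation or flipping the sign:

import torch  # sde, model_fn, and continuous come from the enclosing get_score_fn, as in the VP snippet

def score_fn(x, t):
  if continuous:
    # For continuously-trained VE models, the conditioning signal is the noise scale sigma(t) itself.
    labels = sde.marginal_prob(torch.zeros_like(x), t)[1]
  else:
    # For VE-trained models, t=0 corresponds to the highest noise level.
    labels = sde.T - t
    labels *= sde.N - 1
    labels = torch.round(labels).long()

  # The model output is used as the score as-is: no division by std, no sign flip.
  score = model_fn(x, labels)
  return score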