Hyperparameter optimisation #3
Hi @caponetto, may I know why, in prior.py, degrees_of_freedom = data.shape[1] + 1? Can the scale factor be understood as a learning rate in this case? Thank you very much.
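For context, here is a hedged sketch of where these two hyperparameters would sit if prior.py follows the conjugate Normal-inverse-Wishart setup from the original Bayesian hierarchical clustering paper (that it does is my assumption):

$$\Sigma \sim \mathcal{IW}(\nu, S), \qquad \nu = d + 1, \qquad S = \operatorname{cov}(X) / g,$$

where d = data.shape[1] is the data dimensionality. Under this reading, g scales the prior scatter matrix, i.e. how tightly the prior constrains the cluster covariances, rather than acting as a learning rate, since the model is fit through marginal-likelihood computations rather than gradient updates.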
Hi @caponetto, may I know whether you have an idea of how to implement the hyperparameter optimisation? Thank you very much.
Hi @GMCobraz, unfortunately I don't have details about the behavior of the hyperparameters. Also, note that the example was taken from the original paper. Regarding the hyperparameter optimization itself, I suggest you take a look at Random Search [1].
[1] https://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
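For illustration, a minimal random-search sketch in Python is below. It assumes a hypothetical evaluate(g, alpha) function that fits the model with the given hyperparameters and returns a score (e.g. held-out log-likelihood); the parameter names and search ranges are placeholders, not part of this repository.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_search(evaluate, n_trials=50):
    """Randomly sample hyperparameter settings and keep the best-scoring one."""
    best_score, best_params = -np.inf, None
    for _ in range(n_trials):
        # Sample on a log scale, as recommended by Bergstra & Bengio (2012).
        params = {
            "g": 10 ** rng.uniform(-3, 1),      # scale factor (placeholder range)
            "alpha": 10 ** rng.uniform(-2, 2),  # cluster concentration (placeholder range)
        }
        score = evaluate(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Compared with a grid, random sampling covers each individual hyperparameter's range more densely for the same budget, which is the main argument of the referenced paper.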
Dear caponetto and others,
Good day,
I am working on the hyperparameter optimisation and have some questions.
May I know what the g refers to in prior.py?
scatter_matrix = (data_matrix_cov / g).T
Thank you very much.
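For discussion, here is a minimal, purely illustrative reconstruction of how these hyperparameters might be assembled. It is not the repository's actual prior.py, the default value of g is a placeholder, and the reading of g as a prior-scatter scale factor is my assumption.

```python
import numpy as np

def build_prior(data, g=0.001):
    """Illustrative Normal-inverse-Wishart style hyperparameters for d-dimensional data."""
    d = data.shape[1]
    # d + 1 is a common weakly informative choice of degrees of freedom
    # for an (inverse-)Wishart prior over d x d covariance matrices.
    degrees_of_freedom = d + 1
    # Dividing the empirical covariance by g rescales the prior scatter matrix,
    # so g controls how strongly the prior constrains the cluster covariances;
    # it is not a learning rate.
    data_matrix_cov = np.cov(data, rowvar=False)
    scatter_matrix = (data_matrix_cov / g).T
    return degrees_of_freedom, scatter_matrix
```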