[Bug] Input normalization makes GP fail to fit data #2342
Replies: 1 comment
Not a BoTorch dev, but from experience, Ackley is very tricky to fit.

TL;DR: Most likely not a bug, but the GP hyperparameter priors' interaction with input/output scaling.

Long reply: If you don't normalize the inputs from [-2, 2] to [0, 1], you effectively suggest that the variation in the objective is 4 times as rapid under the lengthscale prior. This in turn makes it more likely that the GP attributes the variation to the function rather than interpreting it as noise. If you don't normalize the output, a similar thing happens: there is more signal, so you are less likely to fit a high-noise GP.

My guess would be that the second plot (the "poor" fit) results in better optimization, since the regularization we get from the noise tends to aid optimization in the case of Ackley. If you removed the priors on the hyperparameters, these examples should all give you the same fit.
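To make the scaling point concrete, here is a small sketch (mine, not from the reply; the kernel choice and lengthscale values are arbitrary illustrations): shrinking the inputs from [-2, 2] to [0, 1] is equivalent to inflating the lengthscale by the same factor of 4, so a fixed lengthscale prior implies very different smoothness on raw vs. normalized inputs.

```python
import torch
from gpytorch.kernels import RBFKernel

# Illustration only: rescaling inputs by 1/4 is equivalent to using a
# 4x larger lengthscale, so the same lengthscale prior "means" different
# things on [-2, 2] vs. [0, 1] inputs.
x_raw = torch.linspace(-2, 2, 5, dtype=torch.double).unsqueeze(-1)
x_norm = (x_raw + 2) / 4  # min-max normalize to [0, 1]

k_raw = RBFKernel().double()
k_raw.lengthscale = 0.4   # lengthscale on the raw scale
k_norm = RBFKernel().double()
k_norm.lengthscale = 0.1  # 4x smaller lengthscale on the normalized scale

# The two kernel matrices are numerically identical.
print(torch.allclose(k_raw(x_raw).to_dense(), k_norm(x_norm).to_dense()))
```

So a prior centered on lengthscales appropriate for the unit cube looks relatively "short" on the 4x wider raw domain, pulling the unnormalized model toward fitting the wiggles rather than explaining them as noise.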
🐛 Bug: input normalization
I tried to fit a GP on the Ackley function with 200 points. The GP fit very well without input normalization, but after adding input normalization the GP posterior surface became smoother, resulting in a convex surface.
[Figures: GP posterior surfaces for four configurations — Vanilla GP; GP + Input normalization; GP + Output standardization; GP + Input normalization + Output standardization]
To reproduce
**Code snippet to reproduce**
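The original snippet was not preserved here; a minimal sketch of the described setup (200 points on 2D Ackley, with and without transforms) might look like the following. It assumes a recent BoTorch version — `SingleTaskGP`, `Normalize`, `Standardize`, and `fit_gpytorch_mll` are standard BoTorch, but the seed, sampling, and printed diagnostics are illustrative choices, not the reporter's original code. Note that very recent BoTorch releases may standardize outputs by default, which would change the "vanilla" case.

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize, Standardize
from botorch.test_functions import Ackley
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)
ackley = Ackley(dim=2)

# 200 training points sampled uniformly from [-2, 2]^2
train_X = 4 * torch.rand(200, 2, dtype=torch.double) - 2
train_Y = ackley(train_X).unsqueeze(-1)

configs = {
    "vanilla": {},
    "input normalization": {"input_transform": Normalize(d=2)},
    "output standardization": {"outcome_transform": Standardize(m=1)},
    "both": {
        "input_transform": Normalize(d=2),
        "outcome_transform": Standardize(m=1),
    },
}

for name, kwargs in configs.items():
    model = SingleTaskGP(train_X, train_Y, **kwargs)
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    fit_gpytorch_mll(mll)
    # Compare fitted noise levels: a large noise relative to the signal
    # is what produces the smooth, "convex-looking" posterior surface.
    print(f"{name}: noise = {model.likelihood.noise.mean().item():.4f}")
```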