Gradients for acquisition functions are wrong when `normalizer=True`
Hi,
It appears that the gradients of the acquisition functions are wrong when `normalizer=True` is used in the model definition. This is because `model.predictive_gradients` in GPy (which is called by `model.get_prediction_gradients` in Emukit) does not account for normalization. I raised this issue here and made a pull request to fix it.
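Concretely, GPy's normalizer standardizes the targets, so the de-normalized predictive mean is the normalized mean rescaled by the output standard deviation, and its gradient must be rescaled the same way. Here is a minimal sketch of the mismatch (the data, seed, and variable names are illustrative, not from the original report): it compares `model.predictive_gradients` against a central finite difference of `model.predict`, which disagree on GPy versions without the fix:

```python
import numpy as np
import GPy

np.random.seed(0)
X = np.random.rand(20, 2)
Y = 100.0 * np.random.rand(20, 1)  # large output scale makes the bug obvious
model = GPy.models.GPRegression(X, Y, normalizer=True)

x = np.random.rand(1, 2)
dmu_dx, _ = model.predictive_gradients(x)  # analytic gradient of the mean

# Central finite difference of the de-normalized predictive mean
eps = 1e-6
fd = np.zeros(2)
for i in range(2):
    xp, xm = x.copy(), x.copy()
    xp[0, i] += eps
    xm[0, i] -= eps
    fd[i] = (model.predict(xp)[0] - model.predict(xm)[0]).item() / (2 * eps)

print(dmu_dx[0, :, 0])  # off by the normalizer's output scale on unfixed GPy
print(fd)               # finite-difference reference
```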
I don’t think Emukit enforces or recommends using `normalizer=False` anywhere. This is problematic because it is up to the user to define their own model “upstream” of the optimization loop, and I suspect that many people are tempted to use `normalizer=True` without knowing that the gradients of their acquisition function will be wrong.
If the pull request I made is accepted, then nothing more is needed except telling users that they should install the latest (devel?) version of GPy.
If I am missing anything, please let me know.
Thanks, Antoine
```python
import numpy as np
import GPy
from GPy.models import GradientChecker
from emukit.model_wrappers.gpy_model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.acquisitions import ProbabilityOfImprovement

M, Q = 15, 3
X = np.random.rand(M, Q)
Y = np.random.rand(M, 1)
x = np.random.rand(1, Q)

# Build a GPy model with output normalization enabled
model = GPy.models.GPRegression(X=X, Y=Y, normalizer=True)
emukit_model = GPyModelWrapper(model)
acq = ProbabilityOfImprovement(emukit_model)

# Check the analytic acquisition gradients against finite differences;
# this assertion fails on GPy versions without the fix
g = GradientChecker(lambda x: acq.evaluate_with_gradients(x)[0],
                    lambda x: acq.evaluate_with_gradients(x)[1],
                    x, 'x')
assert g.checkgrad()
```
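Until a fixed GPy release is available, one possible workaround (a sketch, reusing the imports and names from the snippet above) is to standardize `Y` manually and build the model with `normalizer=False`, so that `predictive_gradients` is exact:

```python
# Workaround sketch: standardize Y by hand and disable GPy's normalizer,
# so predictive_gradients (and hence the acquisition gradients) are exact.
Y_mean, Y_std = Y.mean(), Y.std()
model = GPy.models.GPRegression(X=X, Y=(Y - Y_mean) / Y_std, normalizer=False)
emukit_model = GPyModelWrapper(model)
acq = ProbabilityOfImprovement(emukit_model)
```

The optimization then runs in the standardized output space; probability of improvement is invariant under this affine rescaling of the outputs, so the optimizer's decisions should be unaffected.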
Top GitHub Comments
Hi Antoine,
Thank you for advising us and for the fix!
FYI: #806 was merged into GPy’s devel branch.