Use model to calculate expected improvement for a list of candidates
The idea would be to constrain Ax to only “suggest next experiments” from a predefined list of candidates using a model output (e.g. from `optimize()`) which was trained on data without that constraint. I would expect this to take the form of calculating the expected improvement for each of the candidates and returning the candidate with the maximum expected improvement. Does this seem feasible? How could I use `model` to calculate expected improvement for an arbitrary candidate?
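One plausible way to do this, sketched here at the BoTorch level rather than through Ax itself: fit a GP surrogate on the existing observations, evaluate analytic Expected Improvement at every entry of the predefined candidate list, and take the argmax. The tensors `train_X`, `train_Y`, and `candidates` below are illustrative placeholders, not anything from the issue.

```python
# Minimal sketch of scoring a fixed candidate list with analytic EI in BoTorch.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition.analytic import ExpectedImprovement
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 3, dtype=torch.double)      # observed inputs
train_Y = torch.rand(20, 1, dtype=torch.double)      # observed outcomes (maximization)
candidates = torch.rand(100, 3, dtype=torch.double)  # predefined candidate list

# Fit a GP surrogate on the unconstrained training data.
gp = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

# Analytic EI w.r.t. the best observed value; input shape (n, 1, d) gives one
# EI value per candidate, and the argmax is the suggested next experiment.
ei = ExpectedImprovement(model=gp, best_f=train_Y.max())
with torch.no_grad():
    ei_values = ei(candidates.unsqueeze(1))
best_candidate = candidates[ei_values.argmax()]
```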
Issue Analytics
- State:
- Created 2 years ago
- Comments: 27 (25 by maintainers)
Top Results From Across the Web

Bayesian Optimization - AWS
Bayesian Optimization employs a probabilistic model to optimize the fitness ... The value is calculated using the expected improvement method.

How to Implement Bayesian Optimization from Scratch in Python
In this case, we will use the simpler Probability of Improvement method, which is calculated as the normal cumulative probability of the ...

A Conceptual Explanation of Bayesian Hyperparameter ...
The first shows an initial estimate of the surrogate model — in black with ... In this post, we will focus on TPE ...

Acquisition functions — GPflowOpt 0.1.1 documentation
This acquisition function is the expectation of the improvement over the current best observation w.r.t. the predictive distribution. The definition is closely ...

botorch.acquisition - Bayesian Optimization in PyTorch
Computes classic Expected Improvement over the current best observed value, using the analytic formula for a Normal posterior distribution.
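For reference, the analytic formula that last result refers to is the standard closed-form Expected Improvement under a Gaussian posterior (this is the textbook expression, not something taken from the issue itself):

```latex
% Analytic Expected Improvement for maximization with best observed value f*,
% posterior mean \mu(x), posterior std \sigma(x); \Phi and \varphi are the
% standard normal CDF and PDF.
\[
z(x) = \frac{\mu(x) - f^{*}}{\sigma(x)}, \qquad
\mathrm{EI}(x) = \mathbb{E}\!\left[\max\bigl(f(x) - f^{*},\, 0\bigr)\right]
               = \sigma(x)\,\bigl(z(x)\,\Phi(z(x)) + \varphi(z(x))\bigr),
\]
with $\mathrm{EI}(x) = 0$ when $\sigma(x) = 0$.
```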
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Building off of @Balandat’s suggestion, there is also a way to evaluate the acquisition function directly on a point in the experiment search space via `ModelBridge.evaluate_acquisition_function`, which wraps `BoTorchModel.evaluate_acquisition_function`! The model will need to be `Models.BOTORCH_MODULAR` to make use of this, but it should work. For `AxClient`, you’d do this like so, @sgbaird:
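A rough sketch of that flow, assuming the `ModelBridge` API described here; the candidate parameterizations are hypothetical, and the exact arguments of `evaluate_acquisition_function` (see the note below about `search_space_digest` and `objective_weights`) vary by Ax version:

```python
# Rough sketch, under assumptions: evaluate the acquisition function on
# arbitrary points in the search space via ModelBridge.evaluate_acquisition_function.
from ax.core.observation import ObservationFeatures
from ax.modelbridge.registry import Models

# ax_client is an AxClient whose experiment already has completed trials.
model_bridge = Models.BOTORCH_MODULAR(
    experiment=ax_client.experiment,
    data=ax_client.experiment.fetch_data(),
)

# Predefined candidate list, expressed as points in the search space
# (parameter names here are made up for illustration).
candidate_features = [
    ObservationFeatures(parameters={"x1": 0.1, "x2": 0.7}),
    ObservationFeatures(parameters={"x1": 0.5, "x2": 0.2}),
]

# Older Ax versions may also require search_space_digest / objective_weights,
# as noted below.
acqf_values = model_bridge.evaluate_acquisition_function(
    observation_features=candidate_features,
)
best = candidate_features[acqf_values.index(max(acqf_values))]  # assumes a list of floats
```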
I think it’s actually unnecessary for `evaluate_acquisition_function` to require `search_space_digest` and `objective_weights`; we’ll fix that (so you don’t need to manually construct them and so doing this is a lot simpler), but in the meantime this code block should work!

I don’t believe the Service API currently supports this. We do have a `get_model_predictions` interface, but interestingly that function errors out if called before candidates have been generated: https://github.com/facebook/Ax/blob/main/ax/service/ax_client.py#L931-L950

I guess it wouldn’t be hard to just fit the model instead of erroring out; that way, if one calls `get_model_predictions` on one of the existing arms, the model would be fitted on the backend (necessary, and less costly than generating a candidate) and one could compute the acquisition function after. It seems that currently `get_model_predictions` also only works on existing arms; we should probably add an API for predicting “out-of-sample” configurations. cc @lena-kashtelyan
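For illustration, a minimal sketch of the current behaviour described above, assuming a hypothetical, already-configured `ax_client` with completed trials: `get_model_predictions` only succeeds once a model has been fit, which currently happens as a side effect of generating a candidate, and it only covers existing arms.

```python
# Sketch of the current Service API limitation: generate a candidate first so a
# model gets fit, then query predictions for the existing arms.
parameters, trial_index = ax_client.get_next_trial()   # fits the model as a side effect
predictions = ax_client.get_model_predictions()        # predictions for existing arms only
```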