
Use model to calculate expected improvement for a list of candidates

See original GitHub issue

The idea would be to constrain Ax to only “suggest next experiments” from a predefined list of candidates using a model output (e.g. from optimize()) which was trained on data without that constraint. I would expect this to take the form of calculating the expected improvement for each of the candidates and returning the candidate with the maximum expected improvement. Does this seem feasible? How could I use model to calculate expected improvement for an arbitrary candidate?
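
For reference, the underlying pattern is to score every allowed candidate with the acquisition function and pick the argmax. Below is a minimal sketch of that idea using BoTorch directly rather than the Ax API discussed in the comments; the data, shapes, and the use of fit_gpytorch_mll are illustrative assumptions, not the thread's recommended approach.

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from gpytorch.mlls import ExactMarginalLogLikelihood

# Illustrative observed data: train_X is (n, d), train_Y is (n, 1).
train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = (train_X.sum(dim=-1, keepdim=True) - 1.5).pow(2).neg()

# Fit a GP surrogate on the unconstrained data.
gp = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

# Predefined list of allowed candidates, shape (num_candidates, d).
candidates = torch.rand(100, 3, dtype=torch.double)

# Analytic EI over the current best observed value; it expects a
# (num_candidates, 1, d) batch of single points.
ei = ExpectedImprovement(model=gp, best_f=train_Y.max())
ei_values = ei(candidates.unsqueeze(1))

# `best` is the candidate with the maximum expected improvement.
best = candidates[ei_values.argmax()]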

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 27 (25 by maintainers)

Top GitHub Comments

3 reactions
lena-kashtelyan commented, Jan 6, 2022

Building off of @Balandat’s suggestion, there is also a way to evaluate the acquisition function directly on a point in the experiment search space via ModelBridge.evaluate_acquisition_function, which wraps BoTorchModel.evaluate_acquisition_function! The model will need to be Models.BOTORCH_MODULAR to make use of this, but it should work. With AxClient, you’d do this like so, @sgbaird:

from ax.core.arm import Arm
from ax.core.observation import ObservationFeatures
from ax.modelbridge.modelbridge_utils import (
    extract_objective_weights,
    extract_search_space_digest,
)

model_bridge = ax_client.generation_strategy.model
transformed_gen_args = model_bridge._get_transformed_gen_args(
    search_space=ax_client.experiment.search_space,
)
search_space_digest = extract_search_space_digest(
    search_space=transformed_gen_args.search_space,
    param_names=model_bridge.parameters,
)
objective_weights = extract_objective_weights(
    objective=ax_client.experiment.optimization_config.objective,
    outcomes=model_bridge.outcomes,
)

# `acqf_values` is a list of floats (since we can evaluate the acqf for multiple points at once);
# the ordering corresponds to the order of points in the `observation_features` input.
acqf_values = model_bridge.evaluate_acquisition_function(
    # Each `ObservationFeatures` below represents one point in the experiment (untransformed) search space:
    observation_features=[ObservationFeatures.from_arm(Arm(parameters={"x": ..., ...})), ...],
    search_space_digest=search_space_digest,
    objective_weights=objective_weights,
)

I think it’s actually unnecessary for evaluate_acquisition_function to require search_space_digest and objective_weights; we’ll fix that so you don’t need to construct them manually and doing this becomes a lot simpler. In the meantime, this code block should work!
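
To then answer the original question (return the candidate with the maximum expected improvement), the returned acqf_values just need an argmax. A minimal sketch, assuming a hypothetical candidates list of parameterization dicts in the same order as the observation_features passed above:

# `candidates` is a hypothetical list of parameterization dicts, ordered the same way as the
# `observation_features` passed to `evaluate_acquisition_function` above.
best_index = max(range(len(acqf_values)), key=lambda i: acqf_values[i])
best_candidate = candidates[best_index]
print(f"Suggested next experiment: {best_candidate} (acqf value: {acqf_values[best_index]:.4f})")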

2 reactions
Balandat commented, Feb 12, 2022

Given that one uses the Service API, is there a simple way of triggering model fitting / updating (after having manually attached some trials) without actually requesting a new trial suggestion?

I don’t believe the Service API currently supports this. We do have a get_model_predictions interface, but interestingly that function errors out if called before candidates have been generated: https://github.com/facebook/Ax/blob/main/ax/service/ax_client.py#L931-L950

I guess it wouldn’t be hard to just fit the model instead of erroring out. That way, if one calls get_model_predictions on one of the existing arms, the model would be fitted on the backend (necessary, and less costly than generating a candidate), and one could compute the acquisition function afterwards.

It seems that currently get_model_predictions also only works on existing arms; we should probably add an API for predicting “out-of-sample” configurations. cc @lena-kashtelyan
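
Until such an API exists, one possible workaround is to drop down to the fitted ModelBridge and call its predict method on out-of-sample points. This is a sketch, not an official Service API path; it assumes a model has already been fitted (e.g. as obtained in the snippet above) and uses placeholder parameter values:

from ax.core.observation import ObservationFeatures

# Assumes a fitted model bridge, e.g. `model_bridge = ax_client.generation_strategy.model` as above.
# `means` maps each metric name to a list of posterior means (one per point);
# `covariances` maps metric-name pairs to the corresponding posterior covariances.
means, covariances = model_bridge.predict(
    observation_features=[ObservationFeatures(parameters={"x": ..., ...})]
)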
