
[Question] How to configure model with custom settings/kwargs in generation strategy


Thank you for your continuous support.

I have two follow-up questions about #278.

  1. How do I pass my factory functions to get_botorch? I understand that I can use BoTorch with Ax by setting factory functions (e.g. _get_and_fit_simple_custom_gp for a custom GP model and get_scalarized_UCB for a custom acquisition function) as get_botorch’s model_constructor and acqf_constructor arguments, respectively. @lena-kashtelyan told me I can use GenerationStep(model=get_botorch, ...). That is very simple, but I don’t know how to pass my factory functions through it to get_botorch (see the sketch after this list).

  2. Is there any way to confirm which kernel and acquisition function are used? I changed my GenerationStrategy to use Models.SOBOL and get_botorch, and the optimization results changed. But I could not tell whether the kernel and acquisition function had actually changed, since I had not passed my factory functions to get_botorch. How can I confirm which kernel and acquisition function are used during optimization?
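
As it turns out later in this thread, one way to do this is to pass the factories through the step’s model_kwargs. A minimal sketch, assuming the GenerationStep of this Ax version forwards model_kwargs to the model factory (names match my code below):

# Sketch: bind the custom factories via the step's model_kwargs.
gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_arms=5),
        GenerationStep(
            model=Models.BOTORCH,
            num_arms=-1,
            model_kwargs={
                'model_constructor': get_RBFGP,  # custom GP factory
                'acqf_constructor': get_qPI,     # custom acquisition factory
            },
        ),
    ]
)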

My current code looks like this:

class RBFGP(SingleTaskGP, GPyTorchModel):
    _num_outputs = 2
    # __init__ and forward function are defined.

# factory function for GP model
def get_RBFGP(Xs, Ys, **kwargs):
    model = RBFGP(Xs[0], Ys[0])
    mll = ExactMarginalLogLikelihood(...)
    fit_gpytorch_model(mll)
    return model

# factory function for acquisition function PI
def get_qPI(
    model: Model,
    best_f: Union[float, Tensor],
    **kwargs: Any,
) -> AcquisitionFunction:
    return qProbabilityOfImprovement(model=model, best_f=best_f)

gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_arms=5),
        GenerationStep(model=get_botorch, num_arms=-1),
    ]
)

# initialize client, set up experiment, and optimize
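
For question 2, one way to check which kernel and acquisition function were actually used is to inspect the fitted model on the generation strategy once the BoTorch step has generated a trial. This is only a sketch; the attribute chain below assumes the Ax ModelBridge of this era exposes the wrapped BoTorch model via nested .model attributes, and the names may differ across versions:

# Sketch: inspect the fitted model after the BoTorch step has run at least once.
model_bridge = gs.model                # Ax ModelBridge for the current step (assumed)
botorch_model = model_bridge.model     # Ax's BotorchModel wrapper (assumed)
gp = botorch_model.model               # the underlying GPyTorch model
print(type(gp).__name__)               # e.g. 'RBFGP' if the custom constructor ran
print(gp.covar_module)                 # shows the kernel in use
print(botorch_model.acqf_constructor)  # which acquisition factory is wired in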

Maybe I have misunderstood something, so any advice would be greatly appreciated. Please ask if you would like more details about my code.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 15 (8 by maintainers)

Top GitHub Comments

1 reaction
rinc5 commented, Apr 21, 2020

Thank you for your explanation. I now understand the point about correlation; assuming no correlation in the observation noise is fine in my case.

I also now understand the flow and inputs of SingleTaskGP, which is very helpful for understanding Ax and for debugging. And finally, the factory function for the kernel works!
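
For reference, the kernel part can be handled by overriding covar_module in a SingleTaskGP subclass. This is only an illustrative sketch of that pattern, not the exact code from my program:

from botorch.models import SingleTaskGP
from gpytorch.kernels import RBFKernel, ScaleKernel

class RBFGP(SingleTaskGP):
    # Swap SingleTaskGP's default Matern-5/2 kernel for an ARD RBF kernel
    # (kernel settings here are illustrative).
    def __init__(self, train_X, train_Y):
        super().__init__(train_X, train_Y)
        self.covar_module = ScaleKernel(
            RBFKernel(ard_num_dims=train_X.shape[-1])
        )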

However, a RuntimeError occurred in the factory function for the acquisition function:

Traceback (most recent call last):
  File ".\custom_botorch_model.py", line 156, in <module>
    next_candidates, _ = ax_client.get_next_trial()
  File "~\Python38\lib\site-packages\ax\service\ax_client.py", line 281, in get_next_trial
    trial = self.experiment.new_trial(generator_run=self._gen_new_generator_run())
  File "~\Python38\lib\site-packages\ax\service\ax_client.py", line 866, in _gen_new_generator_run
    return not_none(self.generation_strategy).gen(
  File "~\Python38\lib\site-packages\ax\modelbridge\generation_strategy.py", line 296, in gen
    generator_run = model.gen(
  File "~\Python38\lib\site-packages\ax\modelbridge\base.py", line 610, in gen
    observation_features, weights, best_obsf, gen_metadata = self._gen(
  File "~\Python38\lib\site-packages\ax\modelbridge\array.py", line 202, in _gen
    X, w, gen_metadata = self._model_gen(
  File "~\Python38\lib\site-packages\ax\modelbridge\torch.py", line 204, in _model_gen
    X, w, gen_metadata = self.model.gen(
  File "~\Python38\lib\site-packages\ax\models\torch\botorch.py", line 368, in gen
    candidates, expected_acquisition_value = self.acqf_optimizer(
  File "~\Python38\lib\site-packages\ax\models\torch\botorch_defaults.py", line 260, in scipy_optimizer
    X, expected_acquisition_value = optimize_acqf(
  File "~\Python38\lib\site-packages\botorch\optim\optimize.py", line 109, in optimize_acqf
    candidate, acq_value = optimize_acqf(
  File "~\Python38\lib\site-packages\botorch\optim\optimize.py", line 145, in optimize_acqf
    batch_initial_conditions = ic_gen(
  File "~\Python38\lib\site-packages\botorch\optim\initializers.py", line 105, in gen_batch_initial_conditions
    Y_rnd_curr = acq_function(
  File "~\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "~\Python38\lib\site-packages\botorch\utils\transforms.py", line 202, in decorated
    return method(cls, X)
  File "~\Python38\lib\site-packages\botorch\utils\transforms.py", line 173, in decorated
    return method(cls, X)
  File "~\Python38\lib\site-packages\botorch\acquisition\monte_carlo.py", line 322, in forward
    val = torch.sigmoid((max_obj - self.best_f) / self.tau).mean(dim=0)
RuntimeError: The size of tensor a (1000) must match the size of tensor b (2) at non-singleton dimension 1

I used the page you referred me to earlier as a reference. In qProbabilityOfImprovement, we need to pass best_f. Is this the correct way to calculate best_f?

# factory function for acquisition function (PI)
def get_qPI(
    model: Model,
    **kwargs: Any,
) -> AcquisitionFunction:
    obj_tf = get_objective_weights_transform(kwargs.get('objective_weights'))
    con_tfs = get_outcome_constraint_transforms(kwargs.get('outcome_constraints'))
    X_observed = kwargs.get('X_observed')
    inf_cost = get_infeasible_cost(X=X_observed, model=model, objective=obj_tf)
    objective = ConstrainedMCObjective(
        objective=obj_tf, constraints=con_tfs or [], infeasible_cost=inf_cost
    )
    return qProbabilityOfImprovement(model=model, best_f=best_f, objective=objective)

# best_f
best_f = torch.tensor(df[['Z2', 'Z1']].max().values)
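# NOTE: best_f here is a 2-element tensor (one entry per outcome), which is
# exactly what produces the size mismatch (1000 vs. 2) in the traceback above.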

# create generation strategy for using existing data
gs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH,
            num_trials=-1,
            model_kwargs={'model_constructor': get_RBFGP, 'acqf_constructor': get_qPI}
        ),
    ]
)

Of course, I would be glad if this became more user friendly, but the existing issues and your answers already help me. In fact, I was able to build my experimental-data optimization program by referring to issues about black-box optimization. Thank you for all your support!

0 reactions
rinc5 commented, Apr 23, 2020

Thank you for your quick response.

> So the best_f is always w.r.t. the objective you’re considering (which is a scalar); in this case it would be the max across your objective evaluated on all previous observations.

I understand that best_f depends on the objective.
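
In other words, for a scalarized objective a scalar best_f can be computed along these lines (a sketch; obj_tf is the objective-weights transform from my get_qPI above, and Y_observed is an assumed (n x m) tensor of previously observed outcomes):

# Sketch: best_f as the max of the scalarized objective over past observations.
best_f = obj_tf(Y_observed).max()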

> You’d expect some difference b/c the model is different. Note also that there is stochasticity in the optimization (from the model fitting, as well as from the acquisition function optimization). So to compare performance you’d have to re-run the optimization loop a bunch of times and compare averages.

Of course, I need to run the optimization loop many times to compare performance, and I have only run it 5 times so far. That said, almost all of the Models.GPEI results were within the outcome constraint, while the Matern + qNEI results never were. I will run the optimization loop more times, but it takes a while (about 10 minutes per trial), so I will report the results in another issue if a gap between Models.GPEI and Matern + qNEI remains.

> Models.GPEI is using qNoisyExpectedImprovement (qNEI). I guess that’s not 100% consistent in terminology, but we’ve found that in our applications where there is noise involved, qNEI often does a lot better than qEI. You can read up on the basics of NEI in this paper. The implementation in BoTorch is slightly different than reported there; you can find some basic info about this in sec. 6.2 of this paper.

Thank you for teaching me the details of qNEI. Noise is involved in my case too, so I’ll use qNEI.

I really appreciate your support. With all your help, I was finally able to change the kernel and acquisition function.
