
How to force models to do more exploration

See original GitHub issue

Hi,

I would like to thank everyone who contributed to this great library. It enables easy use of Bayesian optimization to solve problems with state-of-the-art algorithms.

I have implemented Ax for my single-objective design optimization study. Here is the code snippet:

from ax.service.ax_client import AxClient
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

def objective_function(x):
    # region of f calculation
    # gives 'ErrorDesign' in case of error, otherwise float.
    return {"f": (f, 0.0)}

gs = GenerationStrategy(steps=[
    GenerationStep(model=Models.SOBOL, num_trials=20),
    GenerationStep(model=Models.GPMES, num_trials=-1),
])

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="single_objective_design",
    parameters=[
        {"name": "x1",  "type": "range",  "bounds": [0.2, 1.0],  "value_type": "float"},
        {"name": "x2",  "type": "range",  "bounds": [2.0, 6.0],  "value_type": "float"},
        {"name": "x3",  "type": "range",  "bounds": [0.2, 1.0],  "value_type": "float"},
        {"name": "x4",  "type": "range",  "bounds": [1.7, 8.7],  "value_type": "float"},
        {"name": "x5",  "type": "range",  "bounds": [0, 25],     "value_type": "int"},
        {"name": "x6",  "type": "range",  "bounds": [4.0, 12.0], "value_type": "float"},
        {"name": "x7",  "type": "range",  "bounds": [2.0, 5.0],  "value_type": "float"},
        {"name": "x8",  "type": "range",  "bounds": [0.2, 1.0],  "value_type": "float"},
        {"name": "x9",  "type": "range",  "bounds": [80., 95.],  "value_type": "float"},
        {"name": "x10", "type": "range",  "bounds": [0, 25],     "value_type": "int"},
        {"name": "x11", "type": "choice", "values": ["4", "8", "12", "16"], "value_type": "str"},
        {"name": "x12", "type": "choice", "values": ["4", "8", "12", "16"], "value_type": "str"},
    ],
    objective_name="f",
    minimize=True)

for _ in range(200):
    trial_params, trial_index = ax_client.get_next_trial()
    data = objective_function(trial_params)
    if data["f"][0] == 'ErrorDesign':
        ax_client.log_trial_failure(trial_index=trial_index)
    else:
        ax_client.complete_trial(trial_index=trial_index, raw_data=data["f"])

I have 12 design parameters (10 range, 2 choice) to optimize and use the Service API with the generation strategies (Sobol + GPMES, Sobol + GPEI, Sobol + BOTORCH, Sobol + GPKG) shown in the code snippet. I am using Python 3.8 and the latest versions of the botorch, gpytorch, and torch libraries.

Below is the history plot showing objective values versus iteration number for the different models. I have also added the history of the design parameters for the GPEI model.

[Figures: objective history; design-parameter history for GPEI]

My question is about the non-explorative search behavior of the models after the 20 Sobol iterations. As you can see from the objective history figure, successive designs have close objective values. I would expect the code to explore more since the search space is quite large, but each model quickly converges to some local minimum and continues to search around it. For reference, the global minimum of the objective function is around -3.6.
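To make the trade-off I mean concrete, here is a minimal, Ax-free 1-D sketch of a confidence-bound style rule (all values and names hypothetical): a small exploration weight keeps re-sampling near the incumbent, while a large one jumps to high-uncertainty regions.

```python
import numpy as np

# Toy posterior over a 1-D grid: a low predicted mean near x = 0.2 (the
# well-sampled incumbent region, so low uncertainty there) and a flat,
# uncertain landscape far from the sampled region.
x = np.linspace(0.0, 1.0, 101)
mu = 1.0 - 0.8 * np.exp(-((x - 0.2) ** 2) / 0.01)   # predicted mean (minimizing)
sigma = 1.0 - np.exp(-((x - 0.2) ** 2) / 0.02)      # predictive std, ~0 near 0.2

def next_point(beta):
    # Lower confidence bound for minimization: smaller is more promising.
    lcb = mu - beta * sigma
    return x[np.argmin(lcb)]

print(next_point(0.1))  # exploitative: stays at the incumbent, x = 0.2
print(next_point(5.0))  # explorative: jumps to the uncertain end, x = 1.0
```

With beta = 0.1 the rule keeps hammering the already-explored region, which is exactly the behavior in the history plots; a larger exploration weight makes it favor unexplored regions instead.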

I have tried the following, but the behavior is not much affected:

  • Repeated runs with different Sobol initializations
  • Increasing the number of Sobol trials
  • Increasing num_fantasies, num_mv_samples, num_y_samples, candidate_size

Any help in making these generation strategies explore more would be appreciated. Thanks in advance.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 14 (5 by maintainers)

Top GitHub Comments

3 reactions
ertandemiral commented, Nov 22, 2021

Hi @samueljamesbell ,

I have recently updated my setup after the addition of the BOTORCH_MODULAR feature to Ax. I have added the new setup below and recommend using it:

from ax.modelbridge import get_sobol
import torch
from botorch.acquisition.active_learning import qNegIntegratedPosteriorVariance
from ax import ParameterType, RangeParameter, SearchSpace
from botorch.models.gp_regression import SingleTaskGP
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

def objective_function(x):
    f = x["x1"]**2 + x["x2"]**2 + x["x3"]**2
    return  {"f": (f, None)}

search_space = SearchSpace(parameters = [RangeParameter(name ="x1", lower = 0.0, upper = 1.0, parameter_type = ParameterType.FLOAT),
                                         RangeParameter(name ="x2", lower = 0.0, upper = 1.0, parameter_type = ParameterType.FLOAT),
                                         RangeParameter(name ="x3", lower = 0.0, upper = 1.0, parameter_type = ParameterType.FLOAT),
                                         ])
sobol = get_sobol(search_space)
mc_points = sobol.gen(1024).param_df.values
mcp = torch.tensor(mc_points)

model_kwargs_val = {"surrogate": Surrogate(SingleTaskGP),
                    "botorch_acqf_class": qNegIntegratedPosteriorVariance,
                    "acquisition_options": {"mc_points": mcp}}

gs = GenerationStrategy(steps = [GenerationStep(model = Models.SOBOL,           num_trials = 5),
                                 GenerationStep(model = Models.BOTORCH_MODULAR, num_trials = 15, model_kwargs = model_kwargs_val)])

ax_client = AxClient(generation_strategy = gs)
ax_client.create_experiment(
    name = "active_learning_experiment",
    parameters = [      
        {"name": "x1", "type": "range","bounds": [0.0, 1.0],"value_type": "float"},
        {"name": "x2", "type": "range","bounds": [0.0, 1.0],"value_type": "float"},
        {"name": "x3", "type": "range","bounds": [0.0, 1.0],"value_type": "float"}, 
    ],
    objective_name = "f",
    minimize = True)

for _ in range(20):
    trial_params, trial_index = ax_client.get_next_trial()
    data = objective_function(trial_params)
    ax_client.complete_trial(trial_index = trial_index, raw_data = data["f"])
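As an aside, if you want to inspect or reproduce that kind of mc_points grid outside Ax, an equivalent quasi-random set can be generated with SciPy's QMC module (a sketch only; SciPy is not otherwise required by the setup above):

```python
from scipy.stats import qmc

# 2**10 = 1024 scrambled Sobol points in the unit cube, matching the
# 3-parameter search space above (all bounds are [0, 1]).
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
mc_points = sampler.random_base2(m=10)   # shape (1024, 3), values in [0, 1)

# For non-unit bounds, qmc.scale maps the points into the box, e.g.:
# scaled = qmc.scale(mc_points, [0.2, 2.0, 0.2], [1.0, 6.0, 1.0])
```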

To run the above code, an input constructor for the acquisition class qNegIntegratedPosteriorVariance must be registered in botorch/acquisition/input_constructors.py in the botorch library, so the code below should also be appended to that file:

@acqf_input_constructor(qNegIntegratedPosteriorVariance)
def construct_inputs_qNIPV(
    model: Model,
    mc_points: Tensor,
    training_data: TrainingData,
    objective: Optional[ScalarizedObjective] = None,
    X_pending: Optional[Tensor] = None,
    sampler: Optional[MCSampler] = None,
    **kwargs: Any,
) -> Dict[str, Any]:
    if model.num_outputs == 1:
        objective = None

    base_inputs = construct_inputs_mc_base(
        model=model,
        training_data=training_data,
        sampler=sampler,
        X_pending=X_pending,
        objective=objective,
    )

    return {**base_inputs, "mc_points": mc_points}

A dimension problem may occur because the objective is treated as multi-output in qNegIntegratedPosteriorVariance when it is not None; in the registration code above it is therefore set to None for the single-output case. Also, mc_points can be given in N x D format for this setup. I hope this helps, and the developers may correct me if anything is wrong.
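For readers unfamiliar with the registration mechanism: it is essentially a decorator that stores the function in a dictionary keyed by the acquisition class, so the framework can look up how to build constructor inputs for any acquisition type. A minimal, self-contained sketch of the pattern (all names hypothetical, not the real botorch internals):

```python
from typing import Any, Callable, Dict, Type

# Registry mapping an acquisition class to its input-constructor function.
ACQF_INPUT_CONSTRUCTORS: Dict[Type, Callable[..., Dict[str, Any]]] = {}

def acqf_input_constructor(acqf_class: Type) -> Callable:
    # Decorator factory: registers the decorated function for acqf_class.
    def decorator(fn: Callable[..., Dict[str, Any]]) -> Callable:
        ACQF_INPUT_CONSTRUCTORS[acqf_class] = fn
        return fn
    return decorator

class FakeAcqf:  # stand-in for an acquisition class like qNIPV
    pass

@acqf_input_constructor(FakeAcqf)
def construct_inputs_fake(model: Any, mc_points: Any, **kwargs: Any) -> Dict[str, Any]:
    return {"model": model, "mc_points": mc_points}

# The framework looks the constructor up by class and calls it:
inputs = ACQF_INPUT_CONSTRUCTORS[FakeAcqf](model="gp", mc_points=[[0.1], [0.9]])
```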

2 reactions
Balandat commented, Jan 12, 2021

Yeah, qNIPV is agnostic to the direction; the goal is to minimize a global measure of the model's uncertainty, so there is no better or worse with respect to the function values.
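The direction-agnosticism follows from a basic GP property: the posterior variance depends only on the input locations, never on the observed values, so flipping the objective's sign cannot change a pure-variance acquisition. A small NumPy sketch with a toy RBF kernel (hypothetical values):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Toy squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

X = np.array([0.1, 0.5, 0.9])          # training input locations
xq = np.array([0.3, 0.7])              # query points
K = rbf(X, X) + 1e-6 * np.eye(3)       # kernel matrix with jitter

# Posterior variance: k(x, x) - k(x, X) K^{-1} k(X, x).
# Note that the observations y never appear in this formula, so it is
# identical whether we minimize f or maximize -f.
k = rbf(xq, X)
posterior_var = rbf(xq, xq).diagonal() - np.einsum(
    "ij,jk,ik->i", k, np.linalg.inv(K), k
)
```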
