
Restrict RangeParameter to a certain stepsize (or grid)

For optimizing experiments in the lab I want to constrain my range parameters to a certain step size (i.e. effectively do a grid optimization), since we know from experience that some experimental parameters need to be varied more to have an effect on whatever it is we’re measuring. Measurements can also take a long time, so limiting the points we can optimize over helps in that regard too. After the coarser grid optimization we might want to zero in on a region of interest and do a finer grid optimization there.

I realize that restricting myself to a grid for optimization kind of defeats the purpose of letting the model learn that some parameters aren’t very sensitive, but I couldn’t find another way to put this in a priori (using the Service API).

Before continuing with how I currently achieved this, my question is basically: Is there a more efficient method of restricting the search space to a grid of points with a RangeParameter, or am I limited to either maximizing the acquisition function over a grid of points or using a ChoiceParameter? The ChoiceParameter option doesn’t give access to get_contour_plot or the other plot methods. I can get predictions from the trained model, but these predictions (even for already-measured points) don’t match the measured values.
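For reference, a minimal sketch of the ChoiceParameter option (the grid spacing here is illustrative, using the same Branin bounds as the script below):

import numpy as np
from ax.service.ax_client import AxClient

choice_client = AxClient()
choice_client.create_experiment(
    name="branin_choice_grid",
    parameters=[
        {
            "name": "x1",
            "type": "choice",
            "values": np.linspace(-5.0, 10.0, 16).tolist(),  # step of 1.0
            "is_ordered": True,
            "value_type": "float",
        },
        {
            "name": "x2",
            "type": "choice",
            "values": np.linspace(0.0, 15.0, 16).tolist(),  # step of 1.0
            "is_ordered": True,
            "value_type": "float",
        },
    ],
    objective_name="branin",
    minimize=True,
)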

My current procedure

First I initialize the model with some pre-calculated points.

From #771 I was able to figure out how to do this by generating the grid and then running the following steps in a loop (the full script is below):

  • evaluating the acquisition function over the grid
  • finding the optimal grid point from the acquisition function values
  • running the measurement for that grid point
  • attaching the new data to the experiment

import copy
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.registry import Models
from ax.core.observation import ObservationFeatures
from botorch.models.gp_regression import SingleTaskGP
from ax.utils.measurement.synthetic_functions import branin
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement
from ax.modelbridge.generation_strategy import GenerationStrategy, GenerationStep

def evaluate(parameters):
    x = np.array(list(parameters.values()))
    return {"branin": (branin(x), None), "l2norm": (np.sqrt((x ** 2).sum()), None)}

gs = GenerationStrategy(
    steps=[
        GenerationStep(  # BayesOpt step
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "surrogate": Surrogate(SingleTaskGP),
                "botorch_acqf_class": qNoisyExpectedImprovement,
            },
        ),
    ]
)

ax_client = AxClient(generation_strategy=gs, verbose_logging=False)

ax_client.create_experiment(
    name="branin_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [-5., 10.0],
            "value_type": "float",
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0., 15.],
            "value_type": "float",
        }
    ],
    objective_name="branin",
    minimize=True,
    tracking_metric_names=["l2norm"],
)

# Seed the model with an initial coarse grid search.
nscan = 5
params_initial = [
    np.linspace(p.lower, p.upper, nscan)
    for p in ax_client.experiment.search_space.parameters.values()
]
params_initial = np.array(np.meshgrid(*params_initial)).T.reshape(-1, len(params_initial))

for row in params_initial:
    p = dict(zip(ax_client.experiment.parameters.keys(), row))
    # attach_trial returns (parameters, trial_index); use the returned
    # index rather than assuming trials are numbered from zero.
    _, trial_index = ax_client.attach_trial(p)
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(p))

# Force a model fit: the Service API exposes no explicit "fit" call, so
# generate a trial (which fits the model) and immediately mark it failed.
_, trial_index = ax_client.get_next_trial()
ax_client.log_trial_failure(trial_index)

# Generate the finer optimization grid.
nscan = 15
values = [
    np.linspace(p.lower, p.upper, nscan) 
    for p in ax_client.experiment.search_space.parameters.values()
]
params = np.array(np.meshgrid(*values)).T.reshape(-1, len(values))

observation_features = [
    ObservationFeatures(dict(zip(ax_client.experiment.parameters.keys(), val)))
    for val in params
]

for _ in range(25):
    model_bridge = copy.deepcopy(ax_client.generation_strategy.model)

    # Evaluate the acquisition function at each grid point separately.
    acqf_values = [
        model_bridge.evaluate_acquisition_function(
            observation_features=[obs_feat],
            search_space=copy.deepcopy(ax_client.experiment.search_space),
        )
        for obs_feat in observation_features
    ]

    idx_max = np.argmax(acqf_values)
    p = observation_features[idx_max].parameters

    trial_params = ax_client.get_trials_data_frame()[["x1", "x2"]].values.tolist()
    # Stop if the suggested optimum has already been measured.
    if list(p.values()) in trial_params:
        opt = model_bridge.predict([observation_features[idx_max]])[0]["branin"][0]
        print("already checked the new optimum", p, opt)
        break

    _, trial_index = ax_client.attach_trial(p)
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(p))
    # Refit the model via the same trial-failure workaround as above.
    _, trial_index = ax_client.get_next_trial()
    ax_client.log_trial_failure(trial_index)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 11 (8 by maintainers)

Top GitHub Comments

1 reaction
dme65 commented, Apr 4, 2022

“if you did want to search over a continuous space with a certain step size, I’d recommend using an integer-valued parameter in Ax, and then applying the appropriate multiplier / intercept in your function evaluation code. If you use the Choice data, Ax won’t be able to do the modeling as effectively”

These two options are equivalent under the standard Ax transforms: Choice parameters that are specified as ordered go through the OrderedChoiceEncode transform, which maps the k specified choices to 0, 1, …, k − 1. In the case of evenly spaced integers like {0, 4, 8, 12, 16}, the choices will simply be transformed to {0, 1, 2, 3, 4}.
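A minimal sketch of this integer-parameter approach (the step size, offsets, and parameter names here are illustrative, not from the issue):

import numpy as np
from ax.service.ax_client import AxClient
from ax.utils.measurement.synthetic_functions import branin

STEP = 4.0              # illustrative step size
LOW1, LOW2 = -5.0, 0.0  # illustrative grid origins for x1 and x2

int_client = AxClient()
int_client.create_experiment(
    name="branin_integer_grid",
    parameters=[
        {"name": "i1", "type": "range", "bounds": [0, 3], "value_type": "int"},
        {"name": "i2", "type": "range", "bounds": [0, 3], "value_type": "int"},
    ],
    objective_name="branin",
    minimize=True,
)

def evaluate_grid(parameters):
    # Apply the multiplier / intercept: x = LOW + STEP * i, so Ax models
    # over integer indices while measurements happen on the physical grid.
    x = np.array([LOW1 + STEP * parameters["i1"], LOW2 + STEP * parameters["i2"]])
    return {"branin": (branin(x), None)}

Since the parameters stay range parameters, plotting helpers like get_contour_plot should also remain usable with this setup.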

1 reaction
eytan commented, Apr 3, 2022

Just for posterity: if you did want to search over a continuous space with a certain step size, I’d recommend using an integer-valued parameter in Ax, and then applying the appropriate multiplier / intercept in your function evaluation code. If you use the Choice data, Ax won’t be able to do the modeling as effectively. We also have ways of accounting for the rounding in the acquisition function which should work pretty well, and will work even better in the future.

@lena-kashtelyan maybe we can have a wishlist item for adding step size for integer parameters. In the past I was not such a fan of this idea, but it’s useful to keep tabs on user demand 😃 cc @sdaulton.

