
Multi-fidelity optimization with KG and service API

See original GitHub issue

Hi, first off, thanks for the great work! I’ve been trying to run multi-fidelity optimization using the one-shot KG method from BoTorch with the Service API. In principle, everything seems to run fine. However, I noticed that the optimizer keeps running at the lowest fidelity and does not explore in this parameter. To check whether this is an issue with my problem, I tried to reproduce the MFKG example from the BoTorch documentation, where the fidelity is clearly varied during the optimization, but I observe the same behaviour. This is the code I ran to reproduce the BoTorch example.

from ax.service.ax_client import AxClient
from botorch.test_functions.multi_fidelity import AugmentedHartmann
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
import torch

problem = AugmentedHartmann(negate=True)

def objective(parameters):
    # x7 is the fidelity parameter
    x = torch.tensor([parameters.get(f"x{i+1}") for i in range(7)])
    # Return (mean, SEM); the evaluation is noiseless, so SEM is 0.0
    return {"f": (problem(x).item(), 0.0)}

gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=16),
        GenerationStep(model=Models.GPKG, num_trials=-1),
    ]
)


ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="hartmann_mf_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x7",
            "type": "range",
            "bounds": [0.0, 1.0],
            "is_fidelity": True,
            "target_value": 1.
        },
    ],
    objective_name="f",
)
# Initial sobol samples
for i in range(16):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=objective(parameters))

# KGBO
for i in range(6):
    q_p, q_t = [], []
    # Simulate batches
    for q in range(4):
        parameters, trial_index = ax_client.get_next_trial()
        q_p.append(parameters)
        q_t.append(trial_index)
    for q in range(4):
        pi = q_p[q]
        ti = q_t[q]
        ax_client.complete_trial(trial_index=ti, raw_data=objective(pi))


After the initial samples, the fidelity stays at 0 except for two trials where it is very close to zero.

Are there any parameters that need to be specified for the acquisition function, or is MFKG not yet supported with the Service API?

Thanks!

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 7 (6 by maintainers)

Top GitHub Comments

1 reaction
sgbaird commented, Apr 10, 2022

@lena-kashtelyan Any thoughts on a multi-fidelity Ax tutorial? (e.g. based on what’s in this thread). Not sure if it’s worth it compared to other priorities, but figured I’d float the idea. Not a huge deal, just a drive-by comment.

1 reaction
soerenjalas commented, Jan 21, 2021

Hi, to follow up on this: I think I managed to get it working more like I expected. The behaviour seems to be related to the cost_intercept value of the KG model:

GenerationStep(
    model=Models.GPKG,
    num_trials=-1,
    model_kwargs={"cost_intercept": 5},
    model_gen_kwargs={"num_fantasies": 128},
)

After setting these parameters to match the BoTorch example, the fidelity is explored much better.
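A rough sketch of the intuition behind cost_intercept: cost-aware acquisition divides the expected gain by an affine evaluation cost (mirroring the fixed_cost of BoTorch’s AffineFidelityCostModel; the exact internal wiring in Ax is an assumption on my part). With a near-zero intercept, a fidelity-0 evaluation is almost free, so gain-per-cost overwhelmingly favours the lowest fidelity:

```python
def cost_weighted_value(value, fidelity, cost_intercept):
    # Affine cost model: cost = cost_intercept + weight * fidelity
    # (weight assumed 1.0 here, purely for illustration)
    return value / (cost_intercept + fidelity)

# Illustrative numbers: the high-fidelity point is somewhat more informative.
low = (1.0, 0.0)    # (raw value, fidelity)
high = (1.5, 1.0)

# Tiny intercept: the low-fidelity evaluation is nearly free and dominates.
print(cost_weighted_value(*low, 0.01))   # ≈ 100.0
print(cost_weighted_value(*high, 0.01))  # ≈ 1.49

# cost_intercept=5: both fidelities cost about the same, so the more
# informative high-fidelity evaluation wins.
print(cost_weighted_value(*low, 5))      # 0.2
print(cost_weighted_value(*high, 5))     # 0.25
```

This matches the observed behaviour: raising the intercept makes low-fidelity trials comparatively expensive, so the optimizer starts spending budget on higher fidelities.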
