
Noisy objective function not taken into account in `SimpleExperiment` when suggesting best parameters

See original GitHub issue

I’ve been running Ax hyperparameter optimisation for a DNN doing regression on images, like this:

    from ax import SimpleExperiment
    from ax.modelbridge.registry import Models

    # EXPERIMENT_NAME, dnn_search_space and train_cross are defined elsewhere
    exp = SimpleExperiment(
        name=EXPERIMENT_NAME,
        search_space=dnn_search_space,
        evaluation_function=train_cross,
        objective_name="regression_error",
        minimize=True,
    )

    # Sobol quasi-random sampling of the search space
    sobol = Models.SOBOL(exp.search_space)
    for i in range(20):
        exp.new_trial(generator_run=sobol.gen(1))

    # converge on best hyperparams with GP + Expected Improvement
    best_arm = None
    for i in range(50):
        gpei = Models.GPEI(experiment=exp, data=exp.eval())
        generator_run = gpei.gen(1)
        best_arm, _ = generator_run.best_arm_predictions
        exp.new_trial(generator_run=generator_run)
        best_parameters = best_arm.parameters
        print(str(i) + " best params " + str(best_parameters))

and I have found that the “best parameters” recommended by Ax tend not to change very much. This suggests that Ax is simply giving me the hyperparameters that were evaluated and happened to produce the best single result.

The problem with this is that the best results tend to be flukes: the training process is noisy and non-deterministic, and settings that make it more stochastic, such as very high learning rates and small batch sizes, give more varied results. Those varied results will happen to include both the best and the worst scores, while on average being worse than smoother, more stable parameter sets. But Ax seems to just take the single best result it finds and recommend that.
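
To make that concrete, here is a sketch (assuming a hypothetical train_once helper that performs one training run and returns the regression error) of re-evaluating the same noisy parameterization a few times:

    import numpy as np

    # hypothetical high-variance settings: large learning rate, tiny batch
    params = {"learning_rate": 1e-2, "batch_size": 8}
    errors = [train_once(params) for _ in range(5)]  # five independent training runs
    print("min", min(errors), "mean", np.mean(errors), "std", np.std(errors))
    # the minimum can look excellent by luck even when the mean is mediocre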

Is there some way of using Ax so that it assumes a noisy underlying objective function and recommends the best hyperparameters based on an interpolation that uses all of the information available to it, rather than just whichever trial happened to score best one time?
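
For what it’s worth, I know the evaluation function can return a (mean, SEM) pair per metric, which SimpleExperiment accepts, so the noise is at least made explicit. A sketch, again using the hypothetical train_once helper:

    import numpy as np

    def train_cross(parameterization):
        # repeat the noisy training a few times for this parameterization
        errors = [train_once(parameterization) for _ in range(3)]
        mean = float(np.mean(errors))
        sem = float(np.std(errors, ddof=1) / np.sqrt(len(errors)))
        # returning (mean, SEM) tells Ax how noisy this observation is
        return {"regression_error": (mean, sem)}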

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

2 reactions
stevemandala commented, Feb 16, 2021

Hey @LukeAI, thanks for raising this. I believe this was caused by a bug in simple experiment assuming 0.0 SEM. We recently pushed a fix on master, which should ensure we don’t default to noise-less modeling when SEM isn’t provided: https://github.com/facebook/Ax/commit/f6ccdd7be3cfff835fe2b75feaad6a800a1378ea
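
For context on what the fix changes, the SEM returned by the evaluation function is what decides whether modeling is treated as noise-less. A sketch of the two cases, assuming the same hypothetical train_once helper (exact accepted return formats can vary slightly by Ax version):

    def eval_claims_exact(parameterization):
        # explicitly returning SEM = 0.0 claims the observation is exact (noise-less modeling)
        return {"regression_error": (train_once(parameterization), 0.0)}

    def eval_sem_not_provided(parameterization):
        # returning only the mean leaves the SEM unspecified; with the fix above,
        # Ax no longer defaults this to zero noise
        return train_once(parameterization)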

1 reaction
lena-kashtelyan commented, Feb 24, 2021

This should now be fixed in latest stable release, 0.1.20.


Top Results From Across the Web

How could I get the best hyperparameter when there is high ...
In principle, I don't think it is possible to reduce the noise on the objective function value of each hyperparameter without adding more ...

Golem: an algorithm for robust experiment and process ... - NCBI
2 shows a simple, one-dimensional example to provide intuition for Golem's behavior. In the top panel, the robust objective function is shown for...

Using Response Surfaces and Expected Improvement to ...
1) Select the parameters which result in the maximum expected improvement. Subtract the variance of any process noise from the response surface error...

Why does Bayesian Optimization perform poorly in more than ...
To be completely honest, it's because everything performs poorly in more than 20 dimensions. Bayesian optimization isn't special here.

Constrained Bayesian Optimization with Noisy Experiments
Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when ...
