
[QUESTION] Get the best predicted parameter (not observed)

See original GitHub issue

Hi guys!

I've been playing with Ax for a while, and I never found a "native" way to get a predicted optimal set of parameters.

For example: if I take a dummy function like $x^2+1$ and want to minimize it, I expect the optimal parameter to be $x=0$.

[Figure "Optim_Search": plot of the objective with the observed trial points; the best observed point sits just left of 0]

Using the AxClient API, I tried to recover the best parameter with ax_client.get_best_parameters(). But this returns the best observed data from completed trials, so here I get the black point just left of 0…

Is it possible to get a prediction of the global optimum from the underlying model? I mean, a prediction of $x=0$ in my case?
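For what it's worth, the underlying idea (fit a surrogate model to the observed trials, then minimize the surrogate's predicted mean instead of returning the best observed point) can be sketched in plain NumPy. This is an illustration only: a quadratic polynomial stands in for the GP surrogate, and none of the names below come from the Ax API.

```python
import numpy as np

# Sketch of "predicted best" vs "observed best" (not Ax API code).
rng = np.random.default_rng(0)
x_obs = rng.uniform(-2.0, 2.0, size=20)            # observed parameter values
y_obs = x_obs**2 + 1 + rng.normal(0, 0.05, 20)     # noisy evaluations of x^2 + 1

coeffs = np.polyfit(x_obs, y_obs, deg=2)           # fit the stand-in surrogate
grid = np.linspace(-2.0, 2.0, 2001)                # candidate grid
pred_mean = np.polyval(coeffs, grid)               # surrogate's predicted mean

x_best_predicted = grid[np.argmin(pred_mean)]      # minimizer of the prediction
x_best_observed = x_obs[np.argmin(y_obs)]          # what a "best observed"
                                                   # lookup would return
print(x_best_predicted, x_best_observed)
```

With a GP in place of the polynomial, this is essentially what a "predicted best" would do: report the minimizer of the model's mean rather than the best noisy observation.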

If you want to play with this dataset, I've attached a snapshot here

Thanks for your help!

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 9 (6 by maintainers)

Top GitHub Comments

2 reactions
dme65 commented, Sep 15, 2022

Hi @jultou-raa,

Regarding your three questions:

  1. Yes, what you are doing looks correct to me.
  2. PosteriorMean assumes one outcome, but q=1 here corresponds to how many candidates you want to evaluate. That is, you won’t be able to use PosteriorMean if you want to generate more than 1 candidate and evaluate those in parallel. That doesn’t seem to be something you are interested in though given your description above.
  3. I’m not sure if I understand what you mean here. Can you add some more details?
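Point 2 can be illustrated without BoTorch. An analytic acquisition like PosteriorMean scores one candidate at a time, so optimizing it yields a single (q = 1) point; generating q > 1 candidates in parallel needs a joint score over the whole batch, because naively taking the top-q single-point scores tends to return near-duplicates. The grid and "posterior mean" below are made up for illustration and are not BoTorch code.

```python
import numpy as np

# Toy grid and a made-up posterior mean (pretend model prediction of x^2 + 1).
grid = np.linspace(-2, 2, 401)
post_mean = grid**2 + 1

# q = 1: the single minimizer of the predicted mean.
q1 = grid[np.argmin(post_mean)]

# Naive "q = 2" by taking the two best single-point scores: the two points
# land in adjacent grid cells, so the second carries almost no extra
# information — which is why a real batch acquisition scores the batch jointly.
top2 = grid[np.argsort(post_mean)[:2]]
print(q1, top2)
```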

Taking a step back, every acquisition function you consider here generates a candidate close to zero and I wouldn’t read too much into the fact that 1.16e−3 is slightly closer to zero than 2.20e−3. Given the situation you describe, EI is probably a natural choice here as it aims to maximize the expected improvement given one more function evaluation.

While it may feel like PosteriorMean is a natural choice when you have one evaluation left and want to focus on exploitation, here is a scenario where it will probably do the wrong thing: Assume your current best function value is f* and that the best posterior mean according to the model is also f*. Assume in addition that the uncertainty according to the model is 0 (the model is very confident in its prediction). Now, assume there is a second point with posterior mean f* + epsilon with epsilon>0 very close to zero, but that this point has very high uncertainty according to the model (the model is very unsure about its prediction). If you use PosteriorMean, it will ignore the model uncertainty and pick the point with posterior mean f*, which isn’t a great choice since this point has no upside whatsoever. On the other hand, EI will end up picking the point with posterior mean f* + epsilon since this point has higher upside and may actually give you a sizable improvement compared to your current best point.
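The scenario above can be checked numerically with the textbook closed-form EI for minimization (the formula is standard; it is not code from this thread). Point A has posterior mean f* and zero uncertainty; point B has mean f* + epsilon but large uncertainty.

```python
import math

def expected_improvement(mu, sigma, best_f):
    """Analytic EI for minimization: E[max(best_f - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma == 0.0:
        return max(best_f - mu, 0.0)
    z = (best_f - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (best_f - mu) * cdf + sigma * pdf

best_f = 1.0
ei_a = expected_improvement(mu=1.0, sigma=0.0, best_f=best_f)        # point A
ei_b = expected_improvement(mu=1.0 + 1e-6, sigma=0.5, best_f=best_f) # point B
print(ei_a, ei_b)
```

PosteriorMean ranks A above B (A has the lower mean), but A's EI is exactly zero while B's high uncertainty gives it genuine upside, so EI picks B.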

1 reaction
bernardbeckerman commented, Aug 2, 2022

@sgbaird thanks for this response, and @jultou-raa sorry for the late reply! I agree with everything @sgbaird said, and also want to ask a bit more about your use case, particularly why you’re looking for the modeled optimum rather than the optimum found so far. In the case that your goal is to do one final sample of the modeled optimum so as to get the best final result, I think this might not be the best strategy, since expected improvement is by definition the one-step optimal strategy for this purpose. Does that make sense? Also let me know if @sgbaird’s solution works for you.
