
Test on Optuna, btb, and SMAC


Test description

Find the minimum value of (x1 - 1)**2 + (x2 - 2)**2, where x1 and x2 are floats in [-10, 10].

Run 100 studies, each with 100 trials, and report the mean best value over the studies and the total wall-clock time.

Test code

Optuna

import optuna
import time
import numpy as np

optuna.logging.set_verbosity(optuna.logging.ERROR)

# Note: suggest_uniform was the API at the time; newer Optuna versions
# use trial.suggest_float instead.
def objective(trial):
    x1 = trial.suggest_uniform('x1', -10, 10)
    x2 = trial.suggest_uniform('x2', -10, 10)
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

best_score = []
start = time.time()
for i in range(100):
    study = optuna.create_study()
    study.optimize(objective, n_trials=100)
    best_score.append(study.best_value)
end = time.time()

print(np.mean(best_score))  # mean best value over the 100 studies
print(end - start)          # total wall-clock time in seconds

Time: 647 s. Average best value over 100 studies: 0.27.

btb

from btb import HyperParameter, ParamTypes
from btb.tuning import GP
import time
import numpy as np
import warnings

warnings.filterwarnings('ignore')

best_score = []
start = time.time()

for study in range(100):
    tunables = [
        ('x1', HyperParameter(ParamTypes.FLOAT, [-10, 10])),
        ('x2', HyperParameter(ParamTypes.FLOAT, [-10, 10])),
    ]
    tuner = GP(tunables)

    for trial in range(100):
        params = tuner.propose()
        # btb maximizes, so negate the objective
        score = -(params['x1'] - 1) ** 2 - (params['x2'] - 2) ** 2
        tuner.add(params, score)
    # _best_score is a private attribute; this btb version exposes
    # no public getter for it
    best_score.append(tuner._best_score)

end = time.time()
print(np.mean(best_score))  # mean best (negated) value over the 100 studies
print(end - start)          # total wall-clock time in seconds

Time: 264 s. Average best value over 100 studies: 0.002.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

nomuramasahir0 commented on May 31, 2019 (2 reactions)

Regarding the difference between the results of Optuna and btb: as mentioned above, I think it is due to the difference in surrogate models (Optuna uses TPE, while btb uses a Gaussian process). Eggensperger et al. confirmed that Gaussian-process-based Bayesian optimization performs better than TPE and SMAC on low-dimensional functions [Eggensperger 13].

As for the SMAC results, it looks like SMAC starts from a very good initial value: its default initial configuration is (0, 0), which is near the optimum (1, 2). If you experiment with a different setting, the result may change.

[Eggensperger 13] Eggensperger, K., Feurer, M., Hutter, F., Bergstra, J., Snoek, J., Hoos, H., and Leyton-Brown, K.: Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters. In: NIPS Workshop on Bayesian Optimization in Theory and Practice (2013).
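To make the surrogate-model comparison more direct, the same benchmark can be rerun with a Gaussian-process-backed sampler inside Optuna. A minimal sketch, assuming a version of Optuna that ships optuna.integration.SkoptSampler (which delegates to scikit-optimize, whose default surrogate is a Gaussian process):

import optuna

def objective(trial):
    x1 = trial.suggest_uniform('x1', -10, 10)
    x2 = trial.suggest_uniform('x2', -10, 10)
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

# SkoptSampler hands candidate selection to scikit-optimize's GP-based
# optimizer, making the comparison with btb's GP tuner more direct.
# Requires scikit-optimize to be installed.
sampler = optuna.integration.SkoptSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
print(study.best_value)

Everything else (objective, trial budget) stays identical, so any change in the average best value can be attributed to the sampler rather than the framework.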

luxu1220 commented on May 31, 2019 (2 reactions)

I am a newcomer to this area. Thank you for your patience. You are right; I realized that the time is indeed not a problem. I also ran a test on SMAC (just one study; an equivalent setup is sketched below). As you can see, it is much slower than Optuna and btb: 10 trials cost about 55 s. But the result is pretty good.

[Image: screenshot of the SMAC test results]
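Since the SMAC code itself is not shown, here is a minimal sketch of how the same objective could be set up with SMAC3. This assumes the Scenario/SMAC facade API of the SMAC3 releases of that era; module paths and names differ in newer versions:

from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformFloatHyperparameter
from smac.scenario.scenario import Scenario
from smac.facade.smac_facade import SMAC

# Same objective; SMAC passes a Configuration object to the runner.
def objective(cfg):
    return (cfg['x1'] - 1) ** 2 + (cfg['x2'] - 2) ** 2

cs = ConfigurationSpace()
cs.add_hyperparameters([
    UniformFloatHyperparameter('x1', -10, 10),
    UniformFloatHyperparameter('x2', -10, 10),
])

scenario = Scenario({
    'run_obj': 'quality',   # optimize the returned objective value
    'runcount-limit': 100,  # 100 trials, as in the other tests
    'cs': cs,
    'deterministic': 'true',
})

smac = SMAC(scenario=scenario, tae_runner=objective)
incumbent = smac.optimize()
print(incumbent)

Note that the default configuration of this search space is the midpoint (0, 0), which is what the comment above refers to as SMAC's favorable initial value.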

After all these tests, I found that Optuna is much easier for beginners to use, and it also supports saving/resuming studies. I hope Optuna will support more algorithms. Thank you.
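On the saving/resuming point: Optuna persists studies through the storage argument of create_study. A minimal sketch using the SQLite backend; the study name and database file here are placeholders:

import optuna

def objective(trial):
    x1 = trial.suggest_uniform('x1', -10, 10)
    x2 = trial.suggest_uniform('x2', -10, 10)
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

# Writing the study to an RDB storage lets a later process pick it up
# again; load_if_exists=True resumes instead of failing if the study
# name is already present in the database.
study = optuna.create_study(
    study_name='quadratic-demo',     # placeholder name
    storage='sqlite:///example.db',  # placeholder file
    load_if_exists=True,
)
study.optimize(objective, n_trials=50)

Running the same script again appends another 50 trials to the stored study instead of starting from scratch.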
