
Outcome constraints not respected by suggested best point in Loop API (`optimize` function)

See original GitHub issue

Hello everybody,

I am experimenting with constrained optimization and implemented the following toy example:

from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
import time

X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
model_reg = RandomForestRegressor(n_estimators=100)

def eval(parameterization):
    # Fit the model with the suggested n_estimators and measure wall-clock training time
    start = time.time()
    model_reg.n_estimators = parameterization.get("n_estimators")
    model_reg.fit(X_train, y_train)
    training_time = time.time() - start
    # Negate the R^2 score so that minimize=True maximizes it
    r2_score = -1.0 * model_reg.score(X_test, y_test)
    print(str(model_reg.n_estimators) + ': time: ' + str(training_time) + ' score: ' + str(r2_score))
    # Each metric is reported as a (mean, SEM) tuple; SEM 0.0 declares it noiseless
    return {"r2": (r2_score, 0.0), "training_time": (training_time, 0.0)}


from ax import *

best_parameters, values, experiment, model = optimize(
    parameters=[
        {
            "name": "n_estimators",
            "type": "range",
            "bounds": [1, 1000],
            "value_type": "int"
        }
    ],
    experiment_name="test",
    objective_name="r2",
    evaluation_function=eval,
    minimize=True,  
    outcome_constraints=["training_time <= 0.15"],  
    total_trials=30, 
)

print(best_parameters)
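One incidental detail in this repro: each metric is returned as a `(mean, SEM)` tuple with SEM `0.0`, which declares the observation noiseless. Wall-clock training time is in fact noisy (the log below shows `n_estimators` around 48 taking anywhere from ~0.11 s to ~0.17 s), so a sketch like the following, assuming one can afford repeated fits, would report an honest SEM for the constraint metric (the `timed_metric` helper is hypothetical, not part of Ax):

```python
import statistics
import time

def timed_metric(fit_fn, repeats=3):
    """Estimate (mean, SEM) for a noisy timing metric by repeating the fit.

    A nonzero SEM tells the optimizer the observation is noisy, unlike the
    (value, 0.0) "noiseless" form used in the evaluation function above.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fit_fn()
        times.append(time.perf_counter() - start)
    mean = statistics.mean(times)
    sem = statistics.stdev(times) / len(times) ** 0.5
    return mean, sem

# Usage with the evaluation function above would be roughly:
#   training_time, time_sem = timed_metric(lambda: model_reg.fit(X_train, y_train))
#   return {"r2": (r2_score, 0.0), "training_time": (training_time, time_sem)}
```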

However, the configuration returned as `best_parameters` does not satisfy the defined `outcome_constraints`:

[INFO 05-03 16:46:12] ax.modelbridge.dispatch_utils: Using Bayesian Optimization generation strategy: GenerationStrategy(name='Sobol+GPEI', steps=[Sobol for 5 trials, GPEI for subsequent trials]). Iterations after 5 will take longer to generate due to  model-fitting.
[INFO 05-03 16:46:12] ax.service.managed_loop: Started full optimization with 30 steps.
[INFO 05-03 16:46:12] ax.service.managed_loop: Running optimization trial 1...
906: time: 1.7917001247406006 score: -0.8648693888365733
[INFO 05-03 16:46:14] ax.service.managed_loop: Running optimization trial 2...
720: time: 1.6528048515319824 score: -0.8663854171005674
[INFO 05-03 16:46:16] ax.service.managed_loop: Running optimization trial 3...
149: time: 0.25377392768859863 score: -0.8611771945201234
[INFO 05-03 16:46:16] ax.service.managed_loop: Running optimization trial 4...
727: time: 1.2497072219848633 score: -0.8651336671915457
[INFO 05-03 16:46:17] ax.service.managed_loop: Running optimization trial 5...
868: time: 1.5468764305114746 score: -0.8658227963640711
[INFO 05-03 16:46:19] ax.service.managed_loop: Running optimization trial 6...
790: time: 1.4445219039916992 score: -0.8652951254507412
[INFO 05-03 16:46:23] ax.service.managed_loop: Running optimization trial 7...
104: time: 0.2490558624267578 score: -0.8613842467598825
[INFO 05-03 16:46:24] ax.service.managed_loop: Running optimization trial 8...
225: time: 0.4697122573852539 score: -0.8690098170925007
[INFO 05-03 16:46:25] ax.service.managed_loop: Running optimization trial 9...
203: time: 0.4269378185272217 score: -0.8661680145885738
[INFO 05-03 16:46:28] ax.service.managed_loop: Running optimization trial 10...
259: time: 0.46560215950012207 score: -0.8641557301504206
[INFO 05-03 16:46:30] ax.service.managed_loop: Running optimization trial 11...
48: time: 0.11366415023803711 score: -0.8643144072383191
[INFO 05-03 16:46:32] ax.service.managed_loop: Running optimization trial 12...
24: time: 0.09262824058532715 score: -0.8586854880769732
[INFO 05-03 16:46:33] ax.service.managed_loop: Running optimization trial 13...
62: time: 0.1373279094696045 score: -0.8634329182788033
[INFO 05-03 16:46:35] ax.service.managed_loop: Running optimization trial 14...
319: time: 0.6133086681365967 score: -0.8658216345158011
[INFO 05-03 16:46:37] ax.service.managed_loop: Running optimization trial 15...
381: time: 0.7531170845031738 score: -0.8643854652791518
[INFO 05-03 16:46:40] ax.service.managed_loop: Running optimization trial 16...
446: time: 0.8942420482635498 score: -0.8631517947001044
[INFO 05-03 16:46:42] ax.service.managed_loop: Running optimization trial 17...
52: time: 0.17563152313232422 score: -0.8706700091253157
[INFO 05-03 16:46:44] ax.service.managed_loop: Running optimization trial 18...
50: time: 0.17366623878479004 score: -0.8567509836890203
[INFO 05-03 16:46:46] ax.service.managed_loop: Running optimization trial 19...
46: time: 0.12352871894836426 score: -0.8597523848388682
[INFO 05-03 16:46:47] ax.service.managed_loop: Running optimization trial 20...
53: time: 0.12775921821594238 score: -0.8609937877414613
[INFO 05-03 16:46:49] ax.service.managed_loop: Running optimization trial 21...
57: time: 0.1894826889038086 score: -0.8605349964341363
[INFO 05-03 16:46:51] ax.service.managed_loop: Running optimization trial 22...
55: time: 0.15323638916015625 score: -0.8547062170894207
[INFO 05-03 16:46:52] ax.service.managed_loop: Running optimization trial 23...
61: time: 0.1977527141571045 score: -0.8565984261392696
[INFO 05-03 16:46:52] ax.service.managed_loop: Running optimization trial 24...
63: time: 0.18506789207458496 score: -0.8711688702616356
[INFO 05-03 16:46:53] ax.service.managed_loop: Running optimization trial 25...
53: time: 0.15570688247680664 score: -0.8638789060710867
[INFO 05-03 16:46:54] ax.service.managed_loop: Running optimization trial 26...
62: time: 0.20318317413330078 score: -0.8659985887767336
[INFO 05-03 16:46:57] ax.service.managed_loop: Running optimization trial 27...
47: time: 0.1617259979248047 score: -0.8511598188533197
[INFO 05-03 16:46:57] ax.service.managed_loop: Running optimization trial 28...
48: time: 0.16547536849975586 score: -0.863244397453913
[INFO 05-03 16:46:59] ax.service.managed_loop: Running optimization trial 29...
49: time: 0.1458437442779541 score: -0.8598930898301996
[INFO 05-03 16:47:00] ax.service.managed_loop: Running optimization trial 30...
44: time: 0.15685534477233887 score: -0.8675086424552814

{'n_estimators': 48}

Can anybody tell me what I am doing wrong?

Best regards, Felix

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
maxasauruswall commented, Jul 9, 2021

I experienced the same issue recently, and was a bit confused until I found this thread. Thanks for the heads up!

Also, thanks in general for the great library.

Cheers, Max

1 reaction
FelixNeutatz commented, May 4, 2021

Hi @lena-kashtelyan, thank you for the quick answer. I think there is some kind of feasibility check happening, because the best parameter value without considering any constraint would be n_estimators = 63, with an r2 of 0.8711688702616356.

Best regards, Felix
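Felix's observation can be double-checked directly against the raw trial log. A minimal sketch (values transcribed from a handful of the log lines above; this is plain Python, not an Ax API) that separates the unconstrained best point from the best point among constraint-satisfying trials:

```python
# (n_estimators, training_time, score) transcribed from a few log lines above;
# score is the negated R^2, so lower is better
trials = [
    (906, 1.792, -0.8649),
    (63, 0.185, -0.8712),   # best score overall, but over the 0.15 s limit
    (48, 0.114, -0.8643),
    (24, 0.093, -0.8587),
]

TIME_LIMIT = 0.15

# Best point if the outcome constraint were ignored entirely
unconstrained_best = min(trials, key=lambda t: t[2])

# Best point among trials that satisfy training_time <= 0.15
feasible = [t for t in trials if t[1] <= TIME_LIMIT]
constrained_best = min(feasible, key=lambda t: t[2])

print(unconstrained_best[0])  # 63: would win if the constraint were ignored
print(constrained_best[0])    # 48: matches the returned best_parameters
```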
