
Issues with SequentialDomainReductionTransformer - one of the lower bounds is greater than an upper bound


High-level description of the issue

I am trying to identify the cause of, and ultimately resolve, a persistent error:

ValueError: LBFGSB - one of the lower bounds is greater than an upper bound.

This persists after toying around a lot with various parameters, to no avail.

I am not sure what information is helpful and will update based on whatever you all suggest for diagnosis. Included here for now are:

  • Model background and inputs
  • Some outputs / plots from the real model when it breaks down
  • Some plots I have used to track the issue when the model functions properly
  • A toy model that results in a similar error
    • Error message

Thanks for any help/insight you can spare!!!

Model Background and inputs

I am not sure which values or pieces of information are most useful for identifying the problem/solution, but I will list out a few parameters and details.

  • Information

    • The error only happens when a SequentialDomainReductionTransformer is passed as the bounds_transformer of the optimizer (the standard optimizer does not throw it)
    • The error happens even with very low values for init_points, or when probes are passed
    • The Bayesian optimizer is optimizing over 14 variables
    • To deal with more nuanced constraints, before a ‘guess’ is evaluated by the model it is checked by a separate function that determines whether the constraints are violated (e.g., variable_a > variable_b > variable_c). If a constraint is violated, a ‘spiked’ score of -20,000 is returned instead of evaluating the whole model. To be clear, this happens within the optimization function: there is simply an “if constraints violated” clause at the beginning that decides whether to return the spiked score or run the full model. (A sketch of this pattern, together with the probe generation described below, appears after this list.)
  • Parameters:

    • First, I have toyed around with these a lot and still gotten errors, but hopefully this gives an idea of a standard failed run. Please let me know what other information would be useful.
    • To SequentialDomainReductionTransformer:
      • gamma_osc = 0.4
      • gamma_pan = 1.0
      • eta = 0.99
    • to BayesianOptimization:
      • pbounds: there are 14, but most look like one of the two listed here:
        • 'res#ElectricityRates#ur_monthly_fixed_charge': (0.0, 50.0)
        • 'res#ElectricityRates#period_1#rate': (0.05, 0.35)
      • random_state = np.random.RandomState(100)
    • Probes are created dynamically in a separate function and are hard-wired to pass the more nuanced constraints discussed above. This was a kludge so that the optimizer would have an idea of which parameters pass; otherwise it kept receiving the same spiked score despite changing parameter inputs. For example, if the constraint were variable_a > variable_b > variable_c, but variable_[a-c] shared the same pbounds of (0.01, 1), the probes for the variables would be drawn from the following sub-ranges to ensure the optimization function does not return a spiked score: variable_a: (.8, .9), variable_b: (.5, .8), variable_c: (.01, .4).
    • to optimizer.maximize:
      • init_points = 25
      • n_iter = 100
      • acq = 'ucb'
      • alpha = 1e-1
      • kappa = 10
      • kappa_decay = 0.999
      • kappa_decay_delay = 0
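
As a point of reference, below is a minimal sketch of the setup described in the list above: the spiked-score constraint clause, the transformer and optimizer construction with the parameters listed, and probes drawn from constraint-satisfying sub-ranges. The helper names (constraints_violated, objective) and the cheap stand-in model surface are illustrative, not the real model.

import numpy as np
from bayes_opt import BayesianOptimization, SequentialDomainReductionTransformer

SPIKED_SCORE = -20_000  # sentinel returned when constraints are violated

def constraints_violated(variable_a, variable_b, variable_c):
    # The nuanced constraint described above: variable_a > variable_b > variable_c
    return not (variable_a > variable_b > variable_c)

def objective(variable_a, variable_b, variable_c):
    if constraints_violated(variable_a, variable_b, variable_c):
        return SPIKED_SCORE  # skip the expensive model entirely
    # ... the full model would run here; a cheap stand-in surface for illustration:
    return -((variable_a - 0.85) ** 2 + (variable_b - 0.6) ** 2 + (variable_c - 0.2) ** 2)

bounds_transformer = SequentialDomainReductionTransformer(
    gamma_osc=0.4, gamma_pan=1.0, eta=0.99
)

optimizer = BayesianOptimization(
    f=objective,
    pbounds={'variable_a': (0.01, 1), 'variable_b': (0.01, 1), 'variable_c': (0.01, 1)},
    random_state=np.random.RandomState(100),
    bounds_transformer=bounds_transformer,
)

# Probes hard-wired into sub-ranges that always satisfy a > b > c,
# so the optimizer sees non-spiked scores during exploration.
rng = np.random.default_rng(0)
for _ in range(5):
    optimizer.probe(params={'variable_a': rng.uniform(0.8, 0.9),
                            'variable_b': rng.uniform(0.5, 0.8),
                            'variable_c': rng.uniform(0.01, 0.4)},
                    lazy=True)

optimizer.maximize(init_points=25, n_iter=100, acq='ucb',
                   alpha=1e-1, kappa=10, kappa_decay=0.999, kappa_decay_delay=0)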

Error from original model

Once the error is thrown, I can look at bounds_transformer.bounds and find where the lower bounds exceed the upper bounds ([any([i[0] > i[1] for i in j]) for j in bounds_transformer.bounds]). This is an example of where those bounds are exceeded:

bounds_transformer.bounds[4]
array([[5.00000000e-02, 1.61679624e-01],
       [1.94822450e-01, 4.44400120e-01],
       [2.03126994e-01, 4.50000000e-01],
       [1.50000000e-01, 9.17096386e-02], # HERE lower bound exceeds upper bound
       [2.87093063e-01, 8.39717864e-01],
       [2.50000000e-01, 7.32347686e-01],
       [1.82423989e+01, 5.43952019e+01],
       [9.27935306e-02, 3.01482727e-01],
       [2.42864682e-01, 4.50000000e-01],
       [2.43336354e-01, 4.50000000e-01],
       [1.50000000e-01, 2.00541524e-01],
       [2.50000000e-01, 7.11258368e-01],
       [3.89103366e-01, 8.50000000e-01],
       [0.00000000e+00, 2.57757652e+01]])
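
For instance, to locate the offending (update, parameter) pairs programmatically, here is a small diagnostic helper (assuming every entry of bounds_transformer.bounds has the same shape):

import numpy as np

bounds_history = np.asarray(bounds_transformer.bounds)  # shape: (n_updates, n_params, 2)
bad = np.argwhere(bounds_history[..., 0] > bounds_history[..., 1])
print(bad)  # e.g. [[4 3]]: update 4, parameter index 3, matching the array above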

Plotting the bound history for that particular variable (index 3):

import matplotlib.pyplot as plt

# Bound history for parameter index 3 across all updates
x_min_bound = [b[3][0] for b in bounds_transformer.bounds]
x_max_bound = [b[3][1] for b in bounds_transformer.bounds]
x = [p[3] for p in optimizer.space.params]  # loop variable renamed to avoid shadowing x

plt.plot(x_min_bound[1:], label='x lower bound')
plt.plot(x_max_bound[1:], label='x upper bound')
plt.plot(x[1:], label='x')
plt.legend()
plt.show()

yields the plot below … I have no idea why the upper and lower bounds are crossing over, but I assume that cannot be good!

[Figure_1: x lower bound, x upper bound, and x over iterations; the lower bound crosses above the upper bound]

Plots from when the model works properly

Here are some plots I used to track things when I lower the init and probe counts so that the model functions properly.

First, here is the final score from the Bayesian optimization process over iterations (some model inputs are shown across the top): [figure bayes_internal_20220616_135939]

Next, here are the individual parameters during that same run: [figure bayes_bounds_20220616_135939]

Finally (probably not useful), here are the same variables plotted as boxplots (the red star corresponds to the values that yielded the best score), along with the score on the far right; the whole model tries to minimize the difference between costs and revenues: [figure tariff_internal_results_20220616_135939]

Toy model that results in error

The actual model in question is quite large, but I can replicate the LBFGSB error with the toy model shown below (essentially by blowing up the kappa value and including 50+ init points in the maximize call).

import numpy as np
from bayes_opt import BayesianOptimization
from bayes_opt import SequentialDomainReductionTransformer
import matplotlib.pyplot as plt

def optfunc(**kwargs):
    # Modified Ackley-style surface over eight named parameters
    x = np.fromiter(kwargs.values(), dtype=float)
    arg1 = -0.2 * np.sqrt(0.5 * (x[0] ** 2 + x[1] ** 2))
    arg2 = 0.5 * (np.cos(2. * np.pi * x[0]) + np.cos(2. * np.pi * x[1]))
    arg3 = x[2] * x[3]
    arg4 = ((x[4] + x[5]) * x[6]) / x[7]
    return -1.0 * (-20. * arg1 - arg2 + 20. + np.e + arg3 - arg4)

pbounds = {'x': (-15, 5), 'y': (-5, 15), 'z': (-5, 65), 'a': (-5, 50),
           'b': (-5, 53), 'c': (-5, 45), 'd': (-75, 5), 'e': (-53, 5)}

bounds_transformer = SequentialDomainReductionTransformer()

mutating_optimizer = BayesianOptimization(
    f=optfunc,  # was f=ackley in the original snippet, which is undefined here
    pbounds=pbounds,
    verbose=0,
    random_state=1,
    bounds_transformer=bounds_transformer
)

mutating_optimizer.maximize(
    init_points=100,
    n_iter=10,
    kappa=1000, kappa_decay=.995, kappa_decay_delay=0
)
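
To inspect the flipped bounds in this toy reproduction, the same maximize call can be wrapped so the transformer's bound history is dumped on failure (the same post-mortem check used on the real model above):

try:
    mutating_optimizer.maximize(
        init_points=100,
        n_iter=10,
        kappa=1000, kappa_decay=.995, kappa_decay_delay=0
    )
except ValueError as exc:
    print(exc)
    history = np.asarray(bounds_transformer.bounds)
    print(np.argwhere(history[..., 0] > history[..., 1]))  # flipped (update, param) pairs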

Error message

StopIteration                             Traceback (most recent call last)
File ~\.conda\envs\la100es_tariffs\lib\site-packages\bayes_opt\bayesian_optimization.py:179, in BayesianOptimization.maximize(self, init_points, n_iter, acq, kappa, kappa_decay, kappa_decay_delay, xi, **gp_params)
    178 try:
--> 179     x_probe = next(self._queue)
    180 except StopIteration:

File ~\.conda\envs\la100es_tariffs\lib\site-packages\bayes_opt\bayesian_optimization.py:25, in Queue.__next__(self)
     24 if self.empty:
---> 25     raise StopIteration("Queue is empty, no more objects to retrieve.")
     26 obj = self._queue[0]

StopIteration: Queue is empty, no more objects to retrieve.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Input In [17], in <cell line: 1>()
----> 1 mutating_optimizer.maximize(
      2     init_points=100,
      3     n_iter=10,
      4     kappa = 1000, kappa_decay = .995, kappa_decay_delay = 0
      5 )

File ~\.conda\envs\la100es_tariffs\lib\site-packages\bayes_opt\bayesian_optimization.py:182, in BayesianOptimization.maximize(self, init_points, n_iter, acq, kappa, kappa_decay, kappa_decay_delay, xi, **gp_params)
    180 except StopIteration:
    181     util.update_params()
--> 182     x_probe = self.suggest(util)
    183     iteration += 1
    185 self.probe(x_probe, lazy=False)

File ~\.conda\envs\la100es_tariffs\lib\site-packages\bayes_opt\bayesian_optimization.py:131, in BayesianOptimization.suggest(self, utility_function)
    128     self._gp.fit(self._space.params, self._space.target)
    130 # Finding argmax of the acquisition function.
--> 131 suggestion = acq_max(
    132     ac=utility_function.utility,
    133     gp=self._gp,
    134     y_max=self._space.target.max(),
    135     bounds=self._space.bounds,
    136     random_state=self._random_state
    137 )
    139 return self._space.array_to_params(suggestion)

File ~\.conda\envs\la100es_tariffs\lib\site-packages\bayes_opt\util.py:55, in acq_max(ac, gp, y_max, bounds, random_state, n_warmup, n_iter)
     51 x_seeds = random_state.uniform(bounds[:, 0], bounds[:, 1],
     52                                size=(n_iter, bounds.shape[0]))
     53 for x_try in x_seeds:
     54     # Find the minimum of minus the acquisition function
---> 55     res = minimize(lambda x: -ac(x.reshape(1, -1), gp=gp, y_max=y_max),
     56                    x_try.reshape(1, -1),
     57                    bounds=bounds,
     58                    method="L-BFGS-B")
     60     # See if success
     61     if not res.success:

File ~\.conda\envs\la100es_tariffs\lib\site-packages\scipy\optimize\_minimize.py:623, in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
    620     return _minimize_newtoncg(fun, x0, args, jac, hess, hessp, callback,
    621                               **options)
    622 elif meth == 'l-bfgs-b':
--> 623     return _minimize_lbfgsb(fun, x0, args, jac, bounds,
    624                             callback=callback, **options)
    625 elif meth == 'tnc':
    626     return _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,
    627                          **options)

File ~\.conda\envs\la100es_tariffs\lib\site-packages\scipy\optimize\lbfgsb.py:294, in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, finite_diff_rel_step, **unknown_options)
    292 # check bounds
    293 if (new_bounds[0] > new_bounds[1]).any():
--> 294     raise ValueError("LBFGSB - one of the lower bounds is greater than an upper bound.")
    296 # initial vector must lie within the bounds. Otherwise ScalarFunction and
    297 # approx_derivative will cause problems
    298 x0 = np.clip(x0, new_bounds[0], new_bounds[1])

ValueError: LBFGSB - one of the lower bounds is greater than an upper bound.


Top GitHub Comments

osullivryan commented, Jun 26, 2022

Okay, I found a couple of problems with my bounds transformer… it’s a great example of growing as a programmer! 😆

  • Bounds were being mutated during the init_points (exploration) phase. This is wrong, as the queue is already created with the initial bounds. This is why a point can appear to be breaking the bounds: that point was already chosen before the maximizing starts! This became very obvious when @MasonBowen used 100 init_points, but not with my test functions, as I only used two. I have fixed the jupyter notebook to show the need for this shift.
  • The problem with the bounds flipping is addressed by a sort within the update. I think this is a good thing to put in place (a sketch of the idea appears after this list).
  • I like the idea of a minimum window as an input to the SequentialDomain transformer.
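
For illustration only, the two ideas above can be sketched as a small repair step inside the bound update. This is not the PR's actual code, and repair_bounds/min_window are made-up names:

import numpy as np

def repair_bounds(new_bounds, min_window=1e-3):
    # Sort each (lower, upper) pair so a flipped pair is swapped back into order.
    new_bounds = np.sort(new_bounds, axis=1)
    # Enforce a minimum window so the two bounds cannot collapse onto each other.
    too_narrow = (new_bounds[:, 1] - new_bounds[:, 0]) < min_window
    centers = new_bounds[too_narrow].mean(axis=1)
    new_bounds[too_narrow, 0] = centers - min_window / 2
    new_bounds[too_narrow, 1] = centers + min_window / 2
    return new_bounds

# The flipped row from the report above gets swapped back:
print(repair_bounds(np.array([[1.50000000e-01, 9.17096386e-02]])))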

Given all of that I have opened a PR for these changes. I will be opening another one in a week or so with improvements to the SequentialDomain code. I don’t like how I wrote it at all.

https://github.com/fmfn/BayesianOptimization/pull/332

osullivryan commented, Jun 22, 2022

Hello @MasonBowen, sorry you’re seeing this bug with the domain transformer.

@bwheelz36, I think you’re right - the domain is flipping, and we should probably swap the bounds and add a decay so they don’t converge to the same value. There should also be a warning about what is happening.

I could probably take a look at it this weekend!
