
ENH: stopping rule for `dual_annealing` function


Is your feature request related to a problem? Please describe.

Hi!

I am trying to incorporate dual_annealing into my estimation routine. To learn how to use the function and what to expect of it, I tried to test it on a simple quadratic function.

from scipy.optimize import dual_annealing
import numpy as np

def func(x):
    # Simple convex objective with a unique minimum at x = 0.
    # Return a scalar, as dual_annealing expects.
    return np.sum(x ** 2)

lb = [-10]
ub = [10]

ret = dual_annealing(func, bounds=list(zip(lb, ub)), maxiter=10000000)

Since it is a straightforward function with a unique minimum at x = 0, I expected the optimiser to finish very quickly. It did at first (with the default maxiter), but it also returned the message ‘Maximum number of iterations reached’. I then increased maxiter as shown above, curious when it would stop. This time the optimiser ran for close to 5,000,000 iterations before hitting the maximum function evaluation threshold.
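For reference, the stopping reason and the amount of work done are visible on the result object that dual_annealing returns (a minimal sketch; message, nit and nfev are standard fields of scipy’s OptimizeResult):

print(ret.message)        # e.g. the ‘maximum number of iterations’ message above
print(ret.x, ret.fun)     # best point found and its objective value
print(ret.nit, ret.nfev)  # outer iterations and objective evaluations used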

So, currently, the stopping rule of the optimiser seems to depend not on the “goodness” of the solution but on the number of iterations.

Describe the solution you’d like.

I would have expected the code to stop once the solution got close enough to 0, within some tolerance level that the user can set when calling the dual_annealing function. But I did not spot any such parameter in the manual, and the example above suggests this is not the optimiser’s behaviour.

I read the referenced paper, which seems to be the basis for the dual_annealing function: Xiang Y, Gubian S, Suomela B, Hoeng J. Generalized Simulated Annealing for Efficient Global Optimization: the GenSA Package for R. In the R package the authors develop, it is possible to set a tolerance level.

Or, if not a tolerance level, then some other kind of stopping rule that would allow the optimiser to exit once it has good confidence in the solution.
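For what it’s worth, something close to this can already be emulated through the existing callback parameter of dual_annealing: the callback is invoked each time a new best minimum is found, and returning True from it stops the search. A minimal sketch, assuming (as in the example above) that the optimal value 0 is known in advance and that tol is a user-chosen threshold:

from scipy.optimize import dual_annealing
import numpy as np

tol = 1e-8  # user-chosen tolerance on the objective value

def func(x):
    return np.sum(x ** 2)

def good_enough(x, f, context):
    # Called whenever a new best minimum is found;
    # returning True makes dual_annealing stop early.
    return f <= tol

ret = dual_annealing(func, bounds=[(-10, 10)], maxiter=10000000,
                     callback=good_enough)

The obvious limitation is that this only works when the target value is known in advance, which is rarely the case in a real estimation problem.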

Describe alternatives you’ve considered.

Well, apart from using a different optimiser, the only other alternative I see is to set a lower maxiter and hope that it is enough to reach a solution. Or to run it with several values of maxiter and examine how the solution and function values change between the runs.

But I am not convinced these are good alternatives. They are easy to apply to the toy function in the example, but my actual function takes about 30 seconds per evaluation, so 10,000 evaluations already amount to 83 hours. I don’t want to assume that 10,000 evaluations are enough to reach the true solution, but I also don’t want to wait for the solution longer than necessary.
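As a rough illustration of the second alternative, one can sweep maxiter with a fixed seed and watch whether the returned minimum stabilises (a sketch only; the budgets are arbitrary):

for budget in (100, 1000, 10000):
    ret = dual_annealing(func, bounds=[(-10, 10)], maxiter=budget, seed=42)
    print(budget, ret.fun, ret.x)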

Additional context (e.g. screenshots, GIFs)

(screenshot from the original issue not preserved)

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
dschmitz89 commented, Mar 12, 2022

One stopping rule for dual_annealing could be to compare the points sampled by the annealer from which the local optimizations start. If no better solution was found within n iterations of the outer annealing process, the algorithm could terminate. CC @sgubianpm ?
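Something close to that rule can be approximated in user space today, although only at the level of objective evaluations rather than outer annealing iterations, since the callback appears to fire only when a new best minimum is found. A hedged sketch (StallStop and StallDetector are invented names): wrap the objective, track the best value seen, and abort via an exception once no improvement has occurred for max_stall evaluations.

import numpy as np
from scipy.optimize import dual_annealing

class StallStop(Exception):
    pass

class StallDetector:
    # Wraps an objective and raises StallStop after max_stall consecutive
    # evaluations without an improvement of at least min_delta.
    def __init__(self, fun, max_stall=10000, min_delta=1e-12):
        self.fun = fun
        self.max_stall = max_stall
        self.min_delta = min_delta
        self.best_f = np.inf
        self.best_x = None
        self.stalled = 0

    def __call__(self, x):
        f = self.fun(x)
        if f < self.best_f - self.min_delta:
            self.best_f, self.best_x = f, np.copy(x)
            self.stalled = 0
        else:
            self.stalled += 1
            if self.stalled >= self.max_stall:
                raise StallStop
        return f

wrapped = StallDetector(lambda x: np.sum(x ** 2), max_stall=10000)
try:
    ret = dual_annealing(wrapped, bounds=[(-10, 10)], maxiter=10000000)
    best_x, best_f = ret.x, ret.fun
except StallStop:
    best_x, best_f = wrapped.best_x, wrapped.best_f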

0 reactions
nurfatimaj commented, Dec 28, 2021

> The parameters that are provided only change the temperature profile and the acceptance criteria (basically how far we look around from a given solution).

Thanks! That was the part I wasn’t quite sure about; I thought maybe I had misunderstood some of the parameters.

I’d be very much interested in learning whether there are reasons why one should not use a tolerance. Otherwise, could you point me towards the simplest workaround?
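For context, these are the knobs in question, shown with their documented defaults; they shape the temperature schedule and the acceptance probability, but none of them is a convergence tolerance, so the callback-based stop sketched earlier remains the closest thing to a workaround:

ret = dual_annealing(
    func, bounds=[(-10, 10)],
    initial_temp=5230.0,       # starting temperature of the annealing schedule
    restart_temp_ratio=2e-05,  # reannealing triggers when T falls below this fraction of initial_temp
    visit=2.62,                # tail weight of the visiting distribution (how far candidate jumps reach)
    accept=-5.0,               # acceptance parameter; lower values make uphill moves rarer
)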
