max_iter limit argument inconsistencies between solvers.
This question likely stems from my poor understanding of sklearn, and apologies if this has been answered already, but I was wondering: why can the `max_iter` argument be set to `-1` to remove the iteration limit for some estimators, such as `SVC()`, while other estimators, such as `SGDClassifier()`, require a hard limit and otherwise raise `ValueError: max_iter must be > zero. Got -1.000000`?
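The mismatch can be reproduced directly. This is a minimal sketch, assuming a recent scikit-learn in which `SGDClassifier` validates `max_iter` at fit time (the toy dataset is just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=20, random_state=0)

# SVC treats max_iter=-1 as "no limit"; fitting succeeds.
SVC(max_iter=-1).fit(X, y)

# SGDClassifier rejects the same value with a ValueError at fit time.
err = None
try:
    SGDClassifier(max_iter=-1).fit(X, y)
except ValueError as exc:
    err = exc

print("SGDClassifier rejected max_iter=-1:", err)
```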
Issue Analytics
- Created: 3 years ago
- Comments: 13 (8 by maintainers)
Are we confident SGD will always converge within the given tolerance? If not, this will produce an infinite loop.
+1 to support `max_iter=-1`, but I'm not so sure about making it a default.
Is this still an open issue? I'd like to chime in with the fact that I actually think estimators should have a pre-specified limit on `max_iter`. One example of why is the `sklearn.svm.SVC` object, which, when instantiated with `max_iter=-1`, can get stuck if it cannot converge. This makes it difficult to use in Jupyter notebooks, where sending an interrupt to the kernel isn't enough to regain control.
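A workaround for the notebook hang is to set a finite cap explicitly. This is a sketch, assuming scikit-learn's documented behavior of emitting a `ConvergenceWarning` and setting `fit_status_` to 1 when the solver stops at the cap; the cap value here is deliberately tiny just to trigger early termination:

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=50, random_state=0)

# A finite max_iter makes libsvm stop early instead of hanging;
# the truncation is reported as a ConvergenceWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    clf = SVC(max_iter=1).fit(X, y)

hit_cap = any(w.category.__name__ == "ConvergenceWarning" for w in caught)
print("stopped early:", hit_cap)
```

In practice one would pick a generous cap (e.g. tens of thousands of iterations) so that convergence is the normal outcome and the cap only guards against the pathological hang.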