DOC: optimize.minimize: recipe for avoiding duplicated work when calling `fun`, `jac`, and `hess` at the same `x`
Here is a simple optimization example (Poisson regression):
import numpy as np
from scipy.optimize import minimize

# X: n-by-p design matrix, y: observed counts, p: number of parameters
# (assumed to be defined already)

# loss function and gradient
def f(b):
    Xb = np.dot(X, b)
    exp_Xb = np.exp(Xb)
    loss = exp_Xb.sum() - np.dot(y, Xb)
    grad = np.dot(X.T, exp_Xb - y)
    return loss, grad

# hessian
def hess(b):
    return np.dot(X.T, np.exp(np.dot(X, b))[:, None] * X)

# optimize
result = minimize(f, np.zeros(p), jac=True, hess=hess, method='newton-cg')
This works, but since np.exp(np.dot(X, b)) is already computed inside f(b), it would be more efficient and more readable to be able to do:
# loss function, gradient and hessian
def f(b):
    Xb = np.dot(X, b)
    exp_Xb = np.exp(Xb)
    loss = exp_Xb.sum() - np.dot(y, Xb)
    grad = np.dot(X.T, exp_Xb - y)
    hess = np.dot(X.T, exp_Xb[:, None] * X)
    return loss, grad, hess

# optimize
result = minimize(f, np.zeros(p), jac=True, hess=True, method='newton-cg')
But this doesn’t seem to be supported; right now I get:
TypeError: 'bool' object is not callable
To be completely sincere, I think there is not much to be gained in terms of performance along these lines. And I think it gives the user the false impression that they are actually getting better performance by doing this.
The way to really get the best performance is something like the following:
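As a sketch of that idea for the Poisson example above: a small class that caches the shared term exp(X @ b) and hands minimize separate fun, jac and hess callables. The class name PoissonLoss and its exact layout are illustrative assumptions; the method name core_computations and the self.x = x caching line follow the later comment in this thread.

import numpy as np
from scipy.optimize import minimize

class PoissonLoss:
    """Poisson regression objective with the shared term exp(X @ b) cached."""

    def __init__(self, X, y):
        self.X = X
        self.y = y
        self.x = None        # parameter vector the cache was computed at
        self.exp_Xb = None   # cached exp(X @ b)

    def core_computations(self, x):
        # Recompute the shared quantity only when x actually changes,
        # and remember which x the cache belongs to (self.x = x).
        if self.x is None or not np.array_equal(x, self.x):
            self.exp_Xb = np.exp(np.dot(self.X, x))
            self.x = np.copy(x)
        return self.exp_Xb

    def fun(self, x):
        exp_Xb = self.core_computations(x)
        return exp_Xb.sum() - np.dot(self.y, np.dot(self.X, x))

    def grad(self, x):
        exp_Xb = self.core_computations(x)
        return np.dot(self.X.T, exp_Xb - self.y)

    def hess(self, x):
        exp_Xb = self.core_computations(x)
        return np.dot(self.X.T, exp_Xb[:, None] * self.X)

# usage, with X, y and p defined as in the example above:
# loss = PoissonLoss(X, y)
# result = minimize(loss.fun, np.zeros(p), jac=loss.grad, hess=loss.hess,
#                   method='newton-cg')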
But this would need to be done for your specific problem.
Yes, to be completely sincere, I don’t think the option jac=True is that great of an option either.

In summary, I think we would make the interface more complicated. It would give users the impression that they would get some improvement in performance, while I don’t think it will make much of a difference.
Maybe a better option would be to include a paragraph or two in the documentation explaining how to do the trick I just mentioned above.
@antonior92 quick question: core_computations in your example above should have self.x = x at the end, right?

I think this should go in the tutorial because it is relevant to many optimize functions. @AtsushiSakai you’ve written some tutorials for optimize. Would you be willing to submit a PR for this? I think we can merge it quite quickly.