Cythonize scalar function root finders
@person142 responded to #8354 that his preference was for a Cython optimize API.
This came up before: #7242.
As you can see I was against the idea. Handling all reasonable halting conditions for a scalar solver is already a PITA, and I think the problem gets much worse for a vectorized scalar solver. IMO it is better to provide a Cython API to a scalar solver and let users handle looping over it.
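To make that concrete, here is a rough sketch of what the user-side loop could look like, assuming a struct-based brentq along the lines of what #8431 proposes (roughly what later shipped as scipy.optimize.cython_optimize); the struct, callback, and bracket values below are placeholders, not taken from any PR:

```cython
# Sketch only: assumes a struct-based brentq roughly like
#   double brentq(callback_type f, double xa, double xb, void *args,
#                 double xtol, double rtol, int mitr, void *full_output)
# `noexcept` is Cython 3 syntax; drop it for Cython 0.29.x.
from scipy.optimize.cython_optimize cimport brentq

ctypedef struct my_params:
    double c0
    double c1

cdef double f(double x, void *args) noexcept:
    # Extra parameters travel through a user-defined C struct.
    cdef my_params *p = <my_params *> args
    return p.c0 * x * x - p.c1      # root of c0*x**2 - c1

def solve_many(double[::1] c1_vals):
    """The user owns the loop: one brentq call per problem."""
    cdef Py_ssize_t i
    cdef my_params p
    p.c0 = 1.0
    roots = []
    for i in range(c1_vals.shape[0]):
        # Assumes 0 < c1 < 100 so that [0, 10] brackets the root.
        p.c1 = c1_vals[i]
        roots.append(brentq(f, 0.0, 10.0, <void *> &p, 1e-9, 1e-9, 100, NULL))
    return roots
```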
#8357 vectorized only the Newton-type methods; #8431 lets users run a “tight loop” in C over Brent, Ridder, and the other root finders.
We decided it was okay to split up Newton:
Is it okay to split up Newton, Halley’s, and secant into different calls in cython_optimize?
I am personally in favor of this (I wish it had been done that way in optimize), but maybe others will feel more strongly that the APIs should match.
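For illustration only, the split could look something like the following .pxd-style declarations; the names and argument lists here are hypothetical, not the actual cython_optimize API. The point is that each method then asks only for the derivatives it actually needs.

```cython
# Hypothetical .pxd-style declarations, just to illustrate the split;
# not the real cython_optimize signatures.
ctypedef double (*func_type)(double, void *)

# Newton needs f and f'.
cdef double newton(func_type f, func_type fprime, double x0,
                   void *args, double tol, int maxiter) nogil

# Halley additionally needs f''.
cdef double halley(func_type f, func_type fprime, func_type fprime2,
                   double x0, void *args, double tol, int maxiter) nogil

# Secant needs no derivative, but two starting points.
cdef double secant(func_type f, double x0, double x1,
                   void *args, double tol, int maxiter) nogil
```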
We discussed which callbacks to support, and the fact that cython_optimize should be pure C so it can release the GIL, in this comment.
This commit has links to annotated HTML showing that zeros_struct and zeros_array are pure C (no Python interaction), so they can release the GIL and be called inside a Cython prange loop.
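A minimal sketch of why that matters, assuming the struct-based brentq above is declared nogil and the module is built with OpenMP: since neither the solver nor the callback touches Python objects, the loop body can run inside prange with the GIL released. All names here are placeholders.

```cython
# cython: boundscheck=False, wraparound=False
# Sketch only; requires OpenMP at build time for prange to parallelize.
import numpy as np
from cython.parallel cimport prange
from scipy.optimize.cython_optimize cimport brentq   # assumed nogil, struct-based

ctypedef struct my_params:
    double c1

cdef double f(double x, void *args) noexcept nogil:
    cdef my_params *p = <my_params *> args
    return x * x - p.c1                      # root at sqrt(c1)

cdef double solve_one(double c1) noexcept nogil:
    # A local struct keeps each thread's arguments private.
    cdef my_params p
    p.c1 = c1
    return brentq(f, 0.0, 10.0, <void *> &p, 1e-9, 1e-9, 100, NULL)

def solve_parallel(double[::1] c1_vals):
    """Solve one bracketed problem per element of c1_vals, in parallel."""
    cdef Py_ssize_t i, n = c1_vals.shape[0]
    cdef double[::1] roots = np.empty(n)
    for i in prange(n, nogil=True):
        roots[i] = solve_one(c1_vals[i])
    return np.asarray(roots)
```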
The Cython optimize API in #8431 was discussed in this SciPy-Dev post, and in this one too, where I specifically asked about callback signatures.
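For reference, the two callback styles that came up roughly correspond to typedefs like these (illustrative only; the exact signatures in #8431 may differ):

```cython
# zeros_struct style: extra parameters travel through an opaque pointer
# that the solver passes along but never inspects.
ctypedef double (*callback_type)(double, void *)

# zeros_array style: extra parameters travel as a C array of doubles
# (illustrative; the real array-based signature may differ).
ctypedef double (*callback_type_array)(int, double *)
```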
@mikofski did you mean to close this issue?
After reading through the whole thing again, my summary is:
(… brentq in particular.) It’s still hard to weigh the pros and cons of each approach, given the multiple use cases. We should try to write a summary document together, I think, because my head hurts …
If I were doing this today, I would implement only brentq and bisect. Back when I wrote these (in 2002?), I just threw everything in, just because.