Function accepting njitted functions as arguments is slow
See original GitHub issue.

- I am using the latest released version of Numba (the most recent release is visible in the change log: https://github.com/numba/numba/blob/master/CHANGE_LOG).
- I have included below a minimal working reproducer (if you are unsure how to write one, see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).
I was trying numba 0.38 and the new support for jitted functions as arguments with this code snippet:
    # coding: utf-8
    from scipy.optimize import newton
    from numba import njit

    @njit
    def func(x):
        return x**3 - 1

    @njit
    def fprime(x):
        return 3 * x**2

    @njit
    def njit_newton(func, x0, fprime):
        for _ in range(50):
            fder = fprime(x0)
            fval = func(x0)
            newton_step = fval / fder
            x = x0 - newton_step
            if abs(x - x0) < 1.48e-8:
                return x
            x0 = x
    get_ipython().run_line_magic('timeit', 'newton(func.py_func, 1.5, fprime=fprime.py_func)')
    get_ipython().run_line_magic('timeit', 'newton(func, 1.5, fprime=fprime)')
    get_ipython().run_line_magic('timeit', 'njit_newton.py_func(func, 1.5, fprime=fprime)')
    get_ipython().run_line_magic('timeit', 'njit_newton(func, 1.5, fprime=fprime)')
And I found it surprising that njit_newton is the slowest of all, while njit_newton.py_func is the fastest (the four timings below correspond to the four %timeit calls above, in order):
$ ipython test_perf.py
4.76 µs ± 8.52 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
4.14 µs ± 30.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
3.58 µs ± 26 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
20 µs ± 85.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
(Inspiration: https://github.com/scipy/scipy/blob/607a21e07dad234f8e63fcf03b7994137a3ccd5b/scipy/optimize/zeros.py#L164-L182)
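One way to see where the time goes is to measure the bare cost of calling a jitted function from the interpreter. The following is a minimal sketch, not part of the original report (identity is a hypothetical name), under the assumption that per-call dispatch/unboxing overhead dominates the short timings above:

    from numba import njit

    @njit
    def identity(x):
        return x

    identity(1.5)  # call once so compilation happens outside the timing

    # In IPython:
    # %timeit identity(1.5)
    # The per-call time measured here is essentially pure Python<->Numba
    # dispatch overhead; if it is a sizable fraction of the ~4-20 µs above,
    # the benchmark is mostly timing the call boundary, not the solver.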
Issue Analytics
- Created 5 years ago
- Reactions: 1
- Comments: 21 (15 by maintainers)
I can confirm that this issue exists. However, as mentioned above, it does in fact seem to be caused by the cost of calling Numba-jitted code from Python.

The difference in performance when comparing the foo functions is large; however, since timeit is called from the Python context, these timings are largely affected by Numba invocation costs. The difference in performance when comparing the bar functions is minimal, because most of the time is now actually spent in the function rather than in interfacing between Numba and Python. For reference, if the functions do any real work, the differences disappear (and strangely reverse, which I cannot explain).
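To illustrate that last point, here is a hedged sketch (not code from the thread; heavy_func and heavy_fprime are hypothetical names): giving the callbacks enough work per call makes the interfacing cost negligible, so the jitted and pure-Python drivers should land much closer together on a Numba version that supports passing jitted functions as arguments.

    from numba import njit

    @njit
    def heavy_func(x):
        # deliberately do real per-call work so dispatch overhead becomes
        # a small fraction of the total time
        acc = 0.0
        for i in range(1000):
            acc += (x + i * 1e-6) ** 3 - 1
        return acc / 1000

    @njit
    def heavy_fprime(x):
        # derivative of the averaged residual above
        acc = 0.0
        for i in range(1000):
            acc += 3 * (x + i * 1e-6) ** 2
        return acc / 1000

    # Reusing njit_newton from the reproducer above:
    # %timeit njit_newton.py_func(heavy_func, 1.5, heavy_fprime)
    # %timeit njit_newton(heavy_func, 1.5, heavy_fprime)
    # With the function bodies dominating, the two timings converge,
    # matching the observation that the differences disappear once the
    # functions do real work.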