cuda.jit does NOT preserve the original __doc__ and __module__ of the decorated function
See original GitHub issue

- [x] I am using the latest released version of Numba (most recent is visible in the change log: https://github.com/numba/numba/blob/master/CHANGE_LOG).
- [x] I have included below a minimal working reproducer (if you are unsure how to write one, see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).
I noticed this issue when using pdoc3 to generate documentation. pdoc3 works by extracting the `__doc__` attribute. Consider the following example:
```python
from numba import cuda, jit, njit

@jit
def add1(a: int, b: int) -> int:
    """Add two integers

    Args:
        a (int): one integer
        b (int): the other integer

    Returns:
        int: the sum
    """
    return a + b

@njit
def add2(a: int, b: int) -> int:
    """Add two integers

    Args:
        a (int): one integer
        b (int): the other integer

    Returns:
        int: the sum
    """
    return a + b

@cuda.jit
def add3(a: int, b: int) -> int:
    """Add two integers

    Args:
        a (int): one integer
        b (int): the other integer

    Returns:
        int: the sum
    """
    return a + b

print('add1\n', add1.__doc__)
print('add2\n', add2.__doc__)
print('add3\n', add3.__doc__)
```
Both `add1` and `add2` keep the `__doc__` of the original function. However, `add3` does not:
```
add1
 Add two integers

    Args:
        a (int): one integer
        b (int): the other integer

    Returns:
        int: the sum

add2
 Add two integers

    Args:
        a (int): one integer
        b (int): the other integer

    Returns:
        int: the sum

add3
 CUDA Kernel object. When called, the kernel object will specialize itself
 for the given arguments (if no suitable specialized version already exists)
 & compute capability, and launch on the device associated with the current
 context.

 Kernel objects are not to be constructed by the user, but instead are
 created using the :func:`numba.cuda.jit` decorator.
```
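Until this is fixed in Numba, a user-side workaround is to copy the metadata over after decorating, e.g. with `functools.update_wrapper`. The sketch below is hypothetical: it uses a stand-in wrapper class (`FakeKernel`) instead of a real CUDA kernel, since whether attribute assignment succeeds on the actual kernel object may depend on the Numba version.

```python
import functools

def with_metadata(decorator):
    """Apply `decorator`, then copy __doc__, __module__, __name__, etc.
    from the original function onto the decorated object.
    (A sketch; assumes the decorated object permits attribute assignment.)"""
    def apply(func):
        decorated = decorator(func)
        functools.update_wrapper(decorated, func)
        return decorated
    return apply

# Stand-in for a dispatcher-style decorator that replaces the function
# with an object carrying a generic docstring, as cuda.jit currently does.
class FakeKernel:
    """Generic kernel docstring."""
    def __init__(self, func):
        self.func = func

@with_metadata(FakeKernel)
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(add.__doc__)  # the original docstring, not the generic kernel one
```

The same `update_wrapper` call could in principle be applied directly to the object returned by `cuda.jit`, which is effectively what adding `functools.wraps` inside the decorator would do.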
Would it be quick, or at least possible, to fix this issue? Thank you.
Issue Analytics
- State:
- Created 3 years ago
- Comments:5 (2 by maintainers)
@kernc @sklam I want to work on this. From the description here it sounds like there is a need to add `functools.wraps` to the decorators, am I right?
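For closure-style decorators that return a wrapper function, `functools.wraps` is indeed the standard way to carry `__doc__` and `__module__` over; for a decorator like `cuda.jit` that returns an object, the equivalent is `functools.update_wrapper` on that object. A minimal sketch of the pattern, using a hypothetical `my_jit` decorator (not Numba's actual implementation):

```python
import functools

def my_jit(func):
    """Hypothetical decorator illustrating the proposed fix."""
    @functools.wraps(func)  # copies __doc__, __module__, __name__, __qualname__, ...
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@my_jit
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(add.__doc__)     # Add two integers.
print(add.__module__)  # the module that defines add, not the decorator's
```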
I think this is somewhat related to #5755