AOT compiled function inside JIT function fails.
I am trying to use Numba to speed up a project, but I have hit an issue trying to use an AOT-compiled function inside a JIT-compiled function. Is this expected to be supported?
My reason for wanting to do this is that I have a family of physical functions that are very slow to compile and that are pretty much set in stone, with a caller function that is fast to compile and likely to change, which made me want to compile the first set AOT and the caller JIT.
I’ve made a minimal working example with the following structure:
test
-- __init__.py
-- foo_src.py
-- test.py
foo_src.py contains:
from numba import jit
from numba.pycc import CC

cc = CC('foo')

@jit(nopython=True)
@cc.export('bar', 'i8(i8)')
def bar(x):
    return x * 4

if __name__ == "__main__":
    cc.compile()
test.py contains:
from numba import jit
import foo

@jit(nopython=True)
def baz(x):
    y = 0
    for i in range(x):
        y += foo.bar(i)
    return y

if __name__ == "__main__":
    print(baz(10))
The error message I get is:
Unknown attribute 'bar' of type Module(<module 'foo' from 'D:\\pythontest\\foo.cp37-win_amd64.pyd'>)
I’ve checked that it works fine with the same functions if I make both JIT or both AOT, but the combination seems to fail.
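For reference, a minimal sketch of the all-JIT variant described above (both functions decorated with @jit(nopython=True), the combination that works); this is a reconstruction from the example, not code taken from the original issue:

from numba import jit

# "Both JIT" variant: with both functions compiled by Numba's JIT,
# baz can call bar directly in nopython mode.
@jit(nopython=True)
def bar(x):
    return x * 4

@jit(nopython=True)
def baz(x):
    y = 0
    for i in range(x):
        y += bar(i)
    return y

if __name__ == "__main__":
    print(baz(10))  # 180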
Fundamentally, a Numba AOT library is a Python module compiled via the Python C-extension mechanism. The functions exposed in the AOT library are functions that can be called from the Python interpreter. For each of these functions, Numba’s AOT compiler has generated compiled-in call wrappers to convert from the Python representation of objects to the Numba representation, and then more code to deal with converting to Numba’s calling convention, at which point something like the underlying function that would be compiled with @njit can be called. There’s no technical reason why the Numba JIT function that is being called from the “Python” function couldn’t be exported in the DSO; however, to call it via that symbol, the Numba calling convention would have to be used (it’s quite involved, as Numba has its own state representation for things like exceptions, and the data model of all the arguments is “flattened”, e.g. a NumPy array representation is something like 6 separate arguments).
A function that is “cached” in the Numba cache is essentially the result of a reduce() call on a CompileResult: https://github.com/numba/numba/blob/1f1cee2d06d56633874f8c3a7df24245c6f26bd1/numba/core/compiler.py#L193-L207. Most of the size of the cached object will be the serialized library, which is basically just storing the ELF bytes of the function. Numba’s cache isn’t really designed to be distributed: it relies on pickle, which is not intended for long-term storage, and it has no specification or versioning, so it isn’t really a viable target for AOT compilation. There’s also the issue that JIT-cached functions are going to contain instructions specific to the CPU on which they were JIT compiled, which means they aren’t necessarily portable for the purposes of redistribution either.
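As an aside (a hedged illustration, not part of the comment above): the on-disk cache being discussed is the one populated by passing cache=True to the JIT decorator, for example:

from numba import jit

# Enabling Numba's on-disk cache: the serialized compile result described
# above is written to disk next to the source file on the first call, so
# later runs on the same machine skip recompilation. The exact file layout
# is an implementation detail.
@jit(nopython=True, cache=True)
def quadruple(x):
    return x * 4

quadruple(3)  # first call compiles and writes the cache entry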
Not necessarily; AOT-compiled code targeting a basic instruction set for a given architecture should be callable from JIT functions. This is similar to e.g. using a ctypes binding to a system library and calling that from a JIT function. IIRC, the reason calling AOT-compiled code from JIT-compiled functions doesn’t currently work is simply that the AOT module wraps the compiled functions in a “CPython wrapper” which permits calling them directly from Python, but prevents them from being called from JIT code.
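To illustrate the ctypes route mentioned above, here is a hedged sketch assuming a Unix-like system where find_library("m") resolves the C math library; Numba’s nopython mode can call ctypes function pointers that have argtypes and restype declared:

import ctypes
import ctypes.util

from numba import jit

# Bind cos() from the system math library via ctypes. On Windows,
# find_library("m") returns None, so this sketch assumes a Unix-like OS.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
c_cos = libm.cos
c_cos.argtypes = (ctypes.c_double,)
c_cos.restype = ctypes.c_double

@jit(nopython=True)
def use_c_cos(x):
    # The ctypes function object is usable directly inside nopython code.
    return c_cos(x)

print(use_c_cos(0.0))  # 1.0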