Support arbitrary fixed-size integers
Feature request
LLVM supports arbitrary (but fixed) bit-size integers; for example, you can create a 1234-bit integer. Clang also supports arbitrary-size integers: just write using u1234 = unsigned _ExtInt(1234); to create a 1234-bit integer type u1234.
It would be great if Numba could use this LLVM/Clang feature for defining integers of any size, so that you could write something like dtype = numba.ext_int(1234); value = dtype(789) inside an @njit-ed function.
I understand that NumPy doesn't support int128 or anything larger than int64, but Numba could still use any-sized ints for non-NumPy containers such as list/set/dict.
For example, if I do a = []; a.append(numba.ext_int(1234)(789)), then Numba could convert it to the C++ equivalent std::vector<_ExtInt(1234)> a; a.push_back(789);. (By the way, am I right that you're converting Python code to C++? Or are you using C as the backend?)
Similarly, the dict d = {numba.ext_int(123)(456) : numba.ext_int(789)(100)} could become std::unordered_map<_ExtInt(123), _ExtInt(789)> d; d[456] = 100;, and likewise for set().
So, to conclude: for NumPy you can disallow _ExtInt(), but for list/dict/set you can allow it.
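To make the requested semantics concrete, here is a minimal pure-Python sketch (not Numba code) of what a hypothetical ext_int(bits) factory could mean: values are truncated modulo 2**bits, mirroring how LLVM's iN / Clang's _ExtInt(N) types wrap. The name ext_int and its behaviour here are my own assumptions for illustration, not an existing Numba API.

```python
def ext_int(bits):
    """Hypothetical model of a fixed-width unsigned integer constructor.

    Returns a callable that truncates any Python int to the low `bits`
    bits, the way an LLVM iN value would wrap.
    """
    mask = (1 << bits) - 1

    def make(value):
        # Keep only the low `bits` bits, like assignment to _ExtInt(bits).
        return value & mask

    return make


u8 = ext_int(8)
assert u8(300) == 44            # 300 mod 256

u1234 = ext_int(1234)
assert u1234((1 << 1234) + 5) == 5   # wraps at 2**1234
```

Python's arbitrary-precision ints make the width purely a truncation rule here; in a real Numba implementation the width would instead select an LLVM iN storage type.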
Implementing such arbitrary-sized integer types would, as a consequence, deliver the long-awaited int128 feature, of course not within NumPy arrays, but at least in list/dict/set.
Also, can you suggest a temporary workaround? I expect that Numba has some low-level support for creating arbitrary types, for example through @intrinsic, IRBuilder, or other ways of injecting LLVM IR. Could a Numba expert give a ready-made example of implementing _ExtInt() support within Numba's current capabilities, at least as an @intrinsic?
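One workaround that needs no new LLVM types at all is to carry a wide value as several 64-bit limbs, which Numba's existing types (e.g. a tuple of uint64) can already hold. The sketch below models 128-bit unsigned arithmetic as a (lo, hi) pair in pure Python; the helper names split_u128/join_u128/add_u128 are invented for this example, and inside an @njit function the same logic would be written with uint64 operands.

```python
MASK64 = (1 << 64) - 1  # low 64 bits


def split_u128(v):
    """Split an unsigned 128-bit value into (lo, hi) 64-bit limbs."""
    return (v & MASK64, (v >> 64) & MASK64)


def join_u128(lo, hi):
    """Recombine (lo, hi) limbs into a single Python int."""
    return (hi << 64) | lo


def add_u128(a, b):
    """Add two (lo, hi) pairs with wraparound at 2**128."""
    lo = a[0] + b[0]
    carry = lo >> 64                      # 1 if the low limbs overflowed
    hi = (a[1] + b[1] + carry) & MASK64   # high limbs absorb the carry
    return (lo & MASK64, hi)


x = split_u128(2**70 + 3)
y = split_u128(2**70 + 5)
assert join_u128(*add_u128(x, y)) == 2**71 + 8
```

This limb approach generalises to any fixed width that is a multiple of 64, at the cost of writing each arithmetic operation by hand.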
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 2
- Comments: 5 (3 by maintainers)

This is indeed what happens.
@intrinsics generate LLVM IR directly into the current block, so they cannot be callable functions; they are conceptually different because their purpose is different. @jitclass just uses the same technology as the rest of the compiler: its methods are essentially just @njit functions, and LLVM will likely just inline them. I'd encourage writing actual use cases and timing performance etc. before worrying about the impact of code generation. LLVM makes decisions about optimisation/inlining based on its analysis of the generated IR.
LLVM will likely just figure out it can be inlined. The x86_64 disassembly for foo shows that LLVM inlined everything to do with the jitclass and then unrolled the accumulator loop.
Perhaps consider using https://numba.discourse.group for discussion items like this, other users who do not follow the issue tracker may be interested. Thanks.
Yes, but the type system would still most likely need a new type to be registered for this "big int" so as to be able to describe the field in the jitclass.

I'm pretty sure the methods cannot be @intrinsics, but they can call @intrinsics.

Potentially, yes.
I think that is correct as of version 0.54.x.
Yes, this is possible, but at present it would likely be quite involved. It is also going to be limited by what LLVM can handle for a given architecture. For example, a udiv on the i1234 type above is not supported by LLVM version 11 on at least Linux x86_64.