Generic support for JIT compilation with custom NumPy ops
See original GitHub issue

It would be great to be able to jit functions that make use of custom CPU operations, i.e., operations implemented with NumPy arrays. This would be a really valuable extension point for integrating JAX with existing code and algorithms, and would possibly solve the final remaining use cases for autograd.
Right now, you can use custom CPU operations if you don't jit, but that adds a very large amount of dispatch overhead.
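To illustrate the eager path being described, here is a minimal sketch (the custom op is an arbitrary NumPy function invented for illustration, not one from the issue): without jit, a concrete JAX array converts to a NumPy array and the custom op runs, but under jit the function receives an abstract tracer and the conversion fails.

```python
import numpy as np
import jax
import jax.numpy as jnp

def custom_np_op(x):
    # Plain-NumPy implementation; works on concrete arrays because a
    # concrete JAX array can be converted via np.asarray.
    return np.asarray(x) * 2 + 1

x = jnp.arange(4.0)
y = custom_np_op(x)  # works eagerly, paying a host round-trip per call

# Under jit the function sees a tracer, and converting a tracer to a
# NumPy array raises an error, so this fails:
try:
    jax.jit(custom_np_op)(x)
except Exception as e:
    print("jit failed:", type(e).__name__)
```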
My understanding is that this could be made possible by using XLA's CustomCall support.
Issue Analytics
- Created: 4 years ago
- Reactions: 3
- Comments: 5 (4 by maintainers)
I think we can consider this fixed by the new (experimental) host_callback.call. For example, calling a plain-NumPy print function from inside a jitted function prints:

inside myprint: <class 'numpy.ndarray'> [0 1 2 3 4 5 6 7 8 9]
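The code for this example was lost in the page scrape; the following is a sketch of what it likely resembled (the wrapper name f and the use of result_shape are assumptions; myprint matches the output line shown above).

```python
import jax
import jax.numpy as jnp
from jax.experimental import host_callback as hcb

def myprint(x):
    # Runs on the host: x arrives as a plain NumPy array.
    print("inside myprint:", type(x), x)
    return x

@jax.jit
def f(x):
    # result_shape tells JAX the shape/dtype of the host call's result,
    # so tracing can proceed without running myprint.
    return hcb.call(myprint, x, result_shape=x)

f(jnp.arange(10))
```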
Please give this a try and file a new issue CCing @gnecula if you run into problems!
Bump on this.

We would like to be able to call Python code the way tf.py_func works. I understand that type/shape inference is one of the blockers here, but having some kind of "manual" typing would work for us. So something like
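The snippet that followed "So something like" was lost in the scrape. As a purely hypothetical sketch of what "manual" typing could mean (none of these names are a real JAX or TensorFlow API; the caller declares the output shape and dtype by hand so shape inference never needs to run the Python function):

```python
import numpy as np

def manual_py_func(fn, out_shape, out_dtype):
    # Hypothetical wrapper: the caller supplies the output signature.
    out_shape = tuple(out_shape)
    out_dtype = np.dtype(out_dtype)

    def wrapped(x):
        out = np.asarray(fn(np.asarray(x)))
        # Enforce the declared signature at runtime.
        assert out.shape == out_shape and out.dtype == out_dtype
        return out

    # What a tracer could consult instead of executing fn.
    wrapped.abstract_eval = lambda: (out_shape, out_dtype)
    return wrapped

double = manual_py_func(lambda a: a * 2.0, out_shape=(3,), out_dtype=np.float64)
print(double(np.array([1.0, 2.0, 3.0])))  # [2. 4. 6.]
```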