Support __cuda_array_interface__ on GPU
https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html
It would not be hard to make DeviceArray implement this interface on GPU. It would be slightly harder to support wrapping a DeviceArray around an existing CUDA array, but not that hard.
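For concreteness, here is a minimal sketch of what the producer side of the protocol looks like, assuming a hypothetical wrapper class (`DeviceArrayLike`, `device_ptr`, etc. are illustrative names, not JAX's actual implementation):

```python
import numpy as np

class DeviceArrayLike:
    """Hypothetical wrapper around an existing CUDA allocation (illustrative only)."""

    def __init__(self, device_ptr, shape, dtype):
        self._ptr = device_ptr            # raw device pointer as an integer
        self._shape = tuple(shape)
        self._dtype = np.dtype(dtype)

    @property
    def __cuda_array_interface__(self):
        # Version 2 of the protocol, as documented by Numba.
        return {
            "shape": self._shape,
            "typestr": self._dtype.str,   # e.g. '<f4' for little-endian float32
            "data": (self._ptr, False),   # (device pointer, read-only flag)
            "version": 2,
            "strides": None,              # None means C-contiguous
        }

# Any consumer of the protocol can then wrap the memory without copying, e.g.
#   cupy.asarray(DeviceArrayLike(ptr, (3, 4), "float32"))
#   numba.cuda.as_cuda_array(DeviceArrayLike(ptr, (3, 4), "float32"))
```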
Issue Analytics
- State:
- Created 4 years ago
- Reactions:15
- Comments:12 (6 by maintainers)
Thanks John! Yeah we just finished a GPU Hackathon, and a few of our teams evaluating JAX asked us why JAX can’t work with other libraries like CuPy and PyTorch bidirectionally. It’d be very useful, say, to do autograd in JAX, postprocess in CuPy, then bring it back to JAX.
Also: I haven’t tried this, but since CuPy supports both __cuda_array_interface__ and DLPack, you can most likely “launder” an array via CuPy into JAX: hand the foreign array to CuPy via __cuda_array_interface__, then pass it from CuPy to JAX via DLPack. (Obviously this isn’t ideal, but it might unblock you.)
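A rough sketch of that laundering path (untested; it assumes the older-style DLPack APIs `cupy.ndarray.toDlpack()` and `jax.dlpack.from_dlpack()`, and uses Numba only as an example producer):

```python
import numpy as np
import cupy
import jax.dlpack
from numba import cuda

# A device array produced by some other library (Numba here; PyTorch etc. also
# work, since they expose __cuda_array_interface__ as well).
numba_arr = cuda.to_device(np.arange(3, dtype=np.float32))

# Step 1: CuPy wraps the foreign array zero-copy via __cuda_array_interface__.
cupy_arr = cupy.asarray(numba_arr)

# Step 2: export a DLPack capsule from CuPy and import it into JAX.
jax_arr = jax.dlpack.from_dlpack(cupy_arr.toDlpack())

# jax_arr is now usable from JAX without copying the data off the device.
```

The exact function names vary across CuPy and JAX releases (newer versions prefer `cupy.from_dlpack` and arrays implementing `__dlpack__`), so treat this as the shape of the workaround rather than a fixed recipe.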