The role of `cupy_backends.cuda` namespace
See original GitHub issue.

`cupy_backends.cuda` is a useful namespace. Wouldn't it be a good idea to cut out features that could be used in other libraries? #3584 is an example implementation of this.

How about including the following items in `cupy_backends.cuda`?
- Objects with a life cycle that hide the create/destroy method calls, e.g. Stream, Event, and Memory.
- Thin Pythonic wrappers, e.g. the compiler and profiler modules.
I’d like to get opinions.
This PR relates to #3385.
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 1
- Comments: 8 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’d be extremely interested in splitting this off into a separate library that we can align the CUDA-Python ecosystem around. I can commit from the RAPIDS side that we’d be interested in helping to maintain this library and contributing to it as well as adopting it across our libraries.
This may also be something NVIDIA would be interested in maintaining directly to align with CUDA releases. I’m happy to shepherd those conversations into NVIDIA internally, but can’t guarantee that anyone will engage here publicly other than myself.
A few notes, though, based on the current implementation:

I would suggest splitting the `cupy_backends.cuda` library into multiple libraries for the different CUDA modules (driver API, runtime API, cuBLAS, cuRAND, cuDNN, etc.). This would allow people to rely on only the specific pieces they need. For example, Numba has historically used only the driver API, so it wouldn't need all of the other pieces provided by the current `cupy_backends.cuda`.

Given that no work has been done and we're still in the design and discussion phase, it's definitely something that we can consider as a community. In general, if we build something that no one wants to adopt, then it doesn't really solve any problems 😅.
To be clear, I'd love to unite everyone in using the same high-level wrappers around the CUDA APIs and data types, e.g. a CUDA Stream, CUDA Event, CUDA device memory pointer, etc. It would be great if we all shared the same Cython and Python interfaces, making interoperation as seamless as possible. Obviously there's a non-trivial amount of work to do to get to that point, but we need to start somewhere 😄.
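One way the shared-wrapper idea could pay off is pure duck typing: if every library's stream object exposes the same minimal interface, any of them can be passed to any other library's launch code. The sketch below is illustrative only; the class names and the `ptr` attribute are assumptions for the example, not an interface from the thread:

```python
# Hypothetical sketch: two libraries each wrap a raw cudaStream_t, but agree
# on a minimal shared interface (a `ptr` attribute holding the raw handle),
# so code that launches work accepts either object interchangeably.

class LibAStream:
    """Stream wrapper as library A might define it (hypothetical)."""
    def __init__(self, ptr):
        self.ptr = ptr  # raw cudaStream_t handle as an integer


class LibBStream:
    """Stream wrapper as library B might define it (hypothetical)."""
    def __init__(self, ptr):
        self.ptr = ptr


def launch_on(stream):
    """Accepts any object exposing the shared `ptr` interface."""
    # A real implementation would hand stream.ptr to the driver/runtime.
    return stream.ptr


assert launch_on(LibAStream(0x10)) == launch_on(LibBStream(0x10))
```

With a shared Cython-level base type, the same interoperation could work without Python attribute lookups, but the duck-typed version shows the minimum contract involved.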