
The role of `cupy_backends.cuda` namespace


`cupy_backends.cuda` is a useful namespace. Wouldn't it be a good idea to split out the features that could be used by other libraries? #3584 is an example implementation of this.

How about including the following items in cupy_backends.cuda?

  • Objects with a life cycle that hide the create/destroy method calls, e.g. Stream/Event/Memory (see the sketch after this list).
  • Thin Pythonic wrappers, e.g. compiler/profiler.
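
For the first item, here is a minimal sketch of the pattern (not CuPy's actual implementation, which is written in Cython; the ctypes loading and the `DeviceMemory` name are illustrative assumptions): the object owns the raw handle, so callers never touch the create/destroy calls themselves.

```python
import ctypes

# Illustration only: the real backend is Cython, not ctypes. This sketch just
# shows the "object with a life cycle" idea against the CUDA runtime library.
_cudart = ctypes.CDLL("libcudart.so")  # library name/path varies by platform


class DeviceMemory:
    """Owns a device allocation: cudaMalloc on construction, cudaFree on release."""

    def __init__(self, nbytes):
        self._ptr = ctypes.c_void_p()
        status = _cudart.cudaMalloc(ctypes.byref(self._ptr), ctypes.c_size_t(nbytes))
        if status != 0:  # 0 == cudaSuccess
            raise MemoryError(f"cudaMalloc({nbytes}) failed with error {status}")
        self.nbytes = nbytes

    def free(self):
        # Release exactly once; safe to call repeatedly.
        if self._ptr:
            _cudart.cudaFree(self._ptr)
            self._ptr = None

    def __del__(self):
        self.free()
```

A consumer just writes `buf = DeviceMemory(1024)` and lets the object (or an explicit `buf.free()`) hide the cudaMalloc/cudaFree pair.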

I’d like to get opinions.

This PR relates to #3385.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 8 (6 by maintainers)

Top GitHub Comments

8 reactions
kkraus14 commented, Jul 13, 2020

I’d be extremely interested in splitting this off into a separate library that we can align the CUDA-Python ecosystem around. I can commit from the RAPIDS side that we’d be interested in helping to maintain this library and contributing to it as well as adopting it across our libraries.

This may also be something NVIDIA would be interested in maintaining directly to align with CUDA releases. I’m happy to shepherd those conversations into NVIDIA internally, but can’t guarantee that anyone will engage here publicly other than myself.

A few notes though based on the current implementations:

  • The HIP backend is likely out of scope for this new split-out library if NVIDIA and/or the RAPIDS teams were involved.
  • We'd want to split the currently single cupy_backends.cuda library into multiple libraries for the different CUDA modules (driver API, runtime API, cuBLAS, cuRAND, cuDNN, etc.). This would allow people to rely on only the specific pieces that they need. For example, Numba has historically only used the driver API, so it wouldn't need all of the other pieces provided by the current cupy_backends.cuda (a minimal illustration follows this list).
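
To make the second point concrete, here is a rough sketch (library names and error handling are simplified assumptions, and this is not existing CuPy code): a driver-API-only consumer in the spirit of Numba only needs the driver library itself, so a per-module split would let it depend on just that one wrapper package.

```python
import ctypes

# Illustration only: a driver-API-only consumer loads just the driver library
# (libcuda.so on Linux, nvcuda.dll on Windows) and never needs the runtime,
# cuBLAS, cuRAND, or cuDNN wrappers.
_libcuda = ctypes.CDLL("libcuda.so")

status = _libcuda.cuInit(0)  # CUresult; 0 == CUDA_SUCCESS
if status != 0:
    raise RuntimeError(f"cuInit failed with error {status}")

version = ctypes.c_int()
_libcuda.cuDriverGetVersion(ctypes.byref(version))
print("CUDA driver version:", version.value)
```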
5 reactions
kkraus14 commented, Jul 14, 2020

> @kkraus14 Just curious, is it possible to still make them stay in the same repo (much easier to maintain), but generate multiple packages in a release tag? Somewhat similar to multiple outputs in a conda recipe?

Given no work has been done and we’re still in the design and discussion phase, it’s definitely something that we can consider as a community. In general if we build something that no one wants to adopt then it doesn’t really solve any problems 😅.

> This is not our concern here. Even though some major players such as CuPy and Numba share common code like this, we all have our own backward compatibility to maintain. So Numba simply would not adopt the high-level Python wrappers, regardless of how thin they are, I am afraid, but they could easily adopt the low-level CUDA utilities that are not directly exposed to end users, which is supposedly one of the reasons why Keith is so strongly motivated. 🙂

To be clear, I’d love to unite everyone in using the same high-level wrappers around the CUDA APIs / data types, i.e. things like a CUDA Stream, CUDA Event, CUDA Device Memory Pointer, etc. It would be great if we were all able to share the same Cython and Python interfaces so that interoperation is as seamless as possible. Obviously there’s a non-trivial amount of work to do to get to that point, but we need to start somewhere 😄.
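
As a rough illustration of what such shared wrappers could buy (the `SharedStream` class and its `ptr` attribute are assumptions for this sketch, not an agreed-upon interface): once two libraries accept the same stream object, they operate on the same underlying cudaStream_t with no adapter code in between.

```python
import ctypes

# Sketch of the interoperation idea: a common wrapper owns the cudaStream_t and
# exposes the raw handle; anything that accepts the wrapper can use the stream.
_cudart = ctypes.CDLL("libcudart.so")  # platform-dependent library name


class SharedStream:
    """Hypothetical shared wrapper owning a CUDA stream."""

    def __init__(self):
        self.ptr = 0
        handle = ctypes.c_void_p()
        if _cudart.cudaStreamCreate(ctypes.byref(handle)) != 0:
            raise RuntimeError("cudaStreamCreate failed")
        self.ptr = handle.value  # raw cudaStream_t, consumable by any library

    def __del__(self):
        if self.ptr:
            _cudart.cudaStreamDestroy(ctypes.c_void_p(self.ptr))
            self.ptr = 0


def library_a_sync(stream):
    # Stand-in for "library A": relies only on the agreed `ptr` attribute.
    _cudart.cudaStreamSynchronize(ctypes.c_void_p(stream.ptr))


def library_b_sync(stream):
    # Stand-in for "library B": same contract, no shared implementation required.
    _cudart.cudaStreamSynchronize(ctypes.c_void_p(stream.ptr))


stream = SharedStream()
library_a_sync(stream)  # both "libraries" touch the same underlying stream
library_b_sync(stream)
```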


Top Results From Across the Web

Namespace in CUDA - NVIDIA Developer Forums
Hi everyone, I've some trouble with CUDA, I'm trying to launch a __global__ function from a __device__ function called by an...

cv::cuda Namespace Reference - OpenCV
The class discriminates between foreground and background pixels by building and maintaining a model of the background.

Namespaces as template parameters in CUDA - Stack Overflow
In C++, it is impossible to pass a namespace as some sort of parameter (by means of templates...

thrust::system::cuda Namespace Reference
thrust::system::cuda is the namespace containing functionality for allocating, manipulating, and deallocating memory available to Thrust's CUDA backend system.

Warnings about type in anonymous namespace when using ...
The CUDA compiler will replace an extended lambda expression with an instance of a placeholder type defined in namespace scope, before invoking...
