
Recommended way to develop a cupy wrapper for an original cuda kernel

See original GitHub issue

I’m a neural-network developer who wants to port a PyTorch module (https://github.com/jonas-koehler/s2cnn) to Chainer.

The original module uses a handwritten CUDA kernel, wrapped with pynvrtc and cupy.cuda.function.Module.get_function().

While researching the port, I found a PR comment that advises against using pynvrtc with CuPy: https://github.com/cupy/cupy/pull/33#issuecomment-301306224

My question is: which is the better way to create a custom kernel for use with Chainer, pynvrtc or cupy.cuda.compiler?

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
kmaehashi commented, May 22, 2018

Currently these low-level APIs are undocumented, as they are intended for private use inside CuPy. However, the API is simple: you only need to pass it the CUDA source code. You can find some examples in: https://github.com/cupy/cupy/blob/v5.0.0a1/cupy/core/core.pyx#L4339-L4356 If you have multiple GPUs with different compute capabilities, make sure to run the compilation for each device.

0 reactions
fiarabbit commented, Jul 25, 2018

Thank you very much! That was exactly what I was hoping for!


