Stuck on an issue?

Lightrun Answers was designed to reduce the constant Googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

CUDA: Allow dynamic allocation of local arrays

See original GitHub issue

Dynamic allocation is supported on devices of compute capability 2.0 and above, so it would be nice if the shape of cuda.local.array could be a variable instead of a constant, e.g.:

@cuda.jit
def kernel(nx, ny):
    # dt is some element dtype, e.g. numba.float32
    arr = cuda.local.array((nx, ny), dtype=dt)

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 8
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
gmarkall commented, Aug 17, 2021

Implementation in progress, see https://m.youtube.com/watch?v=VdqwDyu1lNw for development up to the current state. Planning to finish off over the next couple of weeks.

1 reaction
esc commented, Dec 18, 2020

@UrielMaD thank you for asking about this on the issue tracker. @gmarkall and most of the Numba team will largely be on holiday until the beginning of January so I would recommend checking back then. Best wishes!

Read more comments on GitHub >

Top Results From Across the Web

  • How to dynamically allocate arrays inside a kernel?
    Allocating memory dynamically in the kernel can be tempting because it allows GPU code to look more like CPU code.
  • Dynamic Shared Memory allocation of more than one array
    Allocate shared memory with the combined size of the two arrays. Pass the size of the first array to the kernel as a...
  • Dynamically adjust size of cuda.local.array without ...
    In general using a variable for the local array size doesn't work. (I have some WIP towards allowing true dynamic local array allocation,...
  • Use Dynamically Allocated C++ Arrays in Generated Function ...
    By default, the generated CUDA code uses the C style emxArray data structure to implement dynamically allocated arrays. Instead, you can choose to...
  • 3.3. Memory management — Numba 0.41.0 documentation
    Local memory is an area of memory private to each thread. Using local memory helps allocate some scratchpad area when scalar local variables...
