CUDA: Allow dynamic allocation of local arrays
Dynamic allocation is supported on devices of compute capability 2.0 and above, so it would be nice if the cuda.local.array shape could be a variable instead of a constant, e.g.:
from numba import cuda
import numpy as np

@cuda.jit
def kernel(nx, ny):
    arr = cuda.local.array((nx, ny), dtype=np.float32)  # shape taken from arguments
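
This is not currently supported: cuda.local.array requires the shape to be a compile-time constant. Until dynamic allocation lands, a common workaround is to allocate a local array with a fixed maximum shape and use only the region a given launch actually needs. A minimal sketch of that pattern follows; MAX_N, oversized_kernel, and the kernel body are illustrative assumptions, not anything from this issue.

from numba import cuda
import numpy as np

MAX_N = 64  # assumed compile-time upper bound on nx and ny

@cuda.jit
def oversized_kernel(out, nx, ny):
    # Shapes must be compile-time constants, so allocate the maximum.
    arr = cuda.local.array((MAX_N, MAX_N), dtype=np.float32)
    # Use only the nx-by-ny region this launch actually needs.
    for i in range(nx):
        for j in range(ny):
            arr[i, j] = i * ny + j
    out[0] = arr[nx - 1, ny - 1]

The tradeoff is that local memory for the full MAX_N x MAX_N array is always reserved, regardless of the runtime values of nx and ny.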
Implementation is in progress; see https://m.youtube.com/watch?v=VdqwDyu1lNw for development up to the current state. I'm planning to finish it off over the next couple of weeks.
@UrielMaD thank you for asking about this on the issue tracker. @gmarkall and most of the Numba team will largely be on holiday until the beginning of January, so I would recommend checking back then. Best wishes!