
Pinned memory allocation returns odd size

See original GitHub issue

It looks like CuPy allocates more bytes than expected when calling cupy.cuda.alloc_pinned_memory. Any ideas why that might be?

In [1]: import numpy
   ...: import cupy

In [2]: a = cupy.arange(1_000_000)

In [3]: b = numpy.asarray(cupy.cuda.alloc_pinned_memory(a.nbytes)).view(a.dtype)

In [4]: b.nbytes
Out[4]: 8388608

In [5]: a.nbytes
Out[5]: 8000000
  • Conditions (you can just paste the output of python -c 'import cupy; cupy.show_config()')
    • CuPy version: 7.6.0
    • OS/Platform: Ubuntu 18.04.4 LTS
    • CUDA version: 10.2
    • cuDNN/NCCL version (if applicable): 7.6.5/2.5.7.1
CuPy Version          : 7.6.0
CUDA Root             : /datasets/jkirkham/miniconda/envs/rapids15dev
CUDA Build Version    : 10020
CUDA Driver Version   : 10020
CUDA Runtime Version  : 10020
cuBLAS Version        : 10202
cuFFT Version         : 10102
cuRAND Version        : 10102
cuSOLVER Version      : (10, 3, 0)
cuSPARSE Version      : 10301
NVRTC Version         : (10, 2)
cuDNN Build Version   : 7605
cuDNN Version         : 7605
NCCL Build Version    : 2406
NCCL Runtime Version  : 2507
CUB Version           : None
cuTENSOR Version      : None
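For what it's worth, the observed size is exactly the next power of two above the request: 8388608 == 2**23. A minimal sketch of that arithmetic, assuming the pinned memory pool rounds allocation sizes up to a power of two for reuse (an assumption inferred from the numbers above, not from documented pool policy):

```python
requested = 8_000_000   # a.nbytes: 1_000_000 int64 elements
observed = 8_388_608    # b.nbytes reported in the session above

# Round the request up to the next power of two
rounded = 1 << (requested - 1).bit_length()
print(rounded, rounded == observed)   # 8388608 True
```

Under this reading, the extra 388608 bytes are pool slack, not data, so viewing the full buffer with numpy.asarray over-reports the usable size.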

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 10 (10 by maintainers)

Top GitHub Comments

2 reactions
leofang commented, Mar 11, 2021

if it’s better to provide this (equivalent to Leo’s snippet) as an API under cupyx

I sent #4870 (still WIP) to address this need.

2 reactions
leofang commented, Jul 27, 2020

@kmaehashi It is very useful to back NumPy arrays with pinned memory. I use this very often when I know in advance that frequent device-host transfers will follow.
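A minimal sketch of that pattern, using a plain bytearray as a stand-in for the pinned buffer (with CuPy the buffer would come from cupy.cuda.alloc_pinned_memory); passing count= to numpy.frombuffer trims the view back to the requested element count even when the allocator hands back a larger, rounded-up buffer:

```python
import numpy

def host_array_sketch(n, dtype=numpy.int64):
    """Back a NumPy array by a (possibly oversized) raw buffer.

    `bytearray` stands in for the pinned buffer from
    cupy.cuda.alloc_pinned_memory; the power-of-two rounding below is
    an assumption for illustration, mirroring the sizes in the issue.
    """
    nbytes = n * numpy.dtype(dtype).itemsize
    rounded = 1 << (nbytes - 1).bit_length()  # oversized allocation
    buf = bytearray(rounded)
    # count= limits the view to the requested elements, ignoring the tail
    return numpy.frombuffer(buf, dtype=dtype, count=n)

b = host_array_sketch(1_000_000)
print(b.nbytes)   # 8000000, not the 8388608 the buffer actually holds
```

The same count= trick applies to a real PinnedMemoryPointer, which is why the b.nbytes discrepancy in the original report is cosmetic rather than a lost-memory bug.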


Top Results From Across the Web

Max amount of host pinned memory available for allocation
I split one large 17 GB pinned host allocation into two 8.5 GB allocations and the total allocation time was cut almost by...
About pinned memory in CUDA, is there an upper limit on it?
There is no maximum pinned memory limit in CUDA. It is determined by the amount of main memory your machine has, the memory...
Memory Management — CuPy 11.4.0 documentation
The memory allocator function should take 1 argument (the requested size in bytes) and return cupy.cuda.MemoryPointer / cupy.cuda.PinnedMemoryPointer . CuPy ...
Comparing unified, pinned, and host/device memory ...
Pinned or zero-copy memory is an older technology introduced in CUDA 2 which also allows GPU access without memory copies. Memory is allocated...
Memory management - Numba
An allocation failed due to out-of-memory error. Allocation is retried after flushing all deallocations. · The deallocation queue has reached its maximum size, ...
