Interoperability with PyTorch memory pool


Both CuPy and PyTorch have their own memory pools. For interoperability, it would be better if the memory pool could be shared.

Currently we provide a way to use the PyTorch memory pool as a CuPy memory pool. One idea is to move this code into the CuPy code base (perhaps under cupyx). https://github.com/chainer/chainer-pytorch-migration/blob/master/chainer_pytorch_migration/allocator.py
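
A minimal sketch of what that allocator module does, using CuPy's cupy.cuda.memory.PythonFunctionAllocator and PyTorch's torch.cuda.caching_allocator_alloc / caching_allocator_delete; the function names below are illustrative, not the migration library's exact API:

```python
import cupy
import torch


def _torch_alloc(size, device_id):
    # Serve CuPy's request from PyTorch's caching allocator; the return
    # value is the raw device pointer as an int, which is what
    # PythonFunctionAllocator expects.
    return torch.cuda.caching_allocator_alloc(size, device=device_id)


def _torch_free(mem_ptr, device_id):
    # Hand the block back to PyTorch's pool (cached there, not cudaFree'd).
    torch.cuda.caching_allocator_delete(mem_ptr)


def use_torch_mempool_in_cupy():
    # Illustrative entry point: route all subsequent CuPy allocations
    # through the two callbacks above.
    allocator = cupy.cuda.memory.PythonFunctionAllocator(_torch_alloc, _torch_free)
    cupy.cuda.set_allocator(allocator.malloc)
```

With this in place, CuPy arrays and PyTorch tensors draw from a single pool, so the two libraries no longer hold separate caches that compete for device memory.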

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 12 (12 by maintainers)

Top GitHub Comments

2 reactions
leofang commented on Mar 10, 2020

Would this issue be closed by #3126?

2 reactions
niboshi commented on Jan 9, 2020

I agree that CuPy and PyTorch should not depend on each other, so I think options 1, 2, and 4 should be avoided.

A separate library (option 3) is not necessary either.

My suggestion is to make PyTorch expose its bare allocators and wrap them with cupy.cuda.memory.CFunctionAllocator, just as is done with ChainerX:

https://github.com/chainer/chainer/blob/53521cbfd7827c613396f6afcba04ca2362a612a/chainerx/_cuda.py#L29-L34
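
A sketch of that wiring, modeled on the ChainerX code linked above. Everything on the PyTorch side is hypothetical here: PyTorch would first need to expose its bare allocator as raw C function pointers plus an opaque state pointer.

```python
import cupy


def share_torch_pool_with_cupy(torch_allocator):
    # Hypothetical accessors: raw addresses (intptr_t) of C functions with
    # the calling convention CuPy expects, plus an opaque state pointer
    # that CuPy passes back on every call.
    param = torch_allocator.param_ptr()           # hypothetical
    malloc_ptr = torch_allocator.malloc_fn_ptr()  # hypothetical
    free_ptr = torch_allocator.free_fn_ptr()      # hypothetical

    # The last argument keeps the Python-side owner object alive for as
    # long as CuPy holds the allocator.
    allocator = cupy.cuda.memory.CFunctionAllocator(
        param, malloc_ptr, free_ptr, torch_allocator)
    cupy.cuda.set_allocator(allocator.malloc)
```

Because CuPy only consumes opaque pointers at runtime, neither library gains a build-time or import-time dependency on the other.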


Top Results From Across the Web

  • Machine Learning Frameworks Interoperability, Part 1
    We start with this post discussing the pros and cons of distinct memory layouts, as well as memory pools for asynchronous memory allocation to ...

  • Interoperability — CuPy 11.4.0 documentation
    The pytorch-pfn-extras library provides additional integration features with PyTorch, including memory pool sharing and stream sharing.

  • CUDA (CuPy Interoperability) - pytorch-pfn-extras
    Use PyTorch's memory pool in CuPy. If you want to use PyTorch's memory pool and non-default CUDA streams, the streams must be created and ... (see the usage sketch after this list)

  • CUDA semantics — PyTorch 1.13 documentation
    Memory management: PyTorch uses a caching memory allocator to speed up memory allocations. This allows fast memory deallocation without device synchronizations.

  • PyTorch, TensorFlow & MXNet · Thinc
    This can occur because both PyTorch and CuPy reserve their own internal memory pools, and the two libraries do not communicate with each ...
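
The pytorch-pfn-extras entry above documents the memory-pool sharing this issue asked for. A minimal usage sketch, assuming the API names given in that documentation:

```python
import cupy
import torch
import pytorch_pfn_extras as ppe

# Route CuPy allocations through PyTorch's caching allocator.
ppe.cuda.use_torch_mempool_in_cupy()

# With a shared pool, non-default streams must be created and activated
# through pytorch-pfn-extras so that both libraries see the same stream.
stream = torch.cuda.Stream()
with ppe.cuda.stream(stream):
    t = torch.arange(10, device='cuda')
    a = cupy.asarray(t) ** 2  # result is allocated from PyTorch's pool

# Restore CuPy's own memory pool when done.
ppe.cuda.use_default_mempool_in_cupy()
```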
