Interoperability with PyTorch memory pool
See original GitHub issue.

Both CuPy and PyTorch have their own memory pools. For interoperability, it would be better if the memory pool could be shared.
Currently we provide a way to use the PyTorch memory pool as a CuPy memory pool. One idea is to move this code into the CuPy code base (maybe under cupyx): https://github.com/chainer/chainer-pytorch-migration/blob/master/chainer_pytorch_migration/allocator.py
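For context, here is a minimal sketch of this approach, an illustration in the spirit of the linked allocator.py rather than its exact code. It relies only on public APIs (torch.empty, cupy.cuda.UnownedMemory, cupy.cuda.MemoryPointer, and cupy.cuda.set_allocator):

```python
import cupy
import torch


def _torch_alloc(size):
    # Request `size` bytes from PyTorch's caching allocator by allocating
    # a 1-D uint8 tensor, then hand its raw pointer to CuPy. The tensor is
    # kept alive as the owner of the UnownedMemory, so PyTorch returns the
    # block to its pool only after CuPy drops the last reference.
    device_id = torch.cuda.current_device()
    tensor = torch.empty(size, dtype=torch.uint8, device='cuda')
    mem = cupy.cuda.UnownedMemory(tensor.data_ptr(), size, tensor, device_id)
    return cupy.cuda.MemoryPointer(mem, 0)


# Route all subsequent CuPy allocations through PyTorch's memory pool.
cupy.cuda.set_allocator(_torch_alloc)
```

With this in place, both libraries draw from the single pool managed by PyTorch instead of competing for device memory with two separate pools.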
Issue Analytics
- Created: 4 years ago
- Comments: 12 (12 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Would this issue be closed by #3126?
I agree that CuPy and PyTorch should not depend on each other, so I think options 1, 2, and 4 should be avoided. Another library (option 3) is not necessary either.
My suggestion is to make PyTorch expose its bare allocators and wrap them with cupy.cuda.memory.CFunctionAllocator, just as is done with ChainerX: https://github.com/chainer/chainer/blob/53521cbfd7827c613396f6afcba04ca2362a612a/chainerx/_cuda.py#L29-L34
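To make that concrete, here is a rough sketch of the proposed wrapping. Only the CuPy side is an existing API; PyTorch does not expose raw allocator function pointers today, so get_raw_malloc_ptr and get_raw_free_ptr below are hypothetical placeholders for what PyTorch would need to provide:

```python
import cupy

# Hypothetical PyTorch additions: addresses of C functions with the
# malloc/free signatures that CFunctionAllocator expects. `param` is an
# opaque pointer that CuPy passes back to both callbacks unchanged.
param = 0
malloc_ptr = get_raw_malloc_ptr()  # hypothetical PyTorch API
free_ptr = get_raw_free_ptr()      # hypothetical PyTorch API

# CFunctionAllocator wraps bare C function pointers as a CuPy allocator;
# this is the same mechanism ChainerX uses in the code linked above. The
# last argument is an owner object kept alive alongside the callbacks.
allocator = cupy.cuda.memory.CFunctionAllocator(
    param, malloc_ptr, free_ptr, None)
cupy.cuda.set_allocator(allocator.malloc)
```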