Falling back to cloudpickle when serializing a GPU torch tensor
See original GitHub issue

import ray
import torch
ray.init()
a = torch.randn(5)
b = a.cuda()
ray.put(b)
This will print:
2019-08-23 20:39:14,208 WARNING worker.py:436 -- WARNING: Serializing the class <class 'torch.Tensor'> failed, falling back to cloudpickle.
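
The warning means Ray's default serialization of the tensor failed and the object was pickled with cloudpickle instead. A minimal sketch of the workaround discussed in the comments below, using the same repro: copy the tensor back to host memory before ray.put, and move it to a GPU again after ray.get. The torch.cuda.is_available() check is added here for illustration and is not part of the original report.

import ray
import torch

ray.init()

a = torch.randn(5)
b = a.cuda()

# Copy the CUDA tensor back to host memory before putting it in the
# object store; a CPU tensor does not trigger the cloudpickle fallback.
ref = ray.put(b.cpu())

# The consumer decides whether the tensor should live on a GPU.
restored = ray.get(ref)
if torch.cuda.is_available():
    restored = restored.cuda()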
Issue Analytics
- State:
- Created 4 years ago
- Comments: 7 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@dhroth suggested that we just raise an error and tell people to call .cpu() on the tensor. I think this makes sense, as it's not clear what the expected behavior is. E.g., should the deserialized tensor live in CPU or GPU memory? What if the worker that deserializes the tensor doesn't have access to a GPU? Etc.

Hi again! The issue will be closed because there has been no more activity in the 14 days since the last message.
Please feel free to reopen or open a new issue if you’d still like it to be addressed.
Again, you can always ask for help on our discussion forum or Ray’s public slack channel.
Thanks again for opening the issue!
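
As a follow-up to the .cpu() suggestion in the comment above: newer Ray releases expose a custom serializer hook, ray.util.register_serializer, which can perform the CPU copy automatically. That API did not exist when this issue was filed, so the following is only a sketch assuming a recent Ray version, and it hard-codes one answer to the semantic question raised above by always deserializing onto the CPU.

import ray
import torch

ray.init()

def serialize_tensor(t):
    # Ship the tensor as a plain CPU numpy array.
    return t.detach().cpu().numpy()

def deserialize_tensor(arr):
    # Always reconstruct on the CPU; callers move it to a GPU explicitly.
    return torch.from_numpy(arr)

ray.util.register_serializer(
    torch.Tensor, serializer=serialize_tensor, deserializer=deserialize_tensor
)

ref = ray.put(torch.randn(5).cuda())
print(ray.get(ref))  # a CPU tensor, no cloudpickle fallback warning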