Missing torch.load
Similar to the discussion of the missing torch.save.
PyTorch's torch.load is implemented in torch/serialization.py, whose docstring provides the following usage examples for the loading options:
>>> torch.load('tensors.pt')
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from an io.BytesIO object
>>> import io
>>> with open('tensor.pt', 'rb') as f:
...     buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
# Load a module with 'ascii' encoding for unpickling
>>> torch.load('module.pt', encoding='ascii')
Currently, LibTorchSharp implements only one of the loading options listed above, since full pickle support would be overkill for .NET, as previously discussed.
As I am still learning: is there a need for TorchSharp to expose more of the loading options that torch.load provides, rather than only Module.load?
I am raising this issue because I failed to load a state_dict saved through exportsd.py back into TorchSharp using Module.Load.
I did not get any error message; the process simply crashed.
Suggestion: loading should emit error messages when it fails, to make loading a state_dict more reliable and easier to debug.
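For comparison, PyTorch's own load_state_dict already provides this kind of diagnostics: with strict=True it raises a RuntimeError naming every missing and unexpected key, and with strict=False it returns them for inspection instead. A minimal sketch (with a made-up mismatched state_dict) of the behavior being requested:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)

# Simulate a saved state_dict whose keys don't match the target model.
bad_sd = {"linear.weight": torch.zeros(2, 4), "linear.bias": torch.zeros(2)}

try:
    # strict=True (the default): raises a descriptive RuntimeError
    # instead of failing silently.
    model.load_state_dict(bad_sd)
except RuntimeError as e:
    print(f"load failed:\n{e}")

# strict=False: returns the mismatches for the caller to inspect.
result = model.load_state_dict(bad_sd, strict=False)
print("missing keys:", result.missing_keys)
print("unexpected keys:", result.unexpected_keys)
```

Surfacing the same information from TorchSharp's Module.Load would have made the crash above much easier to diagnose.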
Issue Analytics
- State:
- Created 2 years ago
- Comments:17 (8 by maintainers)
Top GitHub Comments
@GeorgeS2019 – I suggest adding a print statement (on your machine) to exportsd.py, something like:
and then seeing what the names of all the state_dict entries are, so you can compare them to the .NET module you are loading the weights into.
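(The snippet itself was lost from the scraped comment; a plausible reconstruction, using the standard PyTorch state_dict API and a stand-in model, is:)

```python
import torch
from torch import nn

# Stand-in model; in practice this is the model handed to exportsd.py.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Print every state_dict entry so the Python-side keys can be
# compared against the keys of the .NET model.
for name, tensor in model.state_dict().items():
    print(f"{name}: {tuple(tensor.shape)}")
```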
Anything that looks like a <String,Tensor> dictionary and was saved in the same format that exportsd.py uses should be possible to load. When loading, however, the keys come from the model instance (either a custom module or Sequential) that the weights are being loaded into; on the saving side, the keys likewise come from the original model.
Thus, the two have to match exactly – that's the key here. Without seeing the model definition on both sides, it's hard to help debug it. The best I can do, and I will try to get that into the next release, is to improve the error messages so that they are more informative.