
Missing torch.load

See original GitHub issue

Similar to the discussion of the missing torch.save:

PyTorch's torch.load is implemented in torch\serialization.py,

whose docstring documents the available loading parameters.

Example

        >>> torch.load('tensors.pt')
        # Load all tensors onto the CPU
        >>> torch.load('tensors.pt', map_location=torch.device('cpu'))
        # Load all tensors onto the CPU, using a function
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
        # Load all tensors onto GPU 1
        >>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
        # Map tensors from GPU 1 to GPU 0
        >>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
        # Load tensor from io.BytesIO object
        >>> import io
        >>> with open('tensor.pt', 'rb') as f:
        ...     buffer = io.BytesIO(f.read())
        >>> torch.load(buffer)
        # Load a module with 'ascii' encoding for unpickling
        >>> torch.load('module.pt', encoding='ascii')

Currently, LibTorchSharp implements only one of the possible loading options listed above.

Pickling has been discussed as overkill for .NET.

As I am still learning … is there a need to provide more of the loading options that torch.load offers, rather than only Module.load, in TorchSharp?

I am raising this issue because I fail to load a saved state_dict, created through exportsd.py, back into TorchSharp using Module.load.

I did not get any error message; the process simply crashes.

Suggestion: emit error messages when loading fails, to make loading a state_dict more reliable.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 17 (8 by maintainers)

Top GitHub Comments

1 reaction
NiklasGustafsson commented, Oct 8, 2021

@GeorgeS2019 – I suggest adding a print statement (on your machine) to the exportsd.py, something like:

    for entry in sd:
        print(entry)
        stream.write(leb128.u.encode(len(entry)))
        stream.write(bytes(entry, 'utf-8'))
        _write_tensor(sd[entry], stream)

and see what the names of all the state_dict entries are, then compare that to your .NET module that you are loading the weights into.
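For reference, exportsd.py length-prefixes each entry name with an unsigned LEB128 integer (via the `leb128` package's `leb128.u.encode`). A minimal sketch of that encoding in plain Python, useful if you want to inspect or reimplement the export format (the function name here is my own, not part of exportsd.py):

```python
def uleb128(n: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128: 7 bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

# A state_dict key such as 'features.0.weight' would be written as a
# LEB128 length prefix followed by the UTF-8 bytes of the name.
name = "features.0.weight"
record = uleb128(len(name)) + name.encode("utf-8")
```

Reading the file back is the mirror image: decode the length prefix, then read that many bytes as the UTF-8 key.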

1 reaction
NiklasGustafsson commented, Oct 8, 2021

Anything that looks like a <String,Tensor> dictionary and was saved using the format that exportsd.py also uses should be possible to load, but when loading, the keys come from the model instance (either a custom module or Sequential) that the weights are being loaded into. On the saving side, the keys likewise come from the original model.

Thus, the two have to exactly match – that’s the key here. Without seeing the model definition on both sides, it’s hard to help debug it. The best I can do, and I will try to get that into the next release, is to improve the error messages so that they are more informative.
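The matching requirement above can be checked mechanically. A small sketch (pure Python; the key lists below are made-up examples, not from the actual issue) that diffs the keys printed by exportsd.py against the parameter names your TorchSharp module reports:

```python
def diff_state_dict_keys(python_keys, dotnet_keys):
    """Return (missing_in_dotnet, unexpected_in_dotnet) as sorted lists."""
    py, net = set(python_keys), set(dotnet_keys)
    return sorted(py - net), sorted(net - py)

# Hypothetical mismatch: the Python model used 'features.*' prefixes,
# while the .NET Sequential produced bare indexed names.
python_keys = ["features.0.weight", "features.0.bias"]
dotnet_keys = ["0.weight", "0.bias"]
missing, unexpected = diff_state_dict_keys(python_keys, dotnet_keys)
print(missing)      # keys in the saved file that the .NET module lacks
print(unexpected)   # keys the .NET module expects but the file lacks
```

If both lists are empty, the names line up and a remaining failure points at tensor shapes or the file format itself rather than the keys.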


Top Results From Across the Web

Missing keys & unexpected keys in state_dict when loading ...
RuntimeError: Error(s) in loading state_dict for VGG: Missing key(s) in ... import torch from torchvision import models model = models.

Problem with missing and unexpected keys while loading ...
in my case, i had to remove "module." prefix from the state dict to load. model= Model() state_dict = torch.load(model_path) remove_prefix = ' ......

torch.load with Exception · Issue #53708
Steps to reproduce the behavior: Save the data while the saving process might be killed during saving; Load this data with try and...

Saving and Loading Models — PyTorch Tutorials 1.0. ...
In PyTorch, the learnable parameters (i.e. weights and biases) of an torch.nn.Module model is contained in the model's parameters (accessed with model.

How to use the torch.load function in torch
To help you get started, we've selected a few torch.load examples, based on popular ways it is used in public projects.
