
Load MViT pretrained model: some error occurred

See original GitHub issue

Running the code below:

from pytorchvideo.models.hub import mvit_base_16x4
model = mvit_base_16x4(pretrained=True)

The output is:

RuntimeError: Error(s) in loading state_dict for MultiscaleVisionTransformers:
	Missing key(s) in state_dict: "blocks.0.attn._attention_pool_k.pool.weight", "blocks.0.attn._attention_pool_k.norm.weight", "blocks.0.attn._attention_pool_k.norm.bias", "blocks.0.attn._attention_pool_v.pool.weight", "blocks.0.attn._attention_pool_v.norm.weight", "blocks.0.attn._attention_pool_v.norm.bias", "blocks.1.attn._attention_pool_q.pool.weight", "blocks.1.attn._attention_pool_q.norm.weight", "blocks.1.attn._attention_pool_q.norm.bias", "blocks.1.attn._attention_pool_k.pool.weight", "blocks.1.attn._attention_pool_k.norm.weight", "blocks.1.attn._attention_pool_k.norm.bias", "blocks.1.attn._attention_pool_v.pool.weight", "blocks.1.attn._attention_pool_v.norm.weight", "blocks.1.attn._attention_pool_v.norm.bias", "blocks.2.attn._attention_pool_k.pool.weight", "blocks.2.attn._attention_pool_k.norm.weight", "blocks.2.attn._attention_pool_k.norm.bias", "blocks.2.attn._attention_pool_v.pool.weight", "blocks.2.attn._attention_pool_v.norm.weight", "blocks.2.attn._attention_pool_v.norm.bias", "blocks.3.attn._attention_pool_q.pool.weight", "blocks.3.attn._attention_pool_q.norm.weight", "blocks.3.attn._attention_pool_q.norm.bias", "blocks.3.attn._attention_pool_k.pool.weight", "blocks.3.attn._attention_pool_k.norm.weight", "blocks.3.attn._attention_pool_k.norm.bias", "blocks.3.attn._attention_pool_v.pool.weight", "blocks.3.attn._attention_pool_v.norm.weight", "blocks.3.attn._attention_pool_v.norm.bias", "blocks.4.attn._attention_pool_k.pool.weight", "blocks.4.attn._attention_pool_k.norm.weight", "blocks.4.attn._attention_pool_k.norm.bias", "blocks.4.attn._attention_pool_v.pool.weight", "blocks.4.attn._attention_pool_v.norm.weight", "blocks.4.attn._attention_pool_v.norm.bias", "blocks.5.attn._attention_pool_k.pool.weight", "blocks.5.attn._attention_pool_k.norm.weight", "blocks.5.attn._attention_pool_k.norm.bias", "blocks.5.attn._attention_pool_v.pool.weight", "blocks.5.attn._attention_pool_v.norm.weight", "blocks.5.attn._attention_pool_v.norm.bias", "blocks.6.attn._attention_pool_k.pool.weight", "blocks.6.attn._attention_pool_k.norm.weight", "blocks.6.attn._attention_pool_k.norm.bias", "blocks.6.attn._attention_pool_v.pool.weight", "blocks.6.attn._attention_pool_v.norm.weight", "blocks.6.attn._attention_pool_v.norm.bias", "blocks.7.attn._attention_pool_k.pool.weight", "blocks.7.attn._attention_pool_k.norm.weight", "blocks.7.attn._attention_pool_k.norm.bias", "blocks.7.attn._attention_pool_v.pool.weight", "blocks.7.attn._attention_pool_v.norm.weight", "blocks.7.attn._attention_pool_v.norm.bias", "blocks.8.attn._attention_pool_k.pool.weight", "blocks.8.attn._attention_pool_k.norm.weight", "blocks.8.attn._attention_pool_k.norm.bias", "blocks.8.attn._attention_pool_v.pool.weight", "blocks.8.attn._attention_pool_v.norm.weight", "blocks.8.attn._attention_pool_v.norm.bias", "blocks.9.attn._attention_pool_k.pool.weight", "blocks.9.attn._attention_pool_k.norm.weight", "blocks.9.attn._attention_pool_k.norm.bias", "blocks.9.attn._attention_pool_v.pool.weight", "blocks.9.attn._attention_pool_v.norm.weight", "blocks.9.attn._attention_pool_v.norm.bias", "blocks.10.attn._attention_pool_k.pool.weight", "blocks.10.attn._attention_pool_k.norm.weight", "blocks.10.attn._attention_pool_k.norm.bias", "blocks.10.attn._attention_pool_v.pool.weight", "blocks.10.attn._attention_pool_v.norm.weight", "blocks.10.attn._attention_pool_v.norm.bias", "blocks.11.attn._attention_pool_k.pool.weight", "blocks.11.attn._attention_pool_k.norm.weight", "blocks.11.attn._attention_pool_k.norm.bias", 
"blocks.11.attn._attention_pool_v.pool.weight", "blocks.11.attn._attention_pool_v.norm.weight", "blocks.11.attn._attention_pool_v.norm.bias", "blocks.12.attn._attention_pool_k.pool.weight", "blocks.12.attn._attention_pool_k.norm.weight", "blocks.12.attn._attention_pool_k.norm.bias", "blocks.12.attn._attention_pool_v.pool.weight", "blocks.12.attn._attention_pool_v.norm.weight", "blocks.12.attn._attention_pool_v.norm.bias", "blocks.13.attn._attention_pool_k.pool.weight", "blocks.13.attn._attention_pool_k.norm.weight", "blocks.13.attn._attention_pool_k.norm.bias", "blocks.13.attn._attention_pool_v.pool.weight", "blocks.13.attn._attention_pool_v.norm.weight", "blocks.13.attn._attention_pool_v.norm.bias", "blocks.14.attn._attention_pool_q.pool.weight", "blocks.14.attn._attention_pool_q.norm.weight", "blocks.14.attn._attention_pool_q.norm.bias", "blocks.14.attn._attention_pool_k.pool.weight", "blocks.14.attn._attention_pool_k.norm.weight", "blocks.14.attn._attention_pool_k.norm.bias", "blocks.14.attn._attention_pool_v.pool.weight", "blocks.14.attn._attention_pool_v.norm.weight", "blocks.14.attn._attention_pool_v.norm.bias", "blocks.15.attn._attention_pool_k.pool.weight", "blocks.15.attn._attention_pool_k.norm.weight", "blocks.15.attn._attention_pool_k.norm.bias", "blocks.15.attn._attention_pool_v.pool.weight", "blocks.15.attn._attention_pool_v.norm.weight", "blocks.15.attn._attention_pool_v.norm.bias".

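A quick way to diagnose this kind of mismatch is to load the checkpoint non-strictly and inspect which keys fail to line up, since load_state_dict(strict=False) reports the differences instead of raising. A minimal sketch, assuming the checkpoint file has already been downloaded locally (the filename below is a placeholder):

import torch
from pytorchvideo.models.hub import mvit_base_16x4

checkpoint_path = "MVIT_B_16x4.pyth"  # placeholder: path to the downloaded checkpoint

# Build the architecture without asking the hub to load weights.
model = mvit_base_16x4(pretrained=False)
checkpoint = torch.load(checkpoint_path, map_location="cpu")

# strict=False returns a NamedTuple of mismatches instead of raising.
result = model.load_state_dict(checkpoint, strict=False)
print("missing keys:", len(result.missing_keys))
print("unexpected keys:", result.unexpected_keys)

An unexpected key such as "model_state" would indicate the file is a full training checkpoint rather than a bare state dict, which is exactly the situation described in the comments below.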
Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
datumbox commented, Jun 21, 2022

@aiot-tech I just noticed that you are not loading the weights properly. The issue is that the given checkpoint contains not only the model weights but also other training-specific info. Try doing:

model.load_state_dict(torch.load(path)["model_state"], strict=False)
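
Putting that together, the full load sequence would look roughly like this (a minimal sketch: the checkpoint filename is a placeholder, and strict=False is kept as in the comment above to tolerate any remaining key differences):

import torch
from pytorchvideo.models.hub import mvit_base_16x4

path = "MVIT_B_16x4.pyth"  # placeholder: the locally downloaded checkpoint

# Build the architecture only, then pull the weights out of the
# "model_state" entry of the full training checkpoint.
model = mvit_base_16x4(pretrained=False)
checkpoint = torch.load(path, map_location="cpu")
model.load_state_dict(checkpoint["model_state"], strict=False)
model.eval()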
0 reactions
aiot-tech commented, Jun 21, 2022

Brilliant!

