
Inconsistency in defaults?

See original GitHub issue

I trained DINO with the default settings (the vit_small arch) and then ran the video_generation.py script to look at the results. This gave the following error:

Take key teacher in provided checkpoint dict
Traceback (most recent call last):
  File "/usr/src/app/video_generation.py", line 377, in <module>
    vg = VideoGenerator(args)
  File "/usr/src/app/video_generation.py", line 46, in __init__
    self.model = self.__load_model()
  File "/usr/src/app/video_generation.py", line 263, in __load_model
    msg = model.load_state_dict(state_dict, strict=False)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1070, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VisionTransformer:
        size mismatch for pos_embed: copying a param with shape torch.Size([1, 197, 384]) from checkpoint, the shape in current model is torch.Size([1, 785, 384]).
        size mismatch for patch_embed.proj.weight: copying a param with shape torch.Size([384, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([384, 3, 8, 8]).

I think this is caused by training defaulting to a patch_size of 16, while video generation defaults to 8. Adding --patch_size 16 to the video_generation.py command line seems to have fixed it.

(No big deal, of course, but thought I might as well report it.)
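The two shapes in the error message line up with this explanation. Assuming the standard 224x224 input resolution, a ViT's positional embedding covers one token per patch plus one [CLS] token, so the token count depends directly on the patch size:

```python
# Sketch of where the mismatched pos_embed shapes come from.
# A ViT has (image_size // patch_size)**2 patch tokens plus one [CLS] token.
def num_tokens(image_size: int, patch_size: int) -> int:
    return (image_size // patch_size) ** 2 + 1

# Checkpoint trained with the training default, patch_size=16:
print(num_tokens(224, 16))  # 197 -> matches torch.Size([1, 197, 384])

# Model built by video_generation.py with its default, patch_size=8:
print(num_tokens(224, 8))   # 785 -> matches torch.Size([1, 785, 384])
```

The 384 in both shapes is just the vit_small embedding dimension, which is unaffected by patch size; only the token count (197 vs. 785) and the patch_embed kernel (16x16 vs. 8x8) disagree.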

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
mathildecaron31 commented, Aug 12, 2022

Thanks @ketil-malde. However, having left Meta, I can no longer accept pull requests.

1 reaction
ketil-malde commented, Feb 18, 2022

Just merge this patch? https://github.com/ketil-malde/dino/commit/ce2b20bb3e89c528f1ad256ee64465d027667b50

This is pretty trivial, but if I should do something to facilitate it (put it on a separate branch, create a pull request, whatever), just give me instructions, and I’m happy to oblige.
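The linked commit isn't quoted here, but a fix along these lines would amount to changing the script's argparse default so it matches the training default. A hypothetical sketch (the actual flag name --patch_size appears in the report above; the help text is illustrative):

```python
import argparse

parser = argparse.ArgumentParser("video_generation")
# Hypothetical fix: default to 16 to match main_dino.py's training default,
# instead of the 8 that triggers the pos_embed size mismatch at load time.
parser.add_argument("--patch_size", default=16, type=int,
                    help="Patch resolution of the model.")

args = parser.parse_args([])  # no flags passed: the default applies
print(args.patch_size)  # 16
```

With this default, running video_generation.py without any extra flags would load a default-trained checkpoint cleanly, while users of 8x8-patch models could still pass --patch_size 8 explicitly.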


