
Error using visualize_attention.py. The size of tensor a (3234) must match the size of tensor b (3181) at non-singleton dimension 1

See original GitHub issue

Hi all, I am trying to run visualize_attention.py with the default pretrained weights on my own image, as below:

!python visualize_attention.py --image_path 'test/finalImg_249.png'

I get a size mismatch error. Could you please let me know what changes need to be made here?

Error stack trace:

Please use the --pretrained_weights argument to indicate the path of the checkpoint to evaluate. Since no pretrained weights have been provided, we load the reference pretrained DINO weights.
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3458: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3503: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
Traceback (most recent call last):
  File "visualize_attention.py", line 162, in <module>
    attentions = model.forward_selfattention(img.to(device))
  File "~/dino/vision_transformer.py", line 246, in forward_selfattention
    x = x + pos_embed
RuntimeError: The size of tensor a (3234) must match the size of tensor b (3181) at non-singleton dimension 1

Image details:

import cv2
img = cv2.imread('finalImg_249.png')
print(img.shape)
# output: (427, 488, 3)
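The two numbers in the error are consistent with an off-by-one in the interpolated position embedding for a non-square input. This is a sketch of that arithmetic only (the grid sizes and the one-column-short hypothesis are inferred from the reported shapes, not confirmed from the issue):

```python
# Image from the report: 427 x 488 pixels; DINO's default ViT-S/8 uses patch size 8.
h, w = 427, 488
patch = 8

# visualize_attention.py crops the input to multiples of the patch size,
# so 427 x 488 becomes 424 x 488.
h_crop, w_crop = h - h % patch, w - w % patch
grid_h, grid_w = h_crop // patch, w_crop // patch  # 53 x 61 patch grid

tokens = grid_h * grid_w + 1  # +1 for the [CLS] token
print(tokens)  # 3234 -> "tensor a" in the error

# The pretrained position embedding covers a 28 x 28 grid (224 / 8).
# If bicubic interpolation of that grid comes out one column short on the
# wide side (60 instead of 61), the token count matches "tensor b" exactly:
short = grid_h * (grid_w - 1) + 1
print(short)  # 3181
```

If the mismatch follows this pattern, it suggests a rounding problem when the position embedding is rescaled with a float scale_factor; the later versions of the DINO repository work around exactly this kind of rounding in interpolate_pos_encoding by adding a small offset (0.1) to the target grid size before calling nn.functional.interpolate, so pulling the latest vision_transformer.py is worth trying.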

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 20 (9 by maintainers)

Top GitHub Comments

Top Results From Across the Web

python - RuntimeError: The size of tensor a (133) must match ...
Well, the error is because the nn.MSELoss() and nn.CrossEntropyLoss() expect different input / target combinations. You cannot simply change ...
Read more >
Trainer RuntimeError: The size of tensor a (462) must match ...
Hi, I am finetuning Whisper and run into a trainer issue and ... must match the size of tensor b (448) at non-singleton...
Read more >
RuntimeError: The size of tensor a (224) must match the size ...
Hello, I'm training a vision transformer on the custom dataset for regression purpose. The predicted size resulted from the network torch.
Read more >
RuntimeError: The size of tensor a (115) must match the size ...
... and keep receiving the error 'RuntimeError: The size of tensor a ... match the size of tensor b (64) at non-singleton dimension...
Read more >
pytorch_basics
[ 6, 8, 10]]). Got exception: 'The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension...
Read more >
