Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

RuntimeError: size mismatch, m1: [32 x 192], m2: [64 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.c:2033

See original GitHub issue

Hello, I am trying to use this with my custom dataset. I am using a DataLoader (see https://github.com/kevinzakka/recurrent-visual-attention/issues/18), but even after I cast my image input to Float32 and get rid of that error, I get a tensor size mismatch while training the network.

Traceback (most recent call last):
  File "main.py", line 49, in <module>
    main(config)
  File "main.py", line 40, in main
    trainer.train()
  File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 168, in train
    train_loss, train_acc = self.train_one_epoch(epoch)
  File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 252, in train_one_epoch
    h_t, l_t, b_t, p = self.model(x, l_t, h_t)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/duygu/recurrent-visual-attention-master/model.py", line 101, in forward
    g_t = self.sensor(x, l_t_prev)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/duygu/recurrent-visual-attention-master/modules.py", line 214, in forward
    phi_out = F.relu(self.fc1(phi))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 992, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [32 x 192], m2: [64 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.c:2033

I cannot figure out what is going wrong. Is it related to the patches or the weights? Any insights would be really helpful. Thanks.
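For anyone trying to reproduce this, below is a minimal, self-contained sketch of the failure (not the repository's actual module; the sizes are assumptions read off the traceback: 64 = 1 patch x 8 x 8 for a grayscale glimpse, 192 = 64 x 3 once the colour channels are flattened in). A Linear layer sized for a single-channel glimpse receives a three-channel one and raises the same RuntimeError:

import torch
import torch.nn as nn

batch_size, num_patches, glimpse_size, channels = 32, 1, 8, 3

# Layer sized for a single-channel glimpse: 1 * 8 * 8 = 64 input features.
fc1 = nn.Linear(num_patches * glimpse_size * glimpse_size, 128)

# A three-channel glimpse flattens to 1 * 8 * 8 * 3 = 192 features per sample.
phi = torch.randn(batch_size, num_patches * glimpse_size * glimpse_size * channels)

try:
    fc1(phi)  # size mismatch: input is [32 x 192], weight expects 64 input features
except RuntimeError as e:
    print(e)

# Sizing the layer from the actual flattened glimpse avoids the error.
fc1_fixed = nn.Linear(phi.shape[1], 128)
print(fc1_fixed(phi).shape)  # torch.Size([32, 128])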

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 17 (1 by maintainers)

Top GitHub Comments

5 reactions
kevinzakka commented, Jul 21, 2018

@dearleiii @duygusar Hey guys, I have some free time in the coming week so I’ll try and investigate this bug.

2 reactions
ifgovh commented, Mar 4, 2019

When you pass the parameters, set --loc_hidden=192 and the problem is solved. The reason is that the code does not support multiple channels.
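For reference, 192 is simply the flattened glimpse size once the three colour channels are counted; a quick sketch of the arithmetic, assuming one 8x8 patch (which matches the 64 in the traceback above):

num_patches, glimpse_size = 1, 8          # assumed defaults
channels_gray, channels_rgb = 1, 3

print(num_patches * glimpse_size * glimpse_size * channels_gray)  # 64  -> works for grayscale MNIST
print(num_patches * glimpse_size * glimpse_size * channels_rgb)   # 192 -> needed for RGB input

So the suggested run would look something like python main.py --loc_hidden=192 (flag name as given in the comment above); it is worth double-checking against the repository's argument parser that this is the right option for your version of the code.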

Read more comments on GitHub >

Top Results From Across the Web

How to resolve runtime error due to size mismatch in PyTorch?
Your error: size mismatch, m1: [76800 x 256], m2: [784 x 128].

[Solved] RuntimeError: size mismatch, m1: [64 x 768], m2
It seems that the number of input features in self.adv_layer or self.aux_layer is defined as 512, while the incoming activation has 768 features …

RuntimeError: size mismatch, m1: [144 x 4], m2: [576 x 64] at ...
The following code gives me this runtime error. I'm expecting the tensor between the conv2d layer and the first linear layer to have …

Solving Error: size mismatch, m1: [30 x 2], m2
Linear(*input_dims, 128)) and flatten is just an operation, not a Pytorch module, which means that it receives tensors as parameters, not blocks …

Size mismatch when running GAT with manual entry - Questions
(e.g : RuntimeError: size mismatch, m1: [30 x 8], m2: [16 x 1] at ... i figure maybe that is caused by improper …
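The results above all point to the same root cause: the in_features of an nn.Linear must equal the size of the flattened activation that reaches it. A generic way to get that number right (a sketch with assumed layer sizes, not code from any of the linked answers) is to push a dummy tensor through the convolutional part once and size the linear layer from the result; note that nn.Flatten is the module form, while torch.flatten is the plain operation referred to in one of the snippets above:

import torch
import torch.nn as nn

conv = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.MaxPool2d(2),
)

# Run a dummy input through the conv stack once to discover the flattened size.
dummy = torch.zeros(1, 3, 32, 32)
n_features = conv(dummy).flatten(1).shape[1]   # 16 * 16 * 16 = 4096 here

head = nn.Sequential(
    nn.Flatten(),                 # module form; torch.flatten(x, 1) is the functional equivalent
    nn.Linear(n_features, 128),   # in_features now matches whatever the conv stack emits
)

print(head(conv(torch.randn(8, 3, 32, 32))).shape)  # torch.Size([8, 128])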
