
torch.einsum() compatibility

See original GitHub issue

Hi, I am testing your sample code in attention_augmented_conv.py with:

```python
tmp = torch.randn((16, 3, 32, 32))
a = AugmentedConv(3, 20, kernel_size=3, dk=40, dv=4, Nh=2, relative=True)
print(a(tmp).shape)
```

But it raises:

```
Traceback (most recent call last):
  File "attention_augmented_conv.py", line 131, in <module>
    print(a(tmp).shape)
  File "/Users/scouly/anaconda3/envs/Pytorch_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "attention_augmented_conv.py", line 44, in forward
    h_rel_logits, w_rel_logits = self.relative_logits(q)
  File "attention_augmented_conv.py", line 90, in relative_logits
    rel_logits_w = self.relative_logits_1d(q, key_rel_w, H, W, Nh, "w")
  File "attention_augmented_conv.py", line 99, in relative_logits_1d
    rel_logits = torch.einsum('bhxyd,md->bhxym', q, rel_k)
TypeError: einsum() takes 2 positional arguments but 3 were given
```

I suspect this is caused by a version-compatibility issue in PyTorch. BTW, I am currently using PyTorch 0.4.1 on macOS.
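The failing call matches the older einsum signature: in PyTorch 0.4.x, torch.einsum takes exactly two arguments, the equation string and a single list/tuple of operand tensors, while later releases also accept the operands as separate positional arguments. A minimal compatibility wrapper, assuming only that signature difference (`einsum_compat` is my own name, not part of the repository):

```python
import torch

def einsum_compat(equation, *operands):
    """Call torch.einsum on both old (0.4.x) and newer PyTorch.

    Newer releases accept operands as varargs; 0.4.x expects a single
    list/tuple of tensors as the second argument.
    """
    try:
        return torch.einsum(equation, *operands)
    except TypeError:
        # Older signature: einsum(equation, operands_sequence)
        return torch.einsum(equation, list(operands))

# The failing call from relative_logits_1d, rewritten:
# rel_logits = einsum_compat('bhxyd,md->bhxym', q, rel_k)
```

Alternatively, on 0.4.1 only, wrapping the operands in a list at the call site (`torch.einsum('bhxyd,md->bhxym', [q, rel_k])`) should be enough.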

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
SCoulY commented, Apr 29, 2019

Thanks!

0 reactions
QingdaChen commented, Apr 28, 2019

How can I change the padding of the convolution layer, for example to 0?


Top Results From Across the Web

torch.einsum — PyTorch 1.13 documentation
torch.einsum ... Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation...
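To make the Einstein-summation notation the documentation describes concrete, here is a short sketch using NumPy's equivalent np.einsum (the semantics match torch.einsum for these cases):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# 'ij,jk->ik' multiplies over the shared index j: ordinary matmul.
c = np.einsum('ij,jk->ik', a, b)
assert np.array_equal(c, a @ b)

# 'ii->' sums over the repeated index i: the trace.
m = np.arange(9).reshape(3, 3)
assert np.einsum('ii->', m) == np.trace(m)  # 0 + 4 + 8 = 12
```

Indices repeated across operands are contracted (summed over); indices listed after `->` are kept as output axes.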
torch.einsum equation works in NumPy but not in Pytorch
Issue description I tried doing batchwise dot product across channels or rather pairwise similarity between all pairs of features for two ...
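The batchwise pairwise-similarity pattern that issue describes can be sketched in NumPy; the index names and tensor layout here are my own assumption, not taken from that issue:

```python
import numpy as np

x = np.random.rand(4, 6, 8)  # (batch, features, channels)

# Dot product between every pair of feature vectors within each batch item:
# the channel index c is contracted, b is kept, i/j index the feature pairs.
sim = np.einsum('bic,bjc->bij', x, x)

assert sim.shape == (4, 6, 6)
assert np.allclose(sim[0], x[0] @ x[0].T)
```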
How exactly does torch / np einsum work internally
Does einsum perform all combinations (like the second code), and picks out the relevant values? Sample Code to test: import time import torch ......
tf.einsum | TensorFlow v2.11.0
The ellipsis is a placeholder for "whatever other indices fit here". Einsum will broadcast over axes covered by the ellipsis.
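The ellipsis behavior described above works the same way in np.einsum; a small sketch of broadcasting over unnamed leading axes:

```python
import numpy as np

batch = np.random.rand(5, 2, 3)  # leading batch axis covered by '...'
mat = np.random.rand(3, 4)

# '...' stands in for whatever leading axes `batch` has; only ij,jk->ik
# is contracted, and the batch axis is broadcast through unchanged.
out = np.einsum('...ij,jk->...ik', batch, mat)

assert out.shape == (5, 2, 4)
assert np.allclose(out, batch @ mat)
```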
Pytorch tensor operations - Adrian G
Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim....
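The namedtuple return described in that post can be seen directly; a small sketch of torch.max with a dim argument:

```python
import torch

x = torch.tensor([[1., 5., 2.],
                  [7., 0., 3.]])

# Reducing over dim=1 returns a namedtuple (values, indices):
# the per-row maximum and the column index where it occurs.
values, indices = torch.max(x, dim=1)

assert values.tolist() == [5.0, 7.0]
assert indices.tolist() == [1, 0]
```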
