RuntimeError: stack expects each tensor to be equal size, but got [n, 768] at entry 0 and [m, 768] at entry 1
Hi,
I was using the 'token_embeddings' feature of sentence-transformers and ran into this error. I passed a list of sentences to model.encode() to generate token embeddings. Are the sentences not padded before being converted into token embeddings? Can you suggest a workaround?
This is the exact code I am using:
b_emb = model.encode(arr[i : i + batch_size], output_value='token_embeddings', convert_to_tensor=True, is_pretokenized=True, show_progress_bar=False)
Thanks
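For context, the loop around that call can be reconstructed as a minimal sketch (the model name, the contents of arr, and batch_size are assumptions, not from the issue; is_pretokenized is omitted because it belongs to the older sentence-transformers API used at the time):

from sentence_transformers import SentenceTransformer

# Assumed 768-dim model, matching the [n, 768] shapes in the error message.
model = SentenceTransformer('bert-base-nli-mean-tokens')
arr = ['a short sentence',
       'a noticeably longer sentence that produces more tokens than the first']
batch_size = 32

all_token_embeddings = []
for i in range(0, len(arr), batch_size):
    # Each sentence's token embeddings have shape [num_tokens, 768]. When
    # the token counts differ, the per-sentence tensors cannot be combined
    # into one tensor, which is what convert_to_tensor=True asks encode()
    # to do via torch.stack() -- hence the RuntimeError above.
    b_emb = model.encode(arr[i : i + batch_size],
                         output_value='token_embeddings',
                         convert_to_tensor=True,
                         show_progress_bar=False)
    all_token_embeddings.append(b_emb)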
Yes, it is working now. Thank you! @jicksonp
@nandinib1999 You can fix the error by replacing this line:
b_emb = model.encode(arr[i : i + batch_size], output_value='token_embeddings', convert_to_tensor=True, is_pretokenized=True, show_progress_bar=False)
with:
b_emb = model.encode(arr[i : i + batch_size], output_value='token_embeddings', convert_to_tensor=True, is_pretokenized=True, show_progress_bar=False, batch_size=batch_size)
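As far as I can tell, the reason this works is that encode() splits its input into internal batches of batch_size sentences (default 32) and pads each internal batch to its own longest sentence. When the outer slice spans several internal batches, the token-embedding tensors come out with different lengths and the final torch.stack() fails; passing batch_size=batch_size makes each call a single internal batch, so every tensor shares one padded length.

A version-independent alternative is to skip the stacking inside encode() altogether and pad the ragged token embeddings yourself. This is only a sketch, not code from the issue: the model name and sentences are placeholders, and torch.as_tensor() covers the case where encode() hands back NumPy arrays rather than tensors.

import torch
from torch.nn.utils.rnn import pad_sequence
from sentence_transformers import SentenceTransformer

# Assumed 768-dim model, matching the [n, 768] shapes in the error message.
model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = ['a short sentence',
             'a noticeably longer example sentence that yields more tokens']

# With both convert flags off, encode() returns a plain list with one
# [num_tokens, 768] embedding per sentence, so nothing is stacked internally.
token_embs = model.encode(sentences,
                          output_value='token_embeddings',
                          convert_to_tensor=False,
                          convert_to_numpy=False,
                          show_progress_bar=False)

# Pad the ragged list to a common length: [batch, max_tokens, 768].
padded = pad_sequence([torch.as_tensor(e) for e in token_embs],
                      batch_first=True,
                      padding_value=0.0)
print(padded.shape)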