Throw error when retrieving an embedding for inferencing one image
See original GitHub issue. Steps to reproduce:
import torch
from torchvision import models
from byol_pytorch import BYOL

resnet = models.resnet50(pretrained=True)  # any backbone with an 'avgpool' layer

model = BYOL(
    resnet,
    image_size = 256,
    hidden_layer = 'avgpool'
)
imgs = torch.randn(1, 3, 256, 256)
projection, embedding = model(imgs, return_embedding = True)
Error found:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in _verify_batch_size(size)
2245 size_prods *= size[i + 2]
2246 if size_prods == 1:
-> 2247 raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
2248
2249
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 4096])
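The check that fails here lives in BatchNorm: in training mode the layer computes statistics over the batch, which is impossible with a single value per channel. A minimal sketch reproducing the check with a bare nn.BatchNorm1d (independent of BYOL) shows that the same single-sample input passes once the layer is switched to eval mode:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4096)

# Training mode: batch statistics cannot be estimated from one sample,
# so a batch of size 1 triggers the ValueError seen in the traceback.
bn.train()
try:
    bn(torch.randn(1, 4096))
    raised = False
except ValueError:
    raised = True

# Eval mode: the layer uses its running statistics instead,
# so a single sample is fine.
bn.eval()
out = bn(torch.randn(1, 4096))

print(raised, out.shape)
```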
Issue Analytics
- State:
- Created 2 years ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
@MimiCheng yup, all you have to do is call

model.eval()

first, and then it should work!

@lucidrains thanks for your explanation. It works with

imgs = torch.randn(2, 3, 256, 256)

Just wondering, is there a way to retrieve the embedding for only one new incoming image after training? I would like to use that embedding for inference. Should I modify the code to skip the projection layer in order to make it work? Thanks!
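The eval-mode fix above is enough for single-image inference; the projection layer does not need to be removed, since the call with return_embedding = True already returns the pre-projection embedding alongside the projection. A minimal sketch of the pattern, using a toy BatchNorm network as a stand-in for the trained model (the toy architecture is illustrative, not the BYOL wrapper itself):

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained encoder -- any network containing
# BatchNorm shows the same train/eval behaviour as the real model.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(8),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

net.eval()                    # BatchNorm falls back to running statistics
with torch.no_grad():         # no gradients needed at inference time
    img = torch.randn(1, 3, 256, 256)   # a single new image
    embedding = net(img)

print(embedding.shape)
```

With the real model the same two lines apply: call model.eval() once after training, then run the forward pass under torch.no_grad() and keep the embedding output.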