
finetune with different embedding size

See original GitHub issue

Hi, I want to fine-tune your pretrained model (r50) with embedding size 128. My starting point is your train_softmax.py:

CUDA_VISIBLE_DEVICES='0' python -u train_softmax.py --emb-size=128 --pretrained '../models/model-r50-am-lfw/model,0' --network r50 --loss-type 0 --margin-m 0.5 --data-dir ../datasets/faces_ms1m_112x112 --prefix ../models/model-r50-am-lfw/

This results in:

mxnet.base.MXNetError: [08:51:39] src/operator/nn/../tensor/../elemwise_op_common.h:123: Check failed: assign(&dattr, (*vec)[i]) Incompatible attr in node at 0-th output: expected [512], got [128]

I have tried removing the last fully connected layer (fc1), adding fc7 as done in your code, and then freezing all layers except fc7. The dimensions still don't match.
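One workaround that is often used for this kind of shape mismatch (a sketch only, not the repo's official recipe) is to filter the shape-dependent weights out of the loaded parameter dict before binding, so the new 128-d fc1 and the new fc7 are initialized fresh while the backbone weights are reused. In practice the dict would come from mx.model.load_checkpoint; here a small dummy dict stands in for it, with the layer names fc1/fc7 taken from the issue itself:

```python
# Sketch: keep only the backbone weights from a pretrained checkpoint.
# In real use, arg_params would come from:
#   sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

def drop_head_params(arg_params, head_prefixes=('fc1', 'fc7')):
    """Return a copy of arg_params without layers whose shapes depend on
    the embedding size (fc1) or the number of classes (fc7)."""
    return {name: value for name, value in arg_params.items()
            if not name.startswith(head_prefixes)}

# Dummy stand-in for a loaded checkpoint; values would normally be NDArrays.
pretrained = {
    'conv0_weight': 'backbone tensor',
    'bn1_gamma': 'backbone tensor',
    'fc1_weight': '512-d embedding tensor',  # incompatible with --emb-size=128
    'fc7_weight': 'classifier tensor',
}

backbone_only = drop_head_params(pretrained)
print(sorted(backbone_only))  # ['bn1_gamma', 'conv0_weight']
```

The filtered dict would then be passed to the module with the missing layers allowed, e.g. mod.set_params(arg_params, aux_params, allow_missing=True), so the resized fc1/fc7 are randomly initialized. Whether this slots cleanly into train_softmax.py's own loading path is an assumption worth verifying against the script.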

Any advice on how to do this? Thanks.

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 9

Top GitHub Comments

1 reaction

staceycy commented, Sep 10, 2018

@mpaffolter Yes, I actually have the same question: if there is no validation set, how can we know whether our model is overfitting?

@nttstar Could you please help answer? Thank you. 😃
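When the training list has no designated validation portion, one generic option (not insightface-specific; the file names below are hypothetical) is to hold out a small fraction of the training samples and track loss or accuracy on that held-out slice to monitor overfitting:

```python
import random

def train_val_split(samples, val_fraction=0.05, seed=0):
    """Hold out a fraction of the training samples as a makeshift
    validation set for monitoring overfitting."""
    rng = random.Random(seed)
    shuffled = samples[:]            # copy so the original list is untouched
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]   # (train, val)

# Hypothetical sample list; in this repo the data would be .rec records.
samples = [f'img_{i:04d}.jpg' for i in range(1000)]
train, val = train_val_split(samples)
print(len(train), len(val))  # 950 50
```

Note also that, if I read the training script correctly, it periodically reports accuracy on verification benchmarks (e.g. LFW via --target), which can serve a similar monitoring role even without a conventional validation split.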

0 reactions

wlwu92 commented, Jan 9, 2019

@mpaffolter @staceycy @nttstar same question. The comparison with state-of-the-art seems unfair.

Read more comments on GitHub >

Top Results From Across the Web

Fine-tune a pretrained model - Hugging Face
When you use a pretrained model, you train it on a dataset specific to your task. This is known as fine-tuning, an incredibly...

How to fine-tune your embeddings for better similarity search
This blog post will share our experience with fine-tuning sentence embeddings on a commonly available dataset using similarity learning.

A Simple and Effective Approach for Fine Tuning Pre-trained ...
This work presents a new and simple approach for fine-tuning pretrained word embeddings for text classification tasks. In this approach, the class in...

Finetuning Torchvision Models - PyTorch
The other inputs are as follows: num_classes is the number of classes in the dataset, batch_size is the batch size used for training...

How does Fine-tuning Word Embeddings work?
Yes, if you feed the embedding vector as your input, you can't fine-tune the embeddings (at least easily). However, all the frameworks ...
