Loading the saved model for testing resets some of the layers.
After training the model and saving it, when the model is loaded again for additional evaluation, the accuracy is significantly lower than the one reported right after training. It seems that the model does not load some layers, possibly the softmax layer, or am I loading it wrong?
save_path = "path to the saved model "
reader= "our reader"
model = SentenceTransformer(model_name)
train_loss = losses.SoftmaxLoss(model=model, sentence_embedding_dimension=model.get_sentence_embedding_dimension(), num_labels=train_num_labels)
test_data = SentencesDataset(examples=reader.get_examples('test.tsv'), model=model, shorten=True)
test_dataloader = DataLoader(test_data, shuffle=False, batch_size=batch_size)
evaluator = LabelAccuracyEvaluator(test_dataloader, softmax_model=train_loss)
model.evaluate(evaluator)
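A likely explanation (my reading, not confirmed in this thread): the softmax classification head created by SoftmaxLoss lives on the loss object, not on the SentenceTransformer, so model.save() does not persist it, and the SoftmaxLoss rebuilt in the evaluation script starts from a randomly initialized classifier. A minimal sketch illustrating this, assuming the same older sentence-transformers API as above and a hypothetical num_labels of 3:

import torch
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer(save_path)  # restores the transformer and pooling modules only
dim = model.get_sentence_embedding_dimension()

# Two freshly constructed losses get two different, randomly initialized classifier heads;
# neither contains the weights learned during training.
loss_a = losses.SoftmaxLoss(model=model, sentence_embedding_dimension=dim, num_labels=3)
loss_b = losses.SoftmaxLoss(model=model, sentence_embedding_dimension=dim, num_labels=3)
print(torch.equal(loss_a.classifier.weight, loss_b.classifier.weight))  # expected: False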
Actually,
train_loss.classifier = torch.load(os.path.join(model_save_path, "2_Softmax/pytorch_model.bin")) doesn't work;
rather, this works:
train_loss = torch.load(os.path.join(model_save_path, "2_Softmax/pytorch_model.bin"))
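Which of the two load statements is correct presumably depends on what was written to 2_Softmax/pytorch_model.bin: if the whole SoftmaxLoss module was saved, load it back into train_loss; if only the classifier nn.Linear was saved, assign it to train_loss.classifier. A minimal sketch of both cases, reusing the path from this thread:

import os
import torch

softmax_path = os.path.join(model_save_path, "2_Softmax/pytorch_model.bin")

# Case 1: the entire SoftmaxLoss module was saved via torch.save(train_loss, ...)
train_loss = torch.load(softmax_path)

# Case 2: only the classification head was saved via torch.save(train_loss.classifier, ...)
train_loss.classifier = torch.load(softmax_path)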
Add this to the end of the fit function, for example:
torch.save(SOFTMAX_LAYER, os.path.join(model_save_path, "2_Softmax/pytorch_model.bin"))
and load it before running the evaluator:
train_loss.classifier = torch.load(os.path.join(model_save_path, "2_Softmax/pytorch_model.bin"))
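Putting the two workarounds together, a hedged end-to-end sketch: instead of editing the library's fit function, save the classification head from your own training script right after fit() returns, then restore it in the evaluation script before running LabelAccuracyEvaluator. The "2_Softmax/pytorch_model.bin" path follows the naming used in this thread (it is not something sentence-transformers writes automatically), and model_save_path, reader, train_num_labels and batch_size are placeholders:

import os
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, SentencesDataset, losses
from sentence_transformers.evaluation import LabelAccuracyEvaluator

softmax_path = os.path.join(model_save_path, "2_Softmax/pytorch_model.bin")

# --- training script, right after model.fit(...) and model.save(model_save_path) ---
os.makedirs(os.path.dirname(softmax_path), exist_ok=True)
torch.save(train_loss.classifier, softmax_path)  # persist the trained classification head

# --- evaluation script ---
model = SentenceTransformer(model_save_path)      # restores transformer + pooling only
train_loss = losses.SoftmaxLoss(model=model,
                                sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
                                num_labels=train_num_labels)
train_loss.classifier = torch.load(softmax_path)  # restore the trained head

test_data = SentencesDataset(examples=reader.get_examples('test.tsv'), model=model)
test_dataloader = DataLoader(test_data, shuffle=False, batch_size=batch_size)
evaluator = LabelAccuracyEvaluator(test_dataloader, softmax_model=train_loss)
model.evaluate(evaluator)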