
Poor evaluation results on a dataset

See original GitHub issue

Hello @nreimers,

Thank you for the amazingly simple-to-use code!

I’m trying to fine-tune the ‘bert-base-nli-mean-tokens’ model to match user searches to job titles.

My training dataset consists of 934,791 sentence pairs with a score for each pair, so I use the example for continuing fine-tuning on the STS Benchmark (https://github.com/UKPLab/sentence-transformers/blob/master/examples/training_stsbenchmark_continue_training.py).
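
Roughly, the setup follows that example. Below is a minimal sketch using the current sentence-transformers training API, which may differ slightly from the version used at the time; the sample data, warmup steps, and output path are placeholders for the real query/title/score dataset.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Continue training from the pre-trained NLI model on (search, job title, score) pairs
model = SentenceTransformer('bert-base-nli-mean-tokens')

# Placeholder samples: InputExample(texts=[search_phrase, job_title], label=score),
# where label is a float similarity score in [0, 1]
train_samples = [
    InputExample(texts=['python developer', 'Senior Python Engineer'], label=0.9),
    InputExample(texts=['python developer', 'Office Manager'], label=0.1),
]
dev_samples = train_samples  # in practice, a held-out split

train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model=model)
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          evaluator=evaluator,
          epochs=4,
          warmup_steps=1000,  # illustrative value
          output_path='output/search-to-title-model')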

I train using the parameters from the example (4 epochs with batch size 16). The evaluation results I get after training are the following:

2020-01-20 15:20:03 - Cosine-Similarity :	Pearson: 0.0460	Spearman: 0.1820
2020-01-20 15:20:03 - Manhattan-Distance:	Pearson: -0.0294	Spearman: 0.0167
2020-01-20 15:20:03 - Euclidean-Distance:	Pearson: -0.0295	Spearman: 0.0169
2020-01-20 15:20:03 - Dot-Product-Similarity:	Pearson: 0.0468	Spearman: 0.1853
0.18530780992075702

I believe this means that the model has not learned useful embeddings.

Here is what my dataset looks like for one search phrase: [screenshot of sample query/title/score rows]

The distribution of the score column: [screenshot of the score histogram]. So I would consider this a balanced dataset.

What would you recommend as the next steps to improve the results?

  1. Continue the training until the similarity metric reaches ~0.85, as in the STS example?
  2. Modify the model by adding a layer for search_input encoding (as you recommend here: https://github.com/UKPLab/sentence-transformers/issues/96#issuecomment-574051231)?

Any other advice would be helpful.

Thank you!

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

2 reactions
nreimers commented, Jan 21, 2020

Hi @anatoly-khomenko, I’m afraid that creating an asymmetric structure is not straightforward, as the architecture was designed more for symmetric network structures.

What you can do is create a new layer derived from the models.Dense module (let’s call it AsymmetricDense). Your architecture will look like this: Input -> BERT -> mean pooling -> AsymmetricDense

In AsymmetricDense, in the forward method, you have a special routine depending on a flag of your input:

if features['input_type'] == 'document':
    # Apply the extra dense projection only to documents; queries pass through unchanged
    features.update({'sentence_embedding': self.activation_function(self.linear(features['sentence_embedding']))})
return features

Then you need a special reader. For your queries, you set feature['input_type'] to 'query'; for your documents (your titles), you set feature['input_type'] to 'document'.

The dense layer will then only be applied to input texts with input_type == 'document'.
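
A minimal sketch of what this could look like, assuming the models.Dense interface shown above (self.linear and self.activation_function). The 'input_type' flag is something a custom reader/batching step would have to inject, and the module names below (e.g. models.Transformer, which replaced models.BERT in later versions) are illustrative:

from sentence_transformers import SentenceTransformer, models


class AsymmetricDense(models.Dense):
    def forward(self, features):
        # 'input_type' ('query' or 'document') has to be injected into the feature
        # dict by a custom reader / batching step; queries skip the projection.
        if features.get('input_type') == 'document':
            features.update({
                'sentence_embedding': self.activation_function(
                    self.linear(features['sentence_embedding'])
                )
            })
        return features


# Input -> BERT -> mean pooling -> AsymmetricDense
word_embedding_model = models.Transformer('bert-base-uncased')  # older versions: models.BERT(...)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
asym_dense = AsymmetricDense(
    in_features=pooling_model.get_sentence_embedding_dimension(),
    out_features=pooling_model.get_sentence_embedding_dimension(),
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, asym_dense])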

2 reactions
nreimers commented, Jan 20, 2020

Hi @anatoly-khomenko, some notes:

  1. The STS dataset has scores between 0 and 5. Hence, the STS reader normalizes the scores by dividing them by 5 so that you get scores between 0 and 1. If you haven’t disabled this (you can pass False as a parameter), your scores would be normalized to the range 0 - 0.1 (I think 0.5 is your highest score?). See the reader sketch after this list.

  2. You have an asymmetric use case: it makes a difference which text is the query and which text is the response, i.e., swapping the two would make a difference in your case. The models here are optimized for the symmetric use case, i.e., sim(A, B) = sim(B, A).
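
Regarding the first note, a hedged sketch of how the reader from the STS example could be configured so that the scores are not squashed; the dataset folder and column indices are placeholders, and the keyword arguments assume the STSDataReader signature of the sentence-transformers version used at the time:

from sentence_transformers.readers import STSDataReader

# Option 1: the scores in the file are already in [0, 1], so skip normalization entirely
reader = STSDataReader('path/to/my_dataset', s1_col_idx=0, s2_col_idx=1, score_col_idx=2,
                       normalize_scores=False)

# Option 2: let the reader rescale, but using the data's real range instead of the STS default 0-5
reader = STSDataReader('path/to/my_dataset', s1_col_idx=0, s2_col_idx=1, score_col_idx=2,
                       normalize_scores=True, min_score=0, max_score=1)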

For your task, using an asymmetric structure could be helpful. You add one (or more) dense layers to one part of the network. So, for example:

A -> BERT -> Mean-Pooling -> Output
B -> BERT -> Mean-Pooling -> Dense -> Output

Even if A and B are identical, B would get a different sentence embedding because one is the query and the other is the document.

  3. Your search inputs appear rather short. Contextualized word embeddings (like ELMo and BERT) show some issues if you have only single terms or when you match against single terms. The issue is the following:

Document: My cat is black
Search query: cat

Here, cat in the document and cat in the search query would get different vector representations, making it more challenging to match them. Non-contextualized word embeddings like GloVe are easier to use in this case, as ‘cat’ is always mapped to the same point in vector space.
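
As a hedged illustration of this point, an averaged-GloVe model can be built with the same library; the embeddings file name below is a placeholder and has to be downloaded separately:

from sentence_transformers import SentenceTransformer, models

# Static GloVe vectors: 'cat' maps to the same vector whether it appears alone
# in a query or inside a longer document.
word_emb = models.WordEmbeddings.from_text_file('glove.6B.300d.txt.gz')
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
glove_model = SentenceTransformer(modules=[word_emb, pooling])

query_vec, doc_vec = glove_model.encode(['cat', 'My cat is black'])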

  4. Your score distribution looks quite skewed. Mathematically, cosine-similarity creates a vector space where the similarities are more or less equally distributed. This is especially true if you have a symmetric network structure. With an asymmetric structure, you can counter this issue a little bit. But in general, I could imagine that modeling a skewed score distribution with cosine-similarity is quite hard.

