Is BERT actually good for semantic similarity? Check the example
See original GitHub issue.
As per the scores below, the second pair is rated as more semantically similar than the first, but in reality it is just the opposite.
>>> model.predict([("he is an indian", "he has indian citizenship")])
array([3.2054904], dtype=float32)
>>> model.predict([("he is an indian", "he is not an indian")])
array([3.590286], dtype=float32)
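The model ranks the contradictory pair ("he is not an indian") above the genuine paraphrase. As a point of comparison, here is a minimal sketch (not the issue author's code) that probes the same two pairs with vanilla BERT embeddings and cosine similarity; it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is named in the issue.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence):
    # Mean-pool the final hidden states over non-padding tokens.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)  # (1, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)    # (1, 768)

def cosine(a, b):
    return torch.nn.functional.cosine_similarity(embed(a), embed(b)).item()

print(cosine("he is an indian", "he has indian citizenship"))
print(cosine("he is an indian", "he is not an indian"))

In practice, raw BERT embeddings tend to score the negated pair at least as high, because surface token overlap dominates the vector and the single token "not" barely moves it. That matches the behaviour the issue reports.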
Issue Analytics
- State:
- Created 4 years ago
- Comments: 5 (2 by maintainers)
Top GitHub Comments
Hi, the implementation may be correct, but what is it worth if the results are wrong? I believe BERT embeddings are not made for this purpose. Why publish code that will not give the results that the name suggests or the README page promises? Please correct me if I am wrong about using BERT for this purpose.
On Wed, 24 Jul, 2019, 3:48 AM Andriy Mulyar, notifications@github.com wrote:
@sabirdvd What would be a better approach than using the sentence representation from BERT for semantic similarity…?
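One commonly suggested direction, sketched below under stated assumptions rather than as this repo's answer: a bi-encoder from the sentence-transformers (Sentence-BERT) library, fine-tuned on NLI/STS data and scored with cosine similarity. The checkpoint name all-MiniLM-L6-v2 is one public example, not something recommended in this thread.

from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")
pairs = [
    ("he is an indian", "he has indian citizenship"),
    ("he is an indian", "he is not an indian"),
]
for a, b in pairs:
    # Encode both sentences and compare with cosine similarity.
    emb = sbert.encode([a, b], convert_to_tensor=True)
    print(f"{util.cos_sim(emb[0], emb[1]).item():.3f}  {a!r} vs {b!r}")

Even sentence-level models often rate negated pairs as fairly similar, since lexical overlap is strong; cosine scores are best read as topical relatedness rather than logical agreement, so for detecting contradictions an NLI classifier is the usual tool.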