F1 score of 0 for character level sequence labelling task
See original GitHub issue

Training data looks like:

and the training results look like:

It is also worth adding that whenever I add a period at the end of each character sequence, training never begins: it stops at Epoch 1/15 with no progress bar. I'll be happy to share the training data with you if that will help reproduce the issue.
Issue Analytics
- State:
- Created 6 years ago
- Reactions:1
- Comments:5 (1 by maintainers)
Top Results From Across the Web

- How to compute f1 score for named-entity recognition in Keras: "In named-entity recognition, f1 score is used to evaluate the performance of trained models, especially, the evaluation is per entity, not token ..."
- Empower Sequence Labeling with Task-Aware Neural ... - arXiv: "For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71±0.10 ..."
- The AI community building the future. - Hugging Face: "It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure ..."
- An In-Depth Tutorial on the F-Score For NER | by Yousef Nami: "However, while its implementation in classic classification tasks is relatively straightforward, it is far more involved in Named Entity Recognition (NER) where ..."
- Sequence Tagging with Tensorflow - Guillaume Genthial blog: "GloVe + character embeddings + bi-LSTM + CRF for Sequence Tagging (Named Entity ... The best model achieves in average an F1 score ..."
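The "per entity, not token" evaluation mentioned above can be sketched in a few lines. This is an illustrative example, not anago's actual API: it assumes gold and predicted entities have already been extracted as (start, end, type) spans, and counts only exact span-and-type matches as true positives.

```python
# Entity-level F1: an exact-match span comparison, not per-token accuracy.
# Illustrative sketch only; names and span format are assumptions.

def entity_f1(gold, pred):
    """Compute precision, recall and F1 over exact entity-span matches."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                      # exact span + type matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 2, "PER"), (4, 6, "LOC")]          # second span is off by one token
print(entity_f1(gold, pred))                   # (0.5, 0.5, 0.5)
```

Note that a predicted span overlapping a gold entity but not matching it exactly counts as both a false positive and a false negative, which is why token-level accuracy can look high while entity-level F1 is 0.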
I think you cannot use just any tag label scheme and expect the evaluation function to work correctly without editing anago.metrics.get_entities; it expects BILO.

Don't use BILO. Use IOB2, because anago ver 0.0.5 assumes that the training data is labeled with IOB2.

I will fix the problem in ver 1.0.0.
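To see why the scheme matters, here is a minimal sketch of IOB2 chunk extraction along the lines of what anago.metrics.get_entities needs to do. This is an illustrative reimplementation, not anago's actual code: if the tags follow a different scheme than the extractor expects, no entities are recovered and the entity-level F1 comes out as 0.

```python
# Extract (type, start, end) entity chunks from an IOB2 tag sequence.
# Illustrative sketch; function name and output format are assumptions.

def get_entities_iob2(tags):
    """Return (type, start, end) chunks from IOB2 tags; end is exclusive."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):     # sentinel "O" closes last chunk
        # A chunk ends at "O", at a new "B-", or at an "I-" of a different type.
        if tag == "O" or tag.startswith("B-") or \
           (tag.startswith("I-") and tag[2:] != etype):
            if etype is not None:
                entities.append((etype, start, i))
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return entities

tags = ["B-PER", "I-PER", "O", "B-LOC"]
print(get_entities_iob2(tags))                 # [('PER', 0, 2), ('LOC', 3, 4)]
```

Under IOB2 every entity must start with a B- tag; a scheme that opens entities differently (or uses extra tags such as L/U) would be silently dropped by an extractor like this, producing zero true positives and hence F1 = 0.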