None heads and zero prediction scores when evaluating my custom model using GoldParse
Hello there,
After training my custom spaCy NER model, I wanted to calculate the prediction scores, but I get the same zero result for every prediction:
from spacy.scorer import Scorer
from spacy.gold import GoldParse

# model is the trained nlp pipeline (see the comments below)
scorer = Scorer()
doc_gold_text = model.make_doc(labeled_data[1][0])
gold = GoldParse(doc_gold_text, labeled_data[1][1]['entities'])
pred_value = model(labeled_data[1][0])
scorer.score(pred_value, gold)
print(model.evaluate([(pred_value, gold)]).scores)
gives me:
{'uas': 0.0, 'las': 0.0, 'las_per_type': {'': {'p': 0.0, 'r': 0.0, 'f': 0.0}}, 'ents_p': 0.0, 'ents_r': 0.0, 'ents_f': 0.0, 'ents_per_type': {}, 'tags_acc': 0.0, 'token_acc': 100.0, 'textcat_score': 0.0, 'textcats_per_cat': {}}
Looking at the GoldParse heads, I get a list of None:
print(gold.heads)
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, ...
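For reference, here is a minimal sketch of the same evaluation with entities passed by keyword, as @adrianeboyd suggests further down (the comments note that this alone did not change the numbers in this case); it assumes labeled_data is a list of (text, {'entities': [...]}) tuples and model is the trained pipeline. The heads are normally all None simply because no dependency annotations are supplied to GoldParse:

from spacy.scorer import Scorer
from spacy.gold import GoldParse

scorer = Scorer()
for text, annotations in labeled_data:
    # entities has to be passed by keyword; positionally it is read as a
    # different annotation argument of GoldParse
    gold = GoldParse(model.make_doc(text), entities=annotations['entities'])
    scorer.score(model(text), gold)
print(scorer.scores)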
My test input is:
(text1,{'entities': [(0,31, 'Titre'),
(98,106, 'Label'),(122,155, 'Valeur'),
(193,210, 'Label'),(217,224, 'Valeur'),
(291,295, 'Label'),(315,354, 'Valeur'),
(422,428, 'Label'),(446,466, 'Valeur'),
(504,520, 'Label'),(528,542, 'Valeur'),
(608,623, 'Label'),(632,637, 'Valeur'),
(675,687, 'Label'),(699,704, 'Valeur'),
(768,785, 'Label'),(792,807, 'Valeur'),
(845,860, 'Label'),(869,884, 'Valeur'),
(954,975, 'Label'),(978,993, 'Valeur'),
(1059,1074, 'Label'),(1083,1087, 'Valeur'),
(1197,1203, 'Label'),(1221,1224, 'Valeur'),
(3301,3323, 'Label'),(3325,3361, 'Valeur'),
(3428,3445, 'Label'),(3452,3502, 'Valeur'),
(4748,4760, 'Label'),(4772,4819, 'Valeur'),
(8361,8370, 'Label'),(8384,8442, 'Valeur'),
(8514,8521, 'Label'),(8538,8549, 'Valeur'),
(8618,8627, 'Label'),(8641,8692, 'Valeur')]})
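One sanity check worth running on offset annotations like these (a sketch, assuming text1 and model from above): in spaCy 2.x, entity offsets that do not line up with token boundaries come out as '-' in the BILUO tags and are typically dropped from the entity scores.

from spacy.gold import biluo_tags_from_offsets

doc = model.make_doc(text1)
tags = biluo_tags_from_offsets(doc, labeled_data[1][1]['entities'])
# '-' marks tokens whose character offsets do not match token boundaries
print(sum(tag == '-' for tag in tags), 'misaligned tokens')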
-
Does anyone have an idea about that?
-
And I was wondering: is there a parameter during training so that the NER model adjusts its weights according to a validation set?
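The comments below do not address this, but the usual pattern in a custom spaCy 2.x training loop is to keep a held-out dev set, evaluate on it after each pass, and keep the best model, rather than letting the dev set drive the weight updates. A rough sketch, where nlp, train_data and dev_data are assumed names:

import random
from spacy.scorer import Scorer
from spacy.gold import GoldParse

def evaluate_ner(nlp, dev_data):
    # Score the current weights on the held-out dev set (no updates happen here).
    scorer = Scorer()
    for text, annotations in dev_data:
        gold = GoldParse(nlp.make_doc(text), entities=annotations['entities'])
        scorer.score(nlp(text), gold)
    return scorer.scores['ents_f']

optimizer = nlp.begin_training()  # assumes training the NER from scratch
best_f = 0.0
for epoch in range(20):
    random.shuffle(train_data)
    losses = {}
    for text, annotations in train_data:
        nlp.update([text], [annotations], sgd=optimizer, drop=0.2, losses=losses)
    dev_f = evaluate_ner(nlp, dev_data)
    if dev_f > best_f:
        best_f = dev_f
        nlp.to_disk('best_model')  # hypothetical output path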
Your Environment
- Operating System: Darwin-19.3.0-x86_64-i386-64bit
- Python Version Used: 3.7.6
- spaCy Version Used: 2.2.4
Top GitHub Comments
@svlandeg Thank you for your quick answer ! 😉
So the output is the trained nlp model 😃
@adrianeboyd, I added the entities param name (it doesn't change the metric values) and gold.ner gives me this:
['B-Titre', 'I-Titre', 'I-Titre', 'I-Titre', 'L-Titre', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'U-Label', 'O', 'B-Valeur', 'I-Valeur', 'I-Valeur', 'I-Valeur', 'I-Valeur', 'I-Valeur', 'I-Valeur', 'I-Valeur', 'L-Valeur', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Label', 'L-Label', 'O', 'U-Valeur', 'O', 'O', 'O', 'O', 'O', 'O',....
Again, thank you for your time 😃
Devly yours 💻
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.