KeyError: 'ner_crf' in evaluate.py during cross-validation
Hello,
Operating system (Windows, OSX, …): Windows 10
Content of model configuration file:
language: "en"
pipeline:
- name: "tokenizer_whitespace"
- name: "ner_crf"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "_"
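As a quick sanity check that ner_crf is actually declared in the pipeline, the component names can be pulled out of the config text with a regex (a stdlib-only sketch for illustration; a real config should go through a YAML parser):

```python
import re

# The pipeline section above, reproduced verbatim for the check.
config_text = '''
language: "en"
pipeline:
- name: "tokenizer_whitespace"
- name: "ner_crf"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "_"
'''

# Pull out every component name declared in the pipeline.
names = re.findall(r'-\s*name:\s*"([^"]+)"', config_text)
print(names)
# ['tokenizer_whitespace', 'ner_crf', 'intent_featurizer_count_vectors',
#  'intent_classifier_tensorflow_embedding']
```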
Issue: I am trying to evaluate the model by running the command:
python -m rasa_nlu.evaluate --data data/nlu_data.json --config nlu_config.yml --mode crossvalidation --folds 5
However, I got this error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
  "__main__", mod_spec)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
  exec(code, run_globals)
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 857, in <module>
  main()
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 834, in main
  data, int(cmdline_args.folds), nlu_config)
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 720, in run_cv_evaluation
  interpreter, train)
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 684, in combine_entity_result
  current_result = compute_entity_metrics(interpreter, data)
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 777, in compute_entity_metrics
  merged_predictions = merge_labels(aligned_predictions, extractor)
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 307, in merge_labels
  for ap in aligned_predictions]
File "C:\ProgramData\Anaconda3\lib\site-packages\rasa_nlu\evaluate.py", line 307, in <listcomp>
  for ap in aligned_predictions]
KeyError: 'ner_crf'
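For context, the failing list comprehension collects the per-message label lists for one extractor by indexing each aligned prediction with the extractor's name. The sketch below (illustrative data structures, not the actual rasa_nlu internals) reproduces the failure mode: if any fold's alignment dict has no entry for 'ner_crf', the lookup raises exactly this KeyError.

```python
from itertools import chain

def merge_labels(aligned_predictions, extractor):
    # Roughly what the list comprehension at evaluate.py:307 does:
    # look up this extractor's labels in every aligned prediction.
    label_lists = [ap["extractor_labels"][extractor]
                   for ap in aligned_predictions]
    return list(chain.from_iterable(label_lists))

# Normal case: every prediction carries labels for the extractor.
ok = [{"extractor_labels": {"ner_crf": ["O", "B-city"]}}]
print(merge_labels(ok, "ner_crf"))  # ['O', 'B-city']

# Failure case: one prediction has no 'ner_crf' entry at all.
broken = [{"extractor_labels": {}}]
try:
    merge_labels(broken, "ner_crf")
except KeyError as err:
    print(err)  # 'ner_crf'
```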
Issue metadata: created 5 years ago; 13 comments (7 by maintainers).
I'm on Ubuntu 18.04 with Python 3.6, and import sklearn_crfsuite works fine 😃

Resolved in PR#1689.
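The actual fix is in the PR referenced above. Purely as an illustration of the kind of guard that avoids this crash, a tolerant variant could fall back to an empty label list when an extractor produced nothing for a fold (hypothetical helper, not rasa_nlu code):

```python
from itertools import chain

def merge_labels_tolerant(aligned_predictions, extractor):
    # Hypothetical guard: default to [] instead of raising KeyError
    # when a prediction carries no labels for this extractor.
    label_lists = [ap.get("extractor_labels", {}).get(extractor, [])
                   for ap in aligned_predictions]
    return list(chain.from_iterable(label_lists))

mixed = [
    {"extractor_labels": {"ner_crf": ["O", "B-city"]}},
    {"extractor_labels": {}},  # would have crashed the strict lookup
]
print(merge_labels_tolerant(mixed, "ner_crf"))  # ['O', 'B-city']
```

Silently dropping a fold's labels can skew the reported metrics, which is why an upstream fix in the library itself is preferable to a workaround like this.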