parameter `ignore_keys` of `trainer.predict` not accessible in `Trainer` or `TrainingArguments`
See original GitHub issue

🚀 Feature request
The `predict` and `evaluate` methods of the `Trainer` class provide a useful `ignore_keys` option. Here is a small example:

`trainer.predict(dataset, ignore_keys=["ner_loss", "cls_loss", "ner_logits", "cls_logits"])`
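The same keyword works for `evaluate`; a minimal sketch, reusing the hypothetical key names from the example above:

```python
# Sketch: the key names are placeholders from the predict example above.
metrics = trainer.evaluate(
    eval_dataset=val_dataset,
    ignore_keys=["ner_loss", "cls_loss", "ner_logits", "cls_logits"],
)
```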
This option, however, is not accessible during the normal setup, neither through the `TrainingArguments` class nor the `Trainer` class, so a call to `trainer.train()` leads to errors during the mid-training evaluation.
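For context, a minimal sketch of the setup in question (the model, datasets, and output key names are placeholders, assuming a multi-head model whose forward pass returns extra outputs); there is no field on `TrainingArguments` or argument on `Trainer` through which those extra keys can be passed to the in-training evaluation:

```python
from transformers import Trainer, TrainingArguments

# Placeholders: `model` is assumed to return extra outputs such as
# "ner_loss" or "cls_logits"; `train_dataset`/`val_dataset` are your datasets.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",  # triggers evaluation during training
    eval_steps=500,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    # Neither Trainer nor TrainingArguments accepts an ignore_keys
    # parameter, so the extra outputs reach the evaluation loop.
)
trainer.train()  # mid-training evaluation errors on the unexpected keys
```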
Motivation
I am unable to evaluate the model metrics on the validation set during training to see if it makes sense to continue.
Your contribution
I am happy to make a PR if this is seen as a genuine problem. As always, maybe I am missing something.
Issue Analytics
- Created 2 years ago
- Comments: 6 (5 by maintainers)
We could add an argument for this (like `ignore_keys_for_eval`), yes. Let me know if you want to tackle this!

Sure! Ping me when you open a PR 👍
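If such an argument lands, the call might look roughly like the sketch below; the argument name and its placement on `train()` follow the maintainer's suggestion above and are not a confirmed API in this thread:

```python
# Sketch of the proposed argument; name and placement are not final here.
trainer.train(
    ignore_keys_for_eval=["ner_loss", "cls_loss", "ner_logits", "cls_logits"]
)
```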