
[example scripts] inconsistency around eval vs val

  • val == validation set (a data split)
  • eval == evaluation (a run mode)

Those two are orthogonal to each other - one is a split, the other is a model's run mode.

The Trainer args and the example scripts are inconsistent about when it's val and when it's eval in variable names and metric keys.

Examples:

  • eval_dataset but --validation_file
  • eval_* metric keys for the validation dataset - why does prediction then get test_* metric keys?
  • data_args.max_val_samples vs eval_dataset in the same line (see the sketch below)
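
A small self-contained sketch of that mix; the names mirror the patterns in the example scripts, but the code is written here purely for illustration and is not quoted from any particular script:

    # Toy sketch of the naming mix described above (illustrative, not quoted).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DataTrainingArguments:
        validation_file: Optional[str] = None   # the command-line arg says "validation"
        max_val_samples: Optional[int] = None   # the sample cap says "val"

    data_args = DataTrainingArguments(validation_file="dev.json", max_val_samples=100)

    # Stand-in for the dataset dictionary; the split key is "validation"...
    raw_datasets = {"train": list(range(1000)), "validation": list(range(500))}

    # ...yet the variable holding that split says "eval", and both names meet on one line:
    eval_dataset = raw_datasets["validation"]
    if data_args.max_val_samples is not None:
        eval_dataset = eval_dataset[: data_args.max_val_samples]

    print(len(eval_dataset))  # 100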

The 3 parallels:

  • train is easy - it's both the process and the split
  • predict is almost never used in the scripts - it's all test in variable names, metric keys, and command-line args
  • eval vs val vs validation is very inconsistent. When writing tests I'm never sure whether I'm looking up an eval_* or a val_* key. And one could run evaluation on the test dataset.

Perhaps asking a question would help, and then a consistent answer becomes obvious:

Are metrics reporting stats on a split or on a mode?

  A. Split - rename all metric keys to train|val|test
  B. Mode - rename all metric keys to train|eval|predict
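
A toy illustration of what the two options would mean for the reported keys; the values are made up and the helper is hypothetical, but option B matches the eval_* prefix that the Trainer's metric_key_prefix argument already applies during evaluation:

    # Made-up metric values; only the key prefixes matter here.
    raw_metrics = {"loss": 0.42, "accuracy": 0.91}

    def add_prefix(metrics, tag):
        """Prefix every metric key with a split or mode tag, Trainer-style."""
        return {f"{tag}_{name}": value for name, value in metrics.items()}

    # Option A - keys named after the split:
    print(add_prefix(raw_metrics, "val"))    # {'val_loss': 0.42, 'val_accuracy': 0.91}

    # Option B - keys named after the mode (what evaluation already emits as eval_*):
    print(add_prefix(raw_metrics, "eval"))   # {'eval_loss': 0.42, 'eval_accuracy': 0.91}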

Thank you.

@sgugger, @patil-suraj, @patrickvonplaten

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 19 (19 by maintainers)

Top GitHub Comments

2 reactions
sgugger commented, Apr 19, 2021

No, the key in the dataset dictionary is “validation”, so it should be validation_file.
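
A minimal sketch of that point, assuming the usual datasets loading pattern from the example scripts (the file paths are placeholders): the file passed via --validation_file lands under the "validation" key of the resulting dataset dictionary, so the argument name matches that key.

    # Minimal sketch: the --validation_file argument feeds the "validation" split.
    # File paths are placeholders.
    from datasets import load_dataset

    data_files = {"train": "train.csv", "validation": "dev.csv"}
    raw_datasets = load_dataset("csv", data_files=data_files)

    print(raw_datasets["validation"])  # the split named after the argument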

2 reactions
stas00 commented, Apr 14, 2021

Awesome! Thank you, @bhadreshpsavani!

So the changes we need are:

  1. use eval instead of val
  2. use predict instead of test

in command-line args and variable names in the example scripts (only the active ones; please ignore the legacy/research subdirs).

I hope this will be the last rename in a while.
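
For illustration only, a hypothetical helper showing the direction of that rename for metric keys; it is not part of the actual scripts:

    # Illustrative only: rewrite split-flavored prefixes (val_*, test_*) into the
    # mode-flavored ones agreed on above (eval_*, predict_*).
    OLD_TO_NEW = {"val": "eval", "test": "predict"}

    def rename_metric_keys(metrics):
        renamed = {}
        for key, value in metrics.items():
            prefix, sep, rest = key.partition("_")
            renamed[OLD_TO_NEW.get(prefix, prefix) + sep + rest] = value
        return renamed

    print(rename_metric_keys({"val_loss": 0.42, "test_accuracy": 0.91, "train_loss": 0.3}))
    # {'eval_loss': 0.42, 'predict_accuracy': 0.91, 'train_loss': 0.3}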
