
Can I see evaluation metrics during training and send them to W&B?

See original GitHub issue

I’ve been having a great time playing with the library, nice work!

I was wondering if I’m doing something wrong in this gist? https://gist.github.com/galtay/10852bb03b354b2562997973bc29c679

I’m hoping to monitor metrics like roc_auc_score during training (and ideally send them to a Weights & Biases project). When I run that code I see the “Running loss” printed out, but not the extra metrics I specified. Is there a way to log these to the screen, a file, or wandb?

I do get the metrics from the model.eval_model(eval_df, **eval_metrics) call, but not during training. It would also be nice if metrics that take predictions instead of probabilities (like sklearn.metrics.f1_score) could be calculated using the user-defined per-class threshold. https://github.com/ThilinaRajapakse/simpletransformers#special-attributes
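The per-class-threshold idea above can be sketched in plain Python. Note this is a standalone illustration: the helper names and the 0.6 threshold are mine, not part of the simpletransformers API.

```python
def threshold_predictions(probs, threshold=0.5):
    """Convert positive-class probabilities into 0/1 predictions
    using a user-chosen threshold instead of the default 0.5."""
    return [1 if p >= threshold else 0 for p in probs]


def f1_score_binary(y_true, y_pred):
    """Binary F1 = 2 * precision * recall / (precision + recall),
    computed from true-positive, false-positive, false-negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


probs = [0.9, 0.4, 0.65, 0.2]
y_true = [1, 0, 1, 1]
preds = threshold_predictions(probs, threshold=0.6)  # -> [1, 0, 1, 0]
print(f1_score_binary(y_true, preds))  # -> 0.8
```

In practice you would threshold the model’s predicted probabilities this way before handing the hard predictions to sklearn.metrics.f1_score, which is what the feature request asks the library to do internally.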

great package, thanks!

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
galtay commented, May 6, 2020

PR is now merged, thanks @ThilinaRajapakse !


