
What's the most idiomatic way to track the best validation accuracy and then fire events?

Following the common pattern

from ignite.engine import Events

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    print(evaluator.run(evaluation_loader))  # run() returns the evaluator's State

an evaluator is instantiated once, but its run method is called for a single epoch every time the trainer completes an epoch, and evaluator.state is reset on every such run.

What is the idiomatic way to fire a custom event when the current evaluation accuracy surpasses all previous evaluation accuracies?

I need to keep track of the evaluator's best accuracy across runs, but its state member is reset upon calling run, i.e. at each new evaluation. Is the best way here to store this value within the engine itself, or rather to define a whole new class as is done in https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/custom_events.py?

I read #627, but there the evaluator result is stored within the state of the trainer, which seems like an ugly hack.
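For reference, a minimal sketch of one possible approach, assuming an ignite version whose Engine supports register_events and fire_event with plain string event names, and assuming an Accuracy metric was attached to the evaluator under the name "accuracy". The event name NEW_BEST_ACCURACY and the handlers below are illustrative, not from the issue; trainer, evaluator, and evaluation_loader are as in the snippet above.

from ignite.engine import Events

# A custom event name, registered on the evaluator so handlers can
# subscribe to it just like a built-in event.
NEW_BEST_ACCURACY = "new_best_accuracy"
evaluator.register_events(NEW_BEST_ACCURACY)

best_accuracy = 0.0  # survives across evaluator.run() calls, unlike evaluator.state

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(trainer):
    global best_accuracy
    state = evaluator.run(evaluation_loader)
    accuracy = state.metrics["accuracy"]  # assumes Accuracy was attached as "accuracy"
    if accuracy > best_accuracy:
        best_accuracy = accuracy
        evaluator.fire_event(NEW_BEST_ACCURACY)  # handlers run synchronously

@evaluator.on(NEW_BEST_ACCURACY)
def on_new_best(evaluator):
    print(f"new best accuracy so far: {best_accuracy:.4f}")

The best value lives in client code rather than in either engine's state, so nothing is lost when evaluator.run resets the state.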

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 7

Top GitHub Comments

1 reaction
CDitzel commented, Feb 6, 2020

Thank you for your detailed reply, I appreciate it. That's one solution I came up with. Another is that I wrote a custom class and, after overloading its call operator, attached it to the evaluator's EPOCH_COMPLETED event; the class keeps state and does all the work, much like ModelCheckpoint does.
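A rough sketch of what such a handler class might look like, reusing the NEW_BEST_ACCURACY event from the sketch above. The class name BestAccuracyTracker is made up for illustration, and it assumes the accuracy metric was attached to the evaluator before this handler, so that engine.state.metrics is already populated when it runs.

class BestAccuracyTracker:
    # Keeps the best accuracy across evaluator runs as instance state and
    # fires a custom, previously registered event when it improves, much
    # like ModelCheckpoint keeps its own internal state.

    def __init__(self, metric_name="accuracy"):
        self.metric_name = metric_name
        self.best = float("-inf")

    def __call__(self, engine):
        current = engine.state.metrics[self.metric_name]
        if current > self.best:
            self.best = current
            engine.fire_event(NEW_BEST_ACCURACY)

evaluator.add_event_handler(Events.EPOCH_COMPLETED, BestAccuracyTracker())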

By the way, is it advisable to fire custom events from handler functions in client code, or should that be restricted to library code? In other words, should decorated handler functions be allowed to fire custom events that have been registered beforehand?

0 reactions
CDitzel commented, Feb 14, 2020

Thank you.

