What's the most idiomatic way to track the best validation accuracy and then fire events?
Following the common pattern:
from ignite.engine import Events

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    state = evaluator.run(evaluation_loader)  # run() returns the evaluator's State
    print(state.metrics)
an evaluator is instantiated once, but its run method is called for a single epoch every time the trainer completes an epoch, and evaluator.state is reset on every run.
What is the idiomatic way to fire a custom event when the current evaluation accuracy surpasses all previous evaluation accuracies?
I need to keep track of the evaluator's best accuracy across runs, but its state member is reset upon every call to run, i.e. every new epoch. Is the best approach here to store this value within the engine itself, or rather to define a whole new class, like it's done in https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/custom_events.py?
I read #627, but there the evaluator result is stored within the trainer's state, which seems like an ugly hack.
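A minimal sketch of one way to do this, using Ignite's register_events/fire_event API (EventEnum, register_events, and fire_event exist in recent Ignite versions; the event name, the "accuracy" metric key, and the handler names below are assumptions for illustration):

from ignite.engine import Events, EventEnum

class ValidationEvents(EventEnum):
    # hypothetical custom event, fired when a new best accuracy is reached
    BEST_ACCURACY_REACHED = "best_accuracy_reached"

evaluator.register_events(*ValidationEvents)

best_accuracy = 0.0  # lives in client code, so it survives evaluator.state resets

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(trainer):
    global best_accuracy
    state = evaluator.run(evaluation_loader)
    accuracy = state.metrics["accuracy"]  # assumes an Accuracy metric attached as "accuracy"
    if accuracy > best_accuracy:
        best_accuracy = accuracy
        evaluator.fire_event(ValidationEvents.BEST_ACCURACY_REACHED)

@evaluator.on(ValidationEvents.BEST_ACCURACY_REACHED)
def on_new_best(engine):
    print(f"New best validation accuracy: {best_accuracy:.4f}")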
Top GitHub Comments
Thank you for your detailed reply, I appreciate it. That's one solution I came up with. Another is that I wrote a custom class and, after overloading its call operator, attached it to the EPOCH_COMPLETED event of the evaluator; the class keeps state and does all the work, much like ModelCheckpoint does (a sketch of this idea follows).
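A minimal sketch of such a stateful handler, assuming an "accuracy" metric is attached to the evaluator (the class name and metric key are hypothetical):

from ignite.engine import Events

class BestAccuracyTracker:
    # Stateful handler: handlers persist on the engine across runs,
    # so self.best survives even though evaluator.state is reset.
    def __init__(self, metric_name="accuracy"):
        self.metric_name = metric_name
        self.best = float("-inf")

    def __call__(self, engine):
        accuracy = engine.state.metrics[self.metric_name]
        if accuracy > self.best:
            self.best = accuracy
            print(f"New best {self.metric_name}: {self.best:.4f}")

tracker = BestAccuracyTracker()
evaluator.add_event_handler(Events.EPOCH_COMPLETED, tracker)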
By the way, is it advisable to fire custom events from handler functions in client code, or should that be restricted to library code? In other words, should decorated functions be allowed to fire custom events that have been registered beforehand?
Thank you.