Enable setting of training iteration in Trainers
Is your feature request related to a problem? Please describe.
Currently, `SupervisedTrainer` supports controlling the number of iterations by adjusting `epoch_length` and `max_epochs`. It would be nice to be able to set the number of iterations to be executed directly.
Describe the solution you’d like
Add an `n_iterations` argument (or similar) that allows overriding the epoch-based definition of the number of training steps to be executed. Note, this should allow the training to resume from the final iteration if `n_iterations` is reached. Related to #4554.
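A minimal sketch of the requested behavior, in plain Python rather than the MONAI API (the function and argument names here are assumptions for illustration): the loop stops after exactly `n_iterations` regardless of epoch boundaries, and a `start_iteration` restored from a checkpoint makes resuming a no-op once the target is reached.

```python
def run_training(data, step_fn, n_iterations, start_iteration=0):
    """Run step_fn on batches cycled from data until n_iterations total
    iterations have been executed. Resuming is supported by passing the
    checkpointed iteration count as start_iteration. Assumes data is
    non-empty (an empty iterable would loop forever)."""
    iteration = start_iteration
    while iteration < n_iterations:
        for batch in data:
            if iteration >= n_iterations:
                break  # stop mid-epoch once the target is hit
            step_fn(iteration, batch)
            iteration += 1
    return iteration  # final iteration count, to be checkpointed

# Example: 10 iterations over a 4-batch "loader" spans 2.5 epochs.
seen = []
final = run_training(range(4), lambda i, batch: seen.append(batch), n_iterations=10)
# Resuming at the final iteration executes no further steps.
resumed = run_training(range(4), lambda i, batch: seen.append(batch),
                       n_iterations=10, start_iteration=final)
```

The key point the issue asks for is that `n_iterations` is the single source of truth, rather than being implied by `epoch_length * max_epochs`.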
Describe alternatives you’ve considered
Live with adjusting `epoch_length` and `max_epochs`, but that seems confusing.
Issue Analytics
- Created a year ago
- Comments: 6 (3 by maintainers)
Top GitHub Comments
`epoch_length` corresponds to the number of iterations needed to iterate once over the data (i.e., one epoch). It defaults to `len(train_data_loader)`.
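The relationship described above can be illustrated in plain Python (a stand-in list replaces a real `DataLoader`):

```python
# One epoch = one full pass over the loader, so epoch_length defaults to
# len(train_data_loader), and the total number of training iterations is
# epoch_length * max_epochs.
train_data_loader = [["batch0"], ["batch1"], ["batch2"]]  # stand-in for a DataLoader

epoch_length = len(train_data_loader)  # default: iterations per epoch
max_epochs = 4
total_iterations = epoch_length * max_epochs
```

This is exactly why controlling a target iteration count via these two knobs feels indirect: the iteration count is a derived quantity.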
@holgerroth we have on master and in the nightly releases a `max_iters` arg for `Engine.run()`, see https://pytorch.org/ignite/master/generated/ignite.engine.engine.Engine.html#ignite.engine.engine.Engine.run. It probably works as you asked for. However, we haven’t yet released that in stable, as there can be issues with how this is saved/loaded in checkpoints etc.
A workaround for the stable release can be
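The comment above breaks off here, so the original snippet is lost. One plausible workaround along those lines (an assumption, not the original author’s code) is to derive `max_epochs` from the desired iteration count and the epoch length:

```python
import math

def epochs_for_iterations(n_iterations, epoch_length):
    """Smallest max_epochs such that epoch_length * max_epochs >= n_iterations.
    Hypothetical helper for illustration; not part of MONAI or Ignite."""
    return math.ceil(n_iterations / epoch_length)

# Target 1000 iterations with a loader of 128 batches per epoch.
max_epochs = epochs_for_iterations(1000, epoch_length=128)   # 8 epochs, 1024 iters

# For an exact iteration count, shrink epoch_length to a divisor of the
# target (in the extreme, epoch_length=1 makes max_epochs the iteration count).
exact_epochs = epochs_for_iterations(1000, epoch_length=1)
```

Note the overshoot in the first case (8 × 128 = 1024 > 1000): with this workaround the run can only stop on an epoch boundary, which is precisely the limitation the requested `n_iterations` argument (or Ignite’s `max_iters`) would remove.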