Optuna prunes too aggressively when the objective is jittery (early stopping patience needed)
Motivation
My objective function is jittery, so Optuna is very aggressive and prunes trials whenever the objective increases slightly due to jitter.
This makes Optuna worse than hand tuning, because it prefers optimization paths that are not jittery and decrease monotonically, which are not always the ones that converge to the lowest optimum. In some problem settings it is unusable.
Description
What would be better is an optional parameter that specifies how many checks must occur without improvement before early-stopping pruning happens.
Optuna pruners should have a parameter early_stopping_patience (or checks_patience), which defaults to 1. If the objective hasn't improved over the last early_stopping_patience checks, then (early stopping) pruning occurs.
More specifically, the objective at a particular check, for the purposes of pruning, reporting, etc., should be the min (or max) objective observed over the last early_stopping_patience checks.
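The windowed min described above can be sketched in plain Python. This is a minimal illustration of the idea, not an Optuna API; the function name smoothed_values is hypothetical:

```python
from collections import deque

def smoothed_values(values, patience):
    """For each check, report the min over the last `patience` raw values.

    A single jittery spike then cannot make the reported objective worse,
    so a pruner comparing these smoothed values will not prune on one
    bad step (sketch for a minimization objective).
    """
    window = deque(maxlen=patience)  # sliding window of recent raw values
    out = []
    for v in values:
        window.append(v)
        out.append(min(window))  # use max(window) for maximization
    return out
```

For example, with patience=2 the jittery sequence [1.0, 0.8, 0.9, 0.7] is reported as [1.0, 0.8, 0.8, 0.7]: the spike to 0.9 is masked by the preceding 0.8, so the trend a pruner sees stays non-increasing.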
Alternatives (optional)
I have tried interval_steps=10, but it doesn't help: if the tenth step is the one that has jitter, the trial still gets pruned.
Additional context (optional)
PyTorch early stopping has a patience parameter that is similar to the one I propose.
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 7
- Comments: 41 (27 by maintainers)
Top GitHub Comments
- @turian PatiencePruner is available on the master branch 😃
- @nzw0301 This is looking much more like I would expect! Looks very nice