[RFC] Customizable steps of `LightGBMTuner` and `LightGBMTunerCV`
Motivation
The steps of `LightGBMTuner` are fixed, and users cannot add parameters to tune. I have seen demand from users to customize the steps. For example,

- @Amyantis submitted a feature request in #1550 to change the number of trials at each step, and
- @kdlin wants to tune additional parameters such as `max_depth` and `max_bin`, as discussed in a Gitter conversation.
I created this issue to aggregate such user needs, and I’d like to discuss the feasible and user-friendly implementation of customizable steps.
Description
Currently, I do not have any ideas for API design, but let me share some existing attempts.
Approach 1
I tried to extend `LightGBMTuner` to add a step that tunes `max_depth`. See https://colab.research.google.com/drive/1vBfktSi7qXc8T4X6x1ib3IeqIlqcOAOH?usp=sharing.
It was a bit complicated because I needed to modify two classes, i.e., `LightGBMTuner` and `_OptunaObjective`. I don't think this is a convenient way to customize the LightGBM tuner's steps, and we may need a major refactoring of `LightGBMTuner` and `_OptunaObjective`.
Approach 2
As mentioned in this Gitter comment, @jeffzi created a prototype of `StepwiseTuner`, which accepts a dictionary containing step information. It also enables us to create stepwise tuners for other libraries such as XGBoost and CatBoost.
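The prototype itself is not shown in this issue. As a purely illustrative sketch (these names and the dictionary shape are assumptions, not @jeffzi's actual API), such a step-information dictionary might map each named step to the parameters it tunes and its trial budget:

```python
# Hypothetical step specification: each key is a step name, each value
# lists the parameters tuned in that step (as (kind, low, high) tuples)
# together with the trial budget for the step.
step_config = {
    "feature_fraction": {
        "params": {"feature_fraction": ("float", 0.4, 1.0)},
        "n_trials": 7,
    },
    "num_leaves": {
        "params": {"num_leaves": ("int", 2, 256)},
        "n_trials": 20,
    },
    "regularization": {
        "params": {
            "lambda_l1": ("loguniform", 1e-8, 10.0),
            "lambda_l2": ("loguniform", 1e-8, 10.0),
        },
        "n_trials": 20,
    },
}
```

A plain dictionary like this is library-agnostic, which is what would let the same tuner drive XGBoost or CatBoost as well.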
Issue Analytics

- State:
- Created: 3 years ago
- Comments: 16 (5 by maintainers)
Top GitHub Comments
Some time ago I prototyped a `StepwiseTuner` class that accepts a list of named `Step`s in its init. It's essentially an abstraction of `LightGBMTuner` that needs to be specialized in a dedicated subclass. The shared logic of looping through the steps, along with keeping track of the global time and trial budgets across steps, is implemented in `StepwiseTuner.tune()`. This approach is flexible: the steps and their order can be customized, as can the time and trial budget per step.

Here is a draft of the public interfaces:
Internally, `StepwiseTuner.tune()` creates a study for each step and an objective function based on `BaseStep.suggest()` (similar to the `LightGBMTuner` implementation). Subclasses of `StepwiseTuner`, such as `XGBoostTuner` or `LightGBMTuner`, deal with the details of the integration. Crucially, they must provide to their parent `StepwiseTuner` an objective function with the signature `Callable[[optuna.Trial, Dict[str, Any]], float]`. Basically, this objective abstracts away the details of the external library. `StepwiseTuner.tune()` returns the best params and best values, but those should be class attributes to follow the `LightGBMTuner` interface.

Usage example:
For convenience, default steps can be offered to end-users. For example (XGBoost):
@toshihikoyanase I’ve submitted the PR.