
Error when using custom metrics in optuna.integration.lightgbm

See original GitHub issue

Expected behavior

I got an error when using custom metrics in optuna.integration.lightgbm.

Environment

  • Optuna version: 1.5.0
  • Python version: 3.7.7
  • OS: MacOS Catalina 10.15.5
  • Anaconda
  • (Optional) Other libraries and their versions: conda_list.txt

Error messages, stack traces, or logs

 Early stopping, best iteration is:
 [113]	training's custom_metrics: 0.73954	valid_1's custom_metrics: 2.25621

KeyError                                  Traceback (most recent call last)
<ipython-input-45-c87e652c308e> in <module>
      1 best_params = {}
      2 model = lgb_tuner.train(params, train_set, num_boost_round = 2500, early_stopping_rounds = 50,
----> 3                   valid_sets = [train_set, val_set], verbose_eval = 100, feval= custom_metrics)

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/_experimental.py in new_func(*args, **kwargs)
     62                 )
     63 
---> 64                 return func(*args, **kwargs)  # type: ignore
     65 
     66             return new_func

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/__init__.py in train(*args, **kwargs)
     44 
     45     auto_booster = LightGBMTuner(*args, **kwargs)
---> 46     auto_booster.run()
     47     return auto_booster.get_best_booster()

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in run(self)
    483         self.sample_train_set()
    484 
--> 485         self.tune_feature_fraction()
    486         self.tune_num_leaves()
    487         self.tune_bagging()

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in tune_feature_fraction(self, n_trials)
    511             warnings.simplefilter("ignore", category=optuna.exceptions.ExperimentalWarning)
    512             sampler = optuna.samplers.GridSampler({param_name: param_values})
--> 513         self.tune_params([param_name], len(param_values), sampler, "feature_fraction")
    514 
    515     def tune_num_leaves(self, n_trials: int = 20) -> None:

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in tune_params(self, target_param_names, n_trials, sampler, step_name)
    862 
    863         objective = super(LightGBMTuner, self).tune_params(
--> 864             target_param_names, n_trials, sampler, step_name
    865         )
    866 

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in tune_params(self, target_param_names, n_trials, sampler, step_name)
    595                     timeout=_timeout,
    596                     catch=(),
--> 597                     callbacks=self._optuna_callbacks,
    598                 )
    599             except ValueError:

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
    337             if n_jobs == 1:
    338                 self._optimize_sequential(
--> 339                     func, n_trials, timeout, catch, callbacks, gc_after_trial, None
    340                 )
    341             else:

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/study.py in _optimize_sequential(self, func, n_trials, timeout, catch, callbacks, gc_after_trial, time_start)
    680                     break
    681 
--> 682             self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial)
    683 
    684             self._progress_bar.update((datetime.datetime.now() - time_start).total_seconds())

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/study.py in _run_trial_and_callbacks(self, func, catch, callbacks, gc_after_trial)
    711         # type: (...) -> None
    712 
--> 713         trial = self._run_trial(func, catch, gc_after_trial)
    714         if callbacks is not None:
    715             frozen_trial = copy.deepcopy(self._storage.get_trial(trial._trial_id))

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/study.py in _run_trial(self, func, catch, gc_after_trial)
    732 
    733         try:
--> 734             result = func(trial)
    735         except exceptions.TrialPruned as e:
    736             message = "Setting status of trial#{} as {}. {}".format(

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in __call__(self, trial)
    247         booster = lgb.train(self.lgbm_params, self.train_set, **self.lgbm_kwargs)
    248 
--> 249         val_score = self._get_booster_best_score(booster)
    250         elapsed_secs = time.time() - start_time
    251         average_iteration_time = elapsed_secs / booster.current_iteration()

~/opt/anaconda3/envs/dev-python37/lib/python3.7/site-packages/optuna/integration/lightgbm_tuner/optimize.py in _get_booster_best_score(self, booster)
    119             raise NotImplementedError
    120 
--> 121         val_score = booster.best_score[valid_name][metric]
    122         return val_score
    123 

KeyError: 'None'
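
The KeyError comes from the last frame above: the tuner looks up the value of params['metric'] (here the literal string 'None') in booster.best_score, while LightGBM records custom-metric scores under the feval's own name. A self-contained sketch of the mismatch, with hypothetical score values:

# Hypothetical best_score contents after training with only a custom feval.
best_score = {
    'training': {'custom_metrics': 0.73954},
    'valid_1': {'custom_metrics': 2.25621},
}
metric = 'None'  # what the tuner reads from params['metric']
best_score['valid_1'][metric]  # raises KeyError: 'None'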

Steps to reproduce

1. Create a custom metric function custom_metrics.
2. Set the params: params = {'metric': 'None', …}.
3. Call optuna.integration.lightgbm.train(params=params, feval=custom_metrics, …).

Reproducible examples (optional)

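A minimal sketch of the setup described in the steps above, assuming a synthetic regression dataset and a stand-in MAE custom metric (the reporter's actual data and metric are not shown):

import numpy as np
import optuna.integration.lightgbm as lgb_tuner
from sklearn.model_selection import train_test_split

def custom_metrics(preds, eval_data):
    # LightGBM feval signature: return (name, value, is_higher_better).
    labels = eval_data.get_label()
    return 'custom_metrics', float(np.mean(np.abs(labels - preds))), False

X, y = np.random.rand(500, 10), np.random.rand(500)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)

train_set = lgb_tuner.Dataset(X_tr, label=y_tr)
val_set = lgb_tuner.Dataset(X_val, label=y_val, reference=train_set)

# 'metric': 'None' disables the built-in metrics so only feval is evaluated.
params = {'objective': 'regression', 'metric': 'None', 'verbosity': -1}

# On Optuna 1.5.0 this raises KeyError: 'None' inside
# LightGBMTuner._get_booster_best_score (see the traceback above).
model = lgb_tuner.train(params, train_set, num_boost_round=2500,
                        early_stopping_rounds=50,
                        valid_sets=[train_set, val_set],
                        verbose_eval=100, feval=custom_metrics)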

Additional context (optional)

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

1 reaction
mattiasu96 commented, May 16, 2021

Has this problem been fixed in the latest releases of Optuna? I have the same problem with my code.

import optuna.integration.lightgbm as lgb_sequential
from sklearn.metrics import average_precision_score, log_loss  # log_loss was missing in the original snippet

def tune_lightGBM_sequential(X_train, X_val, y_train, y_val):

    def calculate_ctr(gt):
        # Fraction of positive labels in the ground truth.
        positive = len([x for x in gt if x == 1])
        return positive / float(len(gt))

    def compute_rce(preds, train_data):
        # Relative cross-entropy vs. a constant-CTR "strawman" predictor.
        gt = train_data.get_label()
        cross_entropy = log_loss(gt, preds)
        data_ctr = calculate_ctr(gt)
        strawman_cross_entropy = log_loss(gt, [data_ctr for _ in range(len(gt))])
        rce = (1.0 - cross_entropy / strawman_cross_entropy) * 100.0
        return ('rce', rce, True)

    def compute_avg_precision(preds, train_data):
        gt = train_data.get_label()
        avg_precision = average_precision_score(gt, preds)
        return ('avg_precision', avg_precision, True)

    params = {
        "objective": "binary",
        "metric": "custom",
        "boosting_type": "gbdt",
        "verbose": 2,
    }

    dtrain = lgb_sequential.Dataset(X_train, label=y_train)
    dval = lgb_sequential.Dataset(X_val, label=y_val)

    print('Starting training lightGBM sequential')
    model = lgb_sequential.train(
        params,
        dtrain,
        valid_sets=[dtrain, dval],
        verbose_eval=True,
        num_boost_round=2,
        early_stopping_rounds=100,
        feval=[compute_rce, compute_avg_precision],
    )

    return model.params

This is minimal reproducible code; you just have to pass in the input dataset (you can use scikit-learn's train_test_split, for example). I am using my two custom metrics, and they work fine inside the LightGBM training, but when it comes to selecting the best model across the trials, Optuna fails with the following error:

[W 2021-05-16 15:56:48,759] Trial 0 failed because of the following error: KeyError('custom')
Traceback (most recent call last):
  File "C:\Users\Mattia\anaconda3\envs\rec_sys_challenge\lib\site-packages\optuna\_optimize.py", line 217, in _run_trial
    value_or_values = func(trial)
  File "C:\Users\Mattia\anaconda3\envs\rec_sys_challenge\lib\site-packages\optuna\integration\_lightgbm_tuner\optimize.py", line 251, in __call__
    val_score = self._get_booster_best_score(booster)
  File "C:\Users\Mattia\anaconda3\envs\rec_sys_challenge\lib\site-packages\optuna\integration\_lightgbm_tuner\optimize.py", line 118, in _get_booster_best_score
    val_score = booster.best_score[valid_name][metric]
KeyError: 'custom'

Any solution?

I also tried the different approaches mentioned here: https://github.com/optuna/optuna/issues/861 but I haven't been able to solve the problem.

I would like to use my custom metrics both for early stopping inside the LightGBM training and for selecting the best trial.

The code shown above works fine for the early-stopping part, but fails when Optuna decides which trial is the best.
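
One workaround sketch (assuming the dtrain, dval, and compute_rce definitions from the snippet above, plus a hypothetical search space): skip the tuner and run a plain Optuna study whose objective trains LightGBM with the custom feval and returns that metric, so the same metric drives both early stopping and trial selection. Note this gives up the tuner's stepwise parameter schedule.

import lightgbm as lgb
import optuna

def objective(trial):
    params = {
        'objective': 'binary',
        'metric': 'custom',  # disable built-in metrics; only feval is evaluated
        'boosting_type': 'gbdt',
        # Hypothetical search space, not the tuner's stepwise schedule.
        'num_leaves': trial.suggest_int('num_leaves', 16, 256),
        'learning_rate': trial.suggest_float('learning_rate', 1e-3, 0.3, log=True),
        'verbose': -1,
    }
    booster = lgb.train(params, dtrain, valid_sets=[dval],
                        feval=compute_rce, num_boost_round=500,
                        early_stopping_rounds=100, verbose_eval=False)
    # best_score is keyed by the feval's name, so this lookup succeeds.
    return booster.best_score['valid_0']['rce']

study = optuna.create_study(direction='maximize')  # rce is higher-is-better
study.optimize(objective, n_trials=20)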

1 reaction
smly commented, Jun 15, 2020

Thank you for reporting this issue! I created a workaround patch to address this. I hope this helps. https://gist.github.com/smly/33036bb6b4833865854d94072f2c2902

NOTE: LightGBM Tuner with this patch doesn't exactly match the original LightGBM spec, so we need to note the expected behavior of the Tuner in the documentation.
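
The general shape of such a workaround (a sketch of the idea only, not the gist's exact code) is to fall back to the name the feval actually recorded when params['metric'] is 'None' or 'custom', instead of indexing best_score with that string:

def get_booster_best_score(booster, valid_name, metric):
    # Sketch: hypothetical helper mirroring _get_booster_best_score's lookup.
    scores = booster.best_score[valid_name]
    if metric in ('None', 'custom') and metric not in scores:
        metric = next(iter(scores))  # fall back to the recorded feval name
    return scores[metric]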
