
LightGBMTunerCV not working for regression objective

See original GitHub issue

The script https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_cv.py runs fine as-is. However, if I change the objective to regression and the metric to mse, I get KeyError: 'mse-mean'. A similar error occurs with other metrics when the objective is set to regression.
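For reference, a minimal sketch of the failing setup (the synthetic dataset, the KFold splitter, and the parameter values are illustrative assumptions, not from the original report):

import optuna.integration.lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold

# Synthetic regression data as a stand-in for the original script's dataset.
X, y = make_regression(n_samples=500, n_features=10, random_state=0)
dtrain = lgb.Dataset(X, label=y)

params = {
    "objective": "regression",
    "metric": "mse",  # a LightGBM alias; lgb.cv reports it back as "l2"
    "verbosity": -1,
}

tuner = lgb.LightGBMTunerCV(
    params,
    dtrain,
    folds=KFold(n_splits=3),  # plain KFold; stratified CV does not apply to regression
)
tuner.run()  # raises KeyError: 'mse-mean' with Optuna 2.0.0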

Environment

  • Optuna version: 2.0.0
  • Python version: 3.7
  • OS: MacOS Catalina

Error messages, stack traces, or logs

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-11-7753103b8251> in <module>
     15 )
     16 
---> 17 tuner.run()
     18 
     19 print("Best score:", tuner.best_score)

/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in run(self)
    461         self.sample_train_set()
    462 
--> 463         self.tune_feature_fraction()
    464         self.tune_num_leaves()
    465         self.tune_bagging()

/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in tune_feature_fraction(self, n_trials)
    486 
    487         sampler = optuna.samplers.GridSampler({param_name: param_values})
--> 488         self._tune_params([param_name], len(param_values), sampler, "feature_fraction")
    489 
    490     def tune_num_leaves(self, n_trials: int = 20) -> None:

/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _tune_params(self, target_param_names, n_trials, sampler, step_name)
    567                 timeout=_timeout,
    568                 catch=(),
--> 569                 callbacks=self._optuna_callbacks,
    570             )
    571 

/usr/local/lib/python3.7/site-packages/optuna/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
    290             if n_jobs == 1:
    291                 self._optimize_sequential(
--> 292                     func, n_trials, timeout, catch, callbacks, gc_after_trial, None
    293                 )
    294             else:

/usr/local/lib/python3.7/site-packages/optuna/study.py in _optimize_sequential(self, func, n_trials, timeout, catch, callbacks, gc_after_trial, time_start)
    652                     break
    653 
--> 654             self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial)
    655 
    656             self._progress_bar.update((datetime.datetime.now() - time_start).total_seconds())

/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial_and_callbacks(self, func, catch, callbacks, gc_after_trial)
    683         # type: (...) -> None
    684 
--> 685         trial = self._run_trial(func, catch, gc_after_trial)
    686         if callbacks is not None:
    687             frozen_trial = copy.deepcopy(self._storage.get_trial(trial._trial_id))

/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial(self, func, catch, gc_after_trial)
    707 
    708         try:
--> 709             result = func(trial)
    710         except exceptions.TrialPruned as e:
    711             message = "Trial {} pruned. {}".format(trial_number, str(e))

/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in __call__(self, trial)
    302         cv_results = lgb.cv(self.lgbm_params, self.train_set, **self.lgbm_kwargs)
    303 
--> 304         val_scores = self._get_cv_scores(cv_results)
    305         val_score = val_scores[-1]
    306         elapsed_secs = time.time() - start_time

/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _get_cv_scores(self, cv_results)
    292 
    293         metric = self._get_metric_for_objective()
--> 294         val_scores = cv_results["{}-mean".format(metric)]
    295         return val_scores
    296 

KeyError: 'mse-mean'

Steps to reproduce

  1. Run this script with objective = regression and metric = mse.
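The failing lookup is cv_results["{}-mean".format(metric)] (visible in the traceback above). LightGBM canonicalizes metric aliases, and "mse" is a documented alias of "l2", so with the LightGBM releases of that era lgb.cv returns keys such as "l2-mean" rather than "mse-mean". A small sketch against plain LightGBM illustrates this (synthetic data and settings are illustrative):

import lightgbm as lgb
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200)
dtrain = lgb.Dataset(X, label=y)

res = lgb.cv(
    {"objective": "regression", "metric": "mse", "verbosity": -1},
    dtrain,
    num_boost_round=5,
    nfold=3,
    stratified=False,  # stratified splitting does not apply to continuous targets
)
print(list(res))  # expected: ['l2-mean', 'l2-stdv'] -- no 'mse-mean' key

As a workaround, spelling the metric by its canonical name ("metric": "l2" instead of "mse") should keep the tuner's key lookup consistent with what lgb.cv actually returns.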

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 14 (11 by maintainers)

Top GitHub Comments

1 reaction
thigm85 commented, Aug 11, 2020

@toshihikoyanase I will definitely submit a PR when I make code changes. However, I was referring to general development questions not related to my code changes.

For example, yesterday I cloned the repo and tried to run circleci build --job tests-python37. All tests passed except two related to catalyst:

[Two screenshots (2020-08-11) of the test output showing the failing catalyst tests]

I guess this means that I need to install catalyst, but maybe it should then be included in the dependencies.

Anyway, my question was whether there is a channel dedicated to the kind of questions that arise when one starts to interact with the code from a development point of view.

1 reaction
thigm85 commented, Aug 6, 2020

Yes, extending the _ALIAS_METRIC_LIST would be my first thought. I'm not sure yet of a better way.
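For context, the idea is to map user-supplied aliases to the canonical LightGBM metric name before the tuner builds its "{metric}-mean" key. A rough, illustrative sketch only; the real _ALIAS_METRIC_LIST in Optuna may differ in shape and coverage:

# Aliases below are taken from LightGBM's documented metric aliases;
# the structure is hypothetical, not Optuna's actual _ALIAS_METRIC_LIST.
_METRIC_ALIASES = {
    "mean_squared_error": "l2",
    "mse": "l2",
    "regression_l2": "l2",
    "regression": "l2",
    "mean_absolute_error": "l1",
    "mae": "l1",
    "regression_l1": "l1",
}

def _canonical_metric(metric: str) -> str:
    # Fall back to the given name when it is not a known alias.
    return _METRIC_ALIASES.get(metric, metric)

# The tuner's lookup would then become:
# val_scores = cv_results["{}-mean".format(_canonical_metric(metric))]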

