
KeyError: 'binary_logloss'


Environment

  • Optuna version: 2.8.0
  • Python version: 3.8.5
  • OS: macOS Big Sur version 11.4
  • (Optional) Other libraries and their versions:
  • Docker version: 20.10.7
  • Computer: MacBook Air (M1, 2020)

The environment was built from the following Dockerfile:

FROM ubuntu:20.04
# update and install
RUN apt-get update && apt-get install -y \
    sudo \
    wget \
    vim \
    libx11-6 \
    libx11-dev \
    graphviz

# install anaconda3
WORKDIR /opt
# download anaconda3 package and install anaconda3
RUN wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh && \
    sh Anaconda3-2020.11-Linux-x86_64.sh -b -p /opt/anaconda3 && \
    rm -f Anaconda3-2020.11-Linux-x86_64.sh

# set path
ENV PATH=/opt/anaconda3/bin:$PATH

# update pip
RUN pip install --upgrade pip && pip install \
    graphviz \
    japanize_matplotlib \
    pydotplus \
    xgboost \
    lightgbm \
    optuna

WORKDIR /
RUN mkdir /work
WORKDIR /work

# execute jupyterlab as default command
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root", "--LabApp.token=''"]

Error messages, stack traces, or logs

[W 2021-06-25 17:59:03,714] Trial 0 failed because of the following error: KeyError('binary_logloss')
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.8/site-packages/optuna/_optimize.py", line 216, in _run_trial
    value_or_values = func(trial)
  File "/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py", line 251, in __call__
    val_score = self._get_booster_best_score(booster)
  File "/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py", line 118, in _get_booster_best_score
    val_score = booster.best_score[valid_name][metric]
KeyError: 'binary_logloss'
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-4-cc91ab7b37c4> in <module>
     17 }
     18 
---> 19 model = lgb_o.train(params,
     20                     trains,
     21                     valid_sets=valids,

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/__init__.py in train(*args, **kwargs)
     33 
     34     auto_booster = LightGBMTuner(*args, **kwargs)
---> 35     auto_booster.run()
     36     return auto_booster.get_best_booster()

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in run(self)
    544         self.sample_train_set()
    545 
--> 546         self.tune_feature_fraction()
    547         self.tune_num_leaves()
    548         self.tune_bagging()

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in tune_feature_fraction(self, n_trials)
    569 
    570         sampler = optuna.samplers.GridSampler({param_name: param_values})
--> 571         self._tune_params([param_name], len(param_values), sampler, "feature_fraction")
    572 
    573     def tune_num_leaves(self, n_trials: int = 20) -> None:

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _tune_params(self, target_param_names, n_trials, sampler, step_name)
    652             _timeout = None
    653         if _n_trials > 0:
--> 654             study.optimize(
    655                 objective,
    656                 n_trials=_n_trials,

/opt/anaconda3/lib/python3.8/site-packages/optuna/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
    399             )
    400 
--> 401         _optimize(
    402             study=self,
    403             func=func,

/opt/anaconda3/lib/python3.8/site-packages/optuna/_optimize.py in _optimize(study, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
     63     try:
     64         if n_jobs == 1:
---> 65             _optimize_sequential(
     66                 study,
     67                 func,

/opt/anaconda3/lib/python3.8/site-packages/optuna/_optimize.py in _optimize_sequential(study, func, n_trials, timeout, catch, callbacks, gc_after_trial, reseed_sampler_rng, time_start, progress_bar)
    160 
    161         try:
--> 162             trial = _run_trial(study, func, catch)
    163         except Exception:
    164             raise

/opt/anaconda3/lib/python3.8/site-packages/optuna/_optimize.py in _run_trial(study, func, catch)
    265 
    266     if state == TrialState.FAIL and func_err is not None and not isinstance(func_err, catch):
--> 267         raise func_err
    268     return trial
    269 

/opt/anaconda3/lib/python3.8/site-packages/optuna/_optimize.py in _run_trial(study, func, catch)
    214 
    215     try:
--> 216         value_or_values = func(trial)
    217     except exceptions.TrialPruned as e:
    218         # TODO(mamu): Handle multi-objective cases.

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in __call__(self, trial)
    249         booster = lgb.train(self.lgbm_params, train_set, **kwargs)
    250 
--> 251         val_score = self._get_booster_best_score(booster)
    252         elapsed_secs = time.time() - start_time
    253         average_iteration_time = elapsed_secs / booster.current_iteration()

/opt/anaconda3/lib/python3.8/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _get_booster_best_score(self, booster)
    116             raise NotImplementedError
    117 
--> 118         val_score = booster.best_score[valid_name][metric]
    119         return val_score
    120 

KeyError: 'binary_logloss'
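The failing line is val_score = booster.best_score[valid_name][metric]. The sketch below is a simplified, stdlib-only stand-in for that lookup (assumption: booster.best_score is modeled as a plain nested dict, and the tuner's fallback to 'binary_logloss' is inferred from the Optuna 2.8 source shown in the traceback), illustrating how a params key of 'metrics' instead of 'metric' produces this exact KeyError:

```python
# Simplified stand-in for the lookup in
# optuna/integration/_lightgbm_tuner/optimize.py (assumption: best_score is
# modeled as a plain nested dict, not Optuna's real objects).
params = {"metrics": "rmse"}  # note the key is 'metrics', not 'metric'
best_score = {"valid_0": {"rmse": 3.21}}  # LightGBM recorded 'rmse'

# The tuner reads the literal 'metric' key and falls back to
# 'binary_logloss' when it is absent.
metric = params.get("metric", "binary_logloss")

try:
    val_score = best_score["valid_0"][metric]
except KeyError as err:
    print(f"KeyError: {err}")  # reproduces KeyError: 'binary_logloss'
```

LightGBM itself resolves 'metrics' as an alias and trains normally, which is why the error only surfaces inside the tuner's score lookup.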

Steps to reproduce

The data I originally used comes from a data-analysis competition, but the error should be reproducible with the Boston housing example below.

Reproducible examples

import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
# Assumed import: the traceback shows the Optuna LightGBM tuner being called
# (as lgb_o there), so lgb here is taken to be optuna.integration.lightgbm.
import optuna.integration.lightgbm as lgb

boston = load_boston()
X_array = boston.data
y_array = boston.target
df = pd.DataFrame(X_array, columns = boston.feature_names).assign(MEDV=np.array(y_array))

df_train, df_val = train_test_split(df, test_size=0.2, random_state=71)
col = 'MEDV'
train_y = df_train[col]
train_x = df_train.drop(col, axis=1)

val_y = df_val[col]
val_x = df_val.drop(col, axis=1)

trains = lgb.Dataset(train_x, train_y)
valids = lgb.Dataset(val_x, val_y)

params = {
    'objective': 'regression',
    'metrics': 'rmse',
    'boosting_type': 'gbdt',
    'verbosity': -1
}

model = lgb.train(params,
                  trains,
                  valid_sets=valids,
                  num_boost_round=1000,
                  early_stopping_rounds=100,
                  verbose_eval=100)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5

Top GitHub Comments

nzw0301 commented on Jun 26, 2021 (1 reaction)

Awesome! I’m glad to hear that 😃 Thank you for using optuna in your project!

Cheers,

nzw0301 commented on Jun 26, 2021 (1 reaction)

Thanks! I think one of the keys in params is incorrect: metrics should be metric. What do you think?

params = {
    'objective': 'regression',
    'metric': 'rmse',
    'boosting_type': 'gbdt',
    'verbosity': -1
}
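Plain LightGBM accepts metrics as a documented alias for metric, which is why lgb.train itself does not complain, while the tuner reads the literal 'metric' key. A minimal stdlib sketch of guarding against the typo before handing params over (the helper name is mine, not part of LightGBM or Optuna):

```python
# Hypothetical helper: normalize the common 'metrics' typo to the
# canonical 'metric' key expected by the Optuna LightGBM tuner.
def normalize_metric_key(params: dict) -> dict:
    fixed = dict(params)
    if "metrics" in fixed and "metric" not in fixed:
        fixed["metric"] = fixed.pop("metrics")
    return fixed

print(normalize_metric_key({"objective": "regression", "metrics": "rmse"}))
# {'objective': 'regression', 'metric': 'rmse'}
```

A params dict that already uses 'metric' passes through unchanged.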