
optuna.integration.OptunaSearchCV does not support multi-metric scoring evaluation


Expected behavior

As sklearn.model_selection.RandomizedSearchCV does, optuna.integration.OptunaSearchCV should support multi-metric scoring evaluation.

Environment

  • Optuna version: 2.10.0.dev
  • Python version: 3.8.10
  • OS: Windows-10-10.0.18363-SP0
  • (Optional) Other libraries and their versions:

Error messages, stack traces, or logs

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_111952/153593425.py in <module>
      7 optuna_search = optuna.integration.OptunaSearchCV(clf, param_distributions, scoring=['neg_mean_squared_error', 'accuracy'], refit='neg_mean_squared_error')
      8 X, y = load_iris(return_X_y=True)
----> 9 optuna_search.fit(X, y)
     10 y_pred = optuna_search.predict(X)

c:\Users\me\AppData\Local\pypoetry\Cache\virtualenvs\DKNdU-0-py3.8\lib\site-packages\optuna\integration\sklearn.py in fit(self, X, y, groups, **fit_params)
    842 
    843         self.n_splits_ = cv.get_n_splits(X_res, y_res, groups=groups_res)
--> 844         self.scorer_ = check_scoring(self.estimator, scoring=self.scoring)
    845 
    846         if self.study is None:

c:\Users\me\AppData\Local\pypoetry\Cache\virtualenvs\DKNdU-0-py3.8\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
     61             extra_args = len(args) - len(all_args)
     62             if extra_args <= 0:
---> 63                 return f(*args, **kwargs)
     64 
     65             # extra_args > 0

c:\Users\me\AppData\Local\pypoetry\Cache\virtualenvs\DKNdU-0-py3.8\lib\site-packages\sklearn\metrics\_scorer.py in check_scoring(estimator, scoring, allow_none)
    453                 % estimator)
    454     elif isinstance(scoring, Iterable):
--> 455         raise ValueError("For evaluating multiple scores, use "
    456                          "sklearn.model_selection.cross_validate instead. "
    457                          "{0} was passed.".format(scoring))

ValueError: For evaluating multiple scores, use sklearn.model_selection.cross_validate instead. ['neg_mean_squared_error', 'accuracy'] was passed.

Steps to reproduce

Reproducible examples (optional)

import optuna
from sklearn.datasets import load_iris
from sklearn.svm import SVC

clf = SVC(gamma="auto")
param_distributions = {"C": optuna.distributions.LogUniformDistribution(1e-10, 1e10)}
optuna_search = optuna.integration.OptunaSearchCV(
    clf,
    param_distributions,
    scoring=['neg_mean_squared_error', 'accuracy'],
    refit='neg_mean_squared_error',
)
X, y = load_iris(return_X_y=True)
optuna_search.fit(X, y)
y_pred = optuna_search.predict(X)

Additional context (optional)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (1 by maintainers)

Top GitHub Comments

nzw0301 commented on Sep 21, 2021

According to a dev member, there was no specific reason for not implementing multi-metric scoring. Nice catch!

nzw0301 commented on Sep 19, 2021

@tsuga I’ve only been developing Optuna since mid-2020, so honestly I don’t know 😦

That said, your suggestion makes sense to me, since it provides better consistency with the scikit-learn class. One concern is that Optuna also offers multi-objective optimisation, so users might expect this class with multiple scorers to perform multi-objective optimisation, which is not what we are discussing here. If we implement this feature, we would need to clarify that distinction in the documentation.

I’m also asking the optuna-dev team about the reason, so please give me some time!
