
Follow-up of MLflowCallback


Motivation

Thanks to the great improvement in https://github.com/optuna/optuna/pull/2670, Optuna provides a smoother connection to MLflow via MLflowCallback. But we still have a TODO list of codebase and feature improvements mentioned in the review of https://github.com/optuna/optuna/pull/2670.
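
For context, a minimal usage sketch of the callback (tracking_uri and metric_name are documented parameters of MLflowCallback; the URI and names below are placeholders):

    import optuna
    from optuna.integration import MLflowCallback

    # Log each trial's parameters and objective value to an MLflow server.
    mlflc = MLflowCallback(
        tracking_uri="sqlite:///mlruns.db",  # placeholder tracking URI
        metric_name="value",
    )

    def objective(trial):
        x = trial.suggest_float("x", -10.0, 10.0)
        return (x - 2.0) ** 2

    study = optuna.create_study(study_name="example-study")
    study.optimize(objective, n_trials=10, callbacks=[mlflc])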

The TODO tasks needed to close this issue are discussed in https://github.com/optuna/optuna/issues/2788#issuecomment-898890114.

Please also see the Description below for more detailed context.


Description

Please see the discussion in the review. We can work on the parts of this issue separately as follows:

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5

Top GitHub Comments

1 reaction
nzw0301 commented, Jul 13, 2021

Hi @rafarui, thank you for your feedback! Indeed, when we introduced WeightsAndBiasesCallback recently, we gave it a wandb_kwargs argument that is used for the initialisation of wandb. So we can add a similar argument to MLflowCallback for a more flexible interface, as you mentioned.
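
A rough sketch of what forwarding such an argument might look like, mirroring the wandb_kwargs pattern (the mlflow_kwargs name and the class body here are illustrative assumptions, not the actual Optuna implementation):

    import mlflow

    class MLflowCallbackSketch:
        """Illustrative only: forwards user-supplied kwargs to mlflow.start_run,
        as wandb_kwargs does for wandb.init in WeightsAndBiasesCallback."""

        def __init__(self, metric_name="value", mlflow_kwargs=None):
            self._metric_name = metric_name
            self._mlflow_kwargs = mlflow_kwargs or {}

        def __call__(self, study, trial):
            # e.g. mlflow_kwargs={"run_name": "trial", "nested": True}
            with mlflow.start_run(**self._mlflow_kwargs):
                mlflow.log_params(trial.params)
                if trial.value is not None:
                    mlflow.log_metric(self._metric_name, trial.value)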

1 reaction
rafarui commented, Jul 13, 2021

If I can suggest one more point.

  • Decouple the study name from the MLflow experiment

Optuna sets the experiment name for the user via initialize_experiment(self, study: optuna.study.Study). I think we should let the user choose whether they want to create an experiment.

An experiment name is not necessarily the study name; I can have many studies inside the same experiment. If you are not using the in-memory database, you will have problems. In my use case, studies are uniquely identified so that they can be tracked inside one MLflow experiment. This can also be a problem if you are using Databricks: the experiment name is defined in Databricks and is not necessarily the same as the study name. I can have many studies in one single experiment.
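
A sketch of what this decoupling could look like, assuming a hypothetical create_experiment flag: the user selects the MLflow experiment up front (for example, one fixed in Databricks) and the callback only records runs inside it, so many uniquely named studies can share one experiment.

    import mlflow
    import optuna
    from optuna.integration import MLflowCallback

    # The user, not the callback, decides which experiment to log into.
    mlflow.set_experiment("/Shared/hyperparameter-search")  # placeholder name

    # create_experiment=False is a hypothetical flag: it would skip deriving
    # the experiment name from study.study_name in initialize_experiment.
    mlflc = MLflowCallback(metric_name="value", create_experiment=False)

    def objective(trial):
        x = trial.suggest_float("x", -1.0, 1.0)
        return x ** 2

    # Several uniquely named studies tracked inside the same experiment.
    for name in ["study-a", "study-b"]:
        study = optuna.create_study(study_name=name)
        study.optimize(objective, n_trials=5, callbacks=[mlflc])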

I am using this, so I might have a proposal soon.


Top Results From Across the Web

  • MLflow Tracking — MLflow 2.0.1 documentation
    MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs.
  • Callbacks - Hugging Face
    Callbacks are objects that can customize the behavior of the training loop in the PyTorch Trainer (this feature is not yet implemented in...
  • optuna.integration.MLflowCallback - Read the Docs
    Callback to track Optuna trials with MLflow. This callback adds relevant information that is tracked by Optuna to MLflow.
  • Experiment Tracking Template with Keras and Mlflow
  • How to Use MLflow To Reproduce Results and Retrain Saved ...
    In part 2 of our series on MLflow blogs, we demonstrated how to use MLflow to track experiment results for a Keras network...
