
[BUG] run_name doesn't work with start_run

See original GitHub issue

Issues Policy acknowledgement

  • I have read and agree to submit bug reports in accordance with the issues policy

Willingness to contribute

Yes. I can contribute a fix for this bug independently.

MLflow version

  • Client: 1.30.0

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Ventura 13.0
  • Python version: 3.9.10
  • yarn version, if running the dev UI: N/A

Describe the problem

When using MLflow's start_run as a context manager, the specified run_name isn't saved and doesn't appear in the MLflow UI.

This issue only happens with version 1.30.0. I also tried versions 1.27.0, 1.28.0, and 1.29.0, and for all of them the run_name was saved properly.

Tracking information

  • MLflow version: 1.30.0
  • Tracking URI: https://XXX.com/
  • Artifact URI: s3://XXX/XXX/artifacts

Code to reproduce issue

import mlflow

mlflow.set_experiment("mlflow_issue_experiment")
with mlflow.start_run(run_name="test_version_1.30.0"):
    mlflow.log_metric("score", 1)
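
For reference, one way to check what name actually got recorded (a small sketch added for illustration, not part of the original report; it assumes the UI reads the run name from the mlflow.runName tag, as older tracking servers do) is to fetch the run after the context manager exits and inspect that tag:

import mlflow

mlflow.set_experiment("mlflow_issue_experiment")
with mlflow.start_run(run_name="test_version_1.30.0") as run:
    mlflow.log_metric("score", 1)
    run_id = run.info.run_id

# Fetch the finished run and look at the tag the UI derives the run name from.
fetched = mlflow.get_run(run_id)
print(fetched.data.tags.get("mlflow.runName"))  # on an affected 1.30.0 setup this prints None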

Stack trace

N/A

Other info / logs

No response

What component(s) does this bug affect?

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/pipelines: Pipelines, Pipeline APIs, Pipeline configs, Pipeline Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

What interface(s) does this bug affect?

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

What language(s) does this bug affect?

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

What integration(s) does this bug affect?

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

Issue Analytics

  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 9 (7 by maintainers)

Top GitHub Comments

1 reaction
Cokral commented, Nov 2, 2022

@harupy thanks! I opened a PR. It's the first time I've done any open-source work, so I might have missed something.

@dbczumar it seems to be quite an old version, 1.20.2.

Thank you guys for the quick feedback in any case 😃

1 reaction
harupy commented, Nov 2, 2022

@Cokral We can make the following change to fix this issue. Looking forward to your PR 😃

diff --git a/mlflow/tracking/fluent.py b/mlflow/tracking/fluent.py
index a67a0f770..3cfb49473 100644
--- a/mlflow/tracking/fluent.py
+++ b/mlflow/tracking/fluent.py
@@ -33,6 +33,7 @@ from mlflow.utils.autologging_utils import (
 from mlflow.utils.import_hooks import register_post_import_hook
 from mlflow.utils.mlflow_tags import (
     MLFLOW_PARENT_RUN_ID,
+    MLFLOW_RUN_NAME,
     MLFLOW_RUN_NOTE,
     MLFLOW_EXPERIMENT_PRIMARY_METRIC_NAME,
     MLFLOW_EXPERIMENT_PRIMARY_METRIC_GREATER_IS_BETTER,
@@ -342,6 +343,8 @@ def start_run(
             user_specified_tags[MLFLOW_RUN_NOTE] = description
         if parent_run_id is not None:
             user_specified_tags[MLFLOW_PARENT_RUN_ID] = parent_run_id
+        if run_name:
+            user_specified_tags[MLFLOW_RUN_NAME] = run_name
 
         resolved_tags = context_registry.resolve_tags(user_specified_tags)
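
Until a release containing this fix is available, a possible user-side workaround (a sketch based on the diff above, not something posted in the thread, and assuming the server reads the run name from the mlflow.runName tag) is to pass that tag explicitly to start_run, mirroring what the patched code does internally:

import mlflow
from mlflow.utils.mlflow_tags import MLFLOW_RUN_NAME  # resolves to "mlflow.runName"

mlflow.set_experiment("mlflow_issue_experiment")
# Setting the tag by hand replicates the one-line fix in the diff above.
with mlflow.start_run(run_name="test_version_1.30.0",
                      tags={MLFLOW_RUN_NAME: "test_version_1.30.0"}):
    mlflow.log_metric("score", 1)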