[FR] Still show Run ID in the UI when a Run Name is specified
Willingness to contribute
No. I cannot contribute this feature at this time.
Proposal Summary
When you specify a run name, the run ID is no longer visible in the UI (neither on the Experiment page nor on that run’s page). It would be very helpful to keep the run ID on at least one of those pages, so it can be accessed without searching for it in the backend database.
Motivation
What is the use case for this feature?
Run IDs are unique identifiers and, as such, are valuable as references to specific runs across scripts.
Why is this use case valuable to support for MLflow users in general?
Specifying a run name is a useful feature, but it currently comes at the expense of losing the run ID in the UI, which is also useful. It would be valuable to see both a run’s name and its unique identifier.
Why is this use case valuable to support for your project(s) or organization?
We use MLflow run names similarly to git commit messages, as a brief note on why a run happened. This helps with inter-developer collaboration and keeps a quick record of what changed between runs. We also use the MLflow run ID to identify the unique model artifact (e.g., a model.joblib artifact) that we want to promote. For us, promotion means uploading that model artifact from successful runs to an S3 bucket, using the run ID in the object key name within the bucket, and then using SageMaker’s Python SDK to deploy that model on AWS.
Why is it currently difficult to achieve this use case?
The run ID is lost in the UI when you specify a run name, so checking a run’s ID currently requires querying the backend database each time, which is inconvenient.
Details
No response
What component(s) does this bug affect?
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging
What interface(s) does this bug affect?
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support
What language(s) does this bug affect?
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages
What integration(s) does this bug affect?
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations
Issue Analytics
- State:
- Created a year ago
- Comments: 12 (7 by maintainers)
Sure, we’ll link it once it’s created 😃
Awesome, thank you! Could you please link the different PR for Column selector on this issue as well? Much appreciated