
[FR] Be able to set the description of a run programmatically

See original GitHub issue

Thank you for submitting a feature request. Before proceeding, please review MLflow’s Issue Policy for feature requests and the MLflow Contributing Guide.

Please fill in this feature request template to ensure a timely and thorough response.

Willingness to contribute

The MLflow Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature (either as an MLflow Plugin or an enhancement to the MLflow code base)?

  • Yes. I can contribute this feature independently.
  • Yes. I would be willing to contribute this feature with guidance from the MLflow community.
  • No. I cannot contribute this feature at this time.

Proposal Summary

Not sure if this is possible already, but it would be nice to have some functionality to set the description of a run programmatically. Checking through the documentation and issues, I haven’t found a way of doing this yet. This could manifest as one or more of the following (a rough sketch of the proposed API follows the list):

  • mlflow.start_run would accept a description parameter of type Optional[str].
  • A new function mlflow.set_description would take a single string description.
  • As a possible extra, mlflow.get_description would retrieve the current description.
  • The mlflow.entities module would probably need to be modified as well.
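Since none of these APIs exist yet, here is a purely hypothetical sketch of how the proposal might look from the caller’s side (description, set_description, and get_description are proposed names, not real MLflow functions):

import mlflow

# Hypothetical: pass the description when the run starts.
with mlflow.start_run(run_name="lr-sweep", description="Baseline sweep over learning rates."):
    mlflow.log_param("lr", 0.01)

    # Hypothetical: update and read the description mid-run.
    mlflow.set_description("Baseline sweep over learning rates, cosine schedule.")
    print(mlflow.get_description())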

Motivation

  • What is the use case for this feature? Being able to provide a text description at the start of a run when submitting a job from the command line, instead of going to the web UI after the job is triggered.

  • Why is this use case valuable to support for MLflow users in general? Users could start runs with a particular set of hyper-parameters and provide a description of what each run is trying to achieve or test, without having to start the server.

  • Why is this use case valuable to support for your project(s) or organization? At the moment I am triggering runs using argument parsers in the command line. It would be nice to set the description at the point of starting the run and not have to worry about it later.

  • Why is it currently difficult to achieve this use case? (please be as specific as possible about why related MLflow features and components are insufficient) It isn’t much effort for me to set the description in the web UI. But since it is already possible to set tags programmatically, being able to set descriptions as well would make the logging more feature-complete.

What component(s), interfaces, languages, and integrations does this feature affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interfaces

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Languages

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

Details

If this feature exists or has been raised already, feel free to close the issue.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

2 reactions
BenWilson2 commented, Mar 25, 2022

In the meantime, though, if you would like to populate this field within the UI, the tag key that you need to set is:

'mlflow.note.content'

So, to set it during run creation, you’ll do something like this:


import mlflow

# A multi-line description that will appear in the run's "Description" box in the UI.
project_description = (
    "This is a test of writing something that explains to people looking at the UI entry just why "
    "I chose to run this. It may be for my future self reference too. \n"
    "If so, hey there, future me. What's up? How's it going?"
)

# Setting the reserved tag "mlflow.note.content" at run creation populates the description.
with mlflow.start_run(
    run_name="My Custom Run!",
    tags={
        "My_custom_tag": "A very interesting project",
        "mlflow.note.content": project_description,
    },
):
    mlflow.log_param("data", 42)
    mlflow.log_metric("awesome_level", 9001)
And you’ll see the description box populated with text.
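If you need to change the description after a run has already been created, rewriting the same reserved tag through the tracking client should work too; a minimal sketch using the standard MlflowClient.set_tag API:

import mlflow
from mlflow.tracking import MlflowClient

with mlflow.start_run() as run:
    mlflow.log_param("data", 42)

# Overwrite the finished run's description by rewriting the reserved tag.
MlflowClient().set_tag(run.info.run_id, "mlflow.note.content", "Updated after the fact.")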

0 reactions
dogeplusplus commented, Mar 26, 2022

Now that it’s the weekend I had some time to give this a go: #5534. Feel free to make changes directly on it or suggest ones that I can try to implement. I’ll try to re-run the doc build once the jinja2 issues are resolved.

Read more comments on GitHub >

Top Results From Across the Web

Set run description programmatically in mlflow - Stack Overflow
There are two ways to set the description. 1. description parameter. You can set a description using a markdown string for your run...
Read more >
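For context on that first result: the PR linked in the comments above was merged, and recent MLflow releases accept a description argument on mlflow.start_run directly, stored under the same mlflow.note.content tag and rendered as markdown. Assuming such a release, usage looks roughly like:

import mlflow

# On recent MLflow versions, the description can be passed at run creation.
with mlflow.start_run(description="**Baseline** run; see the linked PR for details."):
    mlflow.log_metric("accuracy", 0.9)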
How to: Write Services Programmatically - .NET Framework
You must create a Main method for your service project that defines the services to run and calls the Run method on them....
Read more >
How can I set descriptions on snapshots and baselines ...
In the GUI it is possible to set and modify a description for snapshots and baselines. However, our team automates everything.
Read more >
Programmatically Create and Run Test Sequence Scenarios
This example shows how to create and define multiple test scenarios in a single Test Sequence block. Being able to define more than...
Read more >
MLflow Tracking — MLflow 2.0.1 documentation
You can optionally organize runs into experiments, which group together runs for a specific task. You can create an experiment using the mlflow...
Read more >
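Related to that last result: grouping runs into a named experiment is done with mlflow.set_experiment, which creates the experiment if it does not already exist. A minimal sketch:

import mlflow

# Creates the experiment if missing, then makes it the active one for new runs.
mlflow.set_experiment("description-demo")

with mlflow.start_run(run_name="grouped-run"):
    mlflow.log_param("seed", 7)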
