Warning (artifact name is not specified) when tracking multiple metrics.
This part of the code throws warnings and tracks only the last metric:
aim_session.track(m1_l1_loss, name='m1_l1_loss', epoch=step)
aim_session.track(m2_l1_loss, name='m2_l1_loss', epoch=step)
aim_session.track(m2_l2_loss, name='m2_l2_loss', epoch=step)
aim_session.track(avg_loss, name='loss', epoch=step)
Something went wrong in _track_body: artifact name is not specified
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 1
- Comments: 11 (7 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@felixkreuk I solved it by replacing the torch tensors with plain Python numbers: m1_l1_loss to m1_l1_loss.item(), and so on.
Yea, good call. Will open an issue now.
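The workaround above can be sketched generically: unwrap any torch scalar tensor into a plain Python number with .item() before handing it to the tracker. This is a minimal, hedged sketch that avoids importing torch or Aim: FakeTensor and to_number are hypothetical stand-ins introduced here for illustration; only the torch.Tensor.item() behavior they mimic is real.

```python
class FakeTensor:
    """Hypothetical stand-in for a torch scalar tensor."""
    def __init__(self, value):
        self._value = value

    def item(self):
        # torch.Tensor.item() returns the tensor's value as a plain Python number
        return self._value


def to_number(value):
    """Unwrap a tensor-like value; plain numbers pass through unchanged."""
    return value.item() if hasattr(value, "item") else value


m1_l1_loss = FakeTensor(0.25)   # would be a torch tensor in the real code
avg_loss = 0.5                  # already a plain float

print(to_number(m1_l1_loss))    # a plain float, safe to pass to the tracker
print(to_number(avg_loss))      # unchanged
```

In the real training loop this corresponds to calling, e.g., aim_session.track(m1_l1_loss.item(), name='m1_l1_loss', epoch=step) for each metric, so the tracker receives numbers rather than tensors.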