Record history of hparams metrics
See original GitHub issue

I’m unsure how to record metrics during training such that, when I call add_hparams() at the end, the “show metrics” graphs in the HPARAMS tab contain more than a single value. What I would like to do is something like this:
```python
from tensorboardX import SummaryWriter

w = SummaryWriter()
w.add_scalar("loss", 10, 0)
w.add_scalar("loss", 11, 1)
w.add_scalar("loss", 12, 2)
w.add_hparams({'lr': X, 'bsize': Y, 'n_hidden': Z}, {'loss': 13})
```
And then the “show metrics” graph would have the four points [10, 11, 12, 13] plotted for the loss metric.
Thanks!
Issue Analytics
- State:
- Created 4 years ago
- Reactions: 1
- Comments: 5 (2 by maintainers)
Top Results From Across the Web

Hyperparameter Tuning with the HParams Dashboard
The HParams dashboard in TensorBoard provides several tools to help with this process of identifying the best experiment or most promising sets...

TensorBoard: Hyperparameter Optimization
We also set the metrics as accuracy to be displayed on the TensorBoard... hp.hparams(hparams) # record the values used in this trial...

python - tensorflow tensorboard hparams - Stack Overflow
I have tried to use hparams in TF. I have set dropout, l2 and OPTIMIZER. I need to set value for...

Deep Dive Into TensorBoard: Tutorial With Examples
In this piece, we'll focus on TensorFlow's open-source visualization toolkit TensorBoard. The tool enables you to track various metrics such as accuracy and...

Logging — PyTorch Lightning 1.8.5.post0 documentation
If you want to track a metric in the tensorboard hparams tab, log scalars to the key hp_metric. If tracking multiple metrics,...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@rohaldbUni The TensorBoard HParams plugin will aggregate and report the scalars from your run on the reporting page; you just need to manually write the experiment overview into the TensorBoard event stream. For your example, that should report the full “loss” trace on the hparams summary page.
@lanpa Is this repo the primary development point for tensorboardX, or is the module being folded into PyTorch mainline development? I’ve noticed some discussion about this in https://github.com/pytorch/pytorch/pull/23134 and https://github.com/pytorch/pytorch/issues/16838.
I’m currently using a method like the one above for hparam reporting in PyTorch, but I’d be happy to expand and document the current tensorboardX interface to cover this use case.