
Record history of hparams metrics


I’m unsure how to record metrics during training so that, when I call add_hparams() at the end, the metrics graphs in the HPARAMS tab contain more than a single value. What I would like to do is something like this:

w.add_scalar("loss", 10, 0)
w.add_scalar("loss", 11, 1)
w.add_scalar("loss", 12, 2)
w.add_hparams({'lr': X, 'bsize': Y, 'n_hidden': Z}, {'loss': 13})

The metrics graph would then show all four points [10, 11, 12, 13] for the loss metric.

Thanks!

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

2 reactions
asford commented, Aug 26, 2019

@rohaldbUni The TensorBoard HParams plugin will aggregate scalars from your run and report them on the summary page; you just need to manually write the experiment overview into the TensorBoard event stream. For your example:


# Declare the experiment up front: the hparam values plus the metric names
# (the metric values themselves come from the scalars logged below).
experiment, start_summary, end_summary = tensorboardX.summary.hparams(
    {'lr': X, 'bsize': Y, 'n_hidden': Z}, {'loss': None}
)

w.file_writer.add_summary(experiment)
w.file_writer.add_summary(start_summary)

w.add_scalar("loss", 10, 0)
w.add_scalar("loss", 11, 1)
w.add_scalar("loss", 12, 2)
w.add_scalar("loss", 13, 3)

This should report the full “loss” trace on the hparams summary page.

0 reactions
asford commented, Aug 26, 2019

@lanpa Is this repo the primary development point for tensorboardX, or is the module being folded into PyTorch mainline development? I’ve noticed some discussion in https://github.com/pytorch/pytorch/pull/23134 and https://github.com/pytorch/pytorch/issues/16838.

I’m currently using a method like the one above for hparam reporting in pytorch, but I’d be happy to expand and document the current tensorboardX interface to cover this use case.
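For comparison, the interface that later landed in torch.utils.tensorboard covers this use case with SummaryWriter.add_hparams. A minimal sketch, assuming a recent PyTorch install (directory names and values are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

w = SummaryWriter("runs/torch_hparam_demo")

# Log the per-step metric as usual.
for step, loss in enumerate([10, 11, 12]):
    w.add_scalar("loss", loss, step)

# add_hparams writes the experiment summaries itself, but into its own
# timestamped sub-run, so the final metric value is passed here rather
# than merged with the scalar trace above.
w.add_hparams({"lr": 0.01, "bsize": 32, "n_hidden": 128}, {"loss": 13})
w.close()
```

The sub-run behavior is the main difference from the manual event-stream approach above, which keeps the scalars and the hparams session in the same run.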

