
Tensorboard Logging

See original GitHub issue

Currently tensorboard_logger is used. I don't think it's actively supported anymore; they point to PyTorch's own tensorboard module on their GitHub. Why bother? tensorboard_logger uses a singleton default logger, and I cannot reset the path where the tensorboard event files are written. That would be helpful, though, to distinguish between different configurations when refitting configurations from the incumbent-trajectory 😃
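For context, a rough sketch of the difference being described: tensorboard_logger's module-level default logger is configured once, while PyTorch's own module lets each refit get its own writer and event-file directory. The directory names and configuration id are invented here, and the configure/log_value calls are an assumption about tensorboard_logger's module-level API.

# tensorboard_logger: the default logger is a module-level singleton,
# so the event-file path is fixed after the first configure() call
from tensorboard_logger import configure, log_value

configure("runs/default")        # can only point the default logger once
log_value("loss", 0.5, step=0)

# torch.utils.tensorboard: a fresh writer (and directory) per configuration
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/config_0042")  # hypothetical config id
writer.add_scalar("loss", 0.5, global_step=0)
writer.close()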

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
maxmarketit commented, Jun 12, 2020

@shukon I might not understand the situation here, but I do not see any problem: if each NN training starts and ends its own SummaryWriter, is there really any requirement to pass the writer around?

I think we can set a root directory, and every trained model can be saved to a subdirectory whose name conveys the main architecture, e.g. 'resnet-100-20-30' or 'shapedmlpnet-diamond-…-…'; as the optimization goes on, it could change to 'resnet-100-20-30-i01', '-i02', '-i03', etc. (a rough sketch of this layout is below).
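A minimal sketch of that layout, assuming torch.utils.tensorboard; the root directory, the naming helper, and the refit index are illustrative only.

import os
from torch.utils.tensorboard import SummaryWriter

ROOT = "runs"  # hypothetical root directory for all event files

def make_writer(arch_name: str, refit_index: int) -> SummaryWriter:
    # e.g. runs/resnet-100-20-30-i01, runs/resnet-100-20-30-i02, ...
    log_dir = os.path.join(ROOT, f"{arch_name}-i{refit_index:02d}")
    return SummaryWriter(log_dir=log_dir)

writer = make_writer("resnet-100-20-30", 1)
writer.add_scalar("train/loss", 0.42, global_step=0)  # dummy value
writer.close()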

1 reaction
maxmarketit commented, Jun 9, 2020

In my opinion, it would be better if separate logging were set up for each model.

Something like,

from torch.utils.tensorboard import SummaryWriter

# one writer per model, each with its own log directory under a common root
writer = SummaryWriter(log_dir="runs/model-name")

A different writer for each model. Each model will have its own logging directory, and we can run tensorboard --logdir root to see and compare all the models.

I think we can have another writer for monitoring the loss across all the models…

But as the number of models gets large, it will soon become very crowded.

You can, however, filter some of the models.
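A rough sketch of that combination, assuming one shared "overview" writer next to the per-model writers; the directory names, model names, and dummy loss values are invented for illustration.

from torch.utils.tensorboard import SummaryWriter

# hypothetical: one writer per model plus a shared writer for comparison
overview = SummaryWriter(log_dir="runs/overview")
model_writers = {
    name: SummaryWriter(log_dir=f"runs/{name}")
    for name in ("resnet-100-20-30", "shapedmlpnet-diamond")
}

for step in range(3):
    for name, writer in model_writers.items():
        loss = 1.0 / (step + 1)  # dummy value for illustration
        writer.add_scalar("train/loss", loss, step)
        # tag the shared writer per model so the curves stay distinguishable
        overview.add_scalar(f"loss/{name}", loss, step)

for writer in model_writers.values():
    writer.close()
overview.close()

Running tensorboard --logdir runs then lists every model as a separate run, and the run selector can be used to filter out models once the list gets crowded.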

Read more comments on GitHub >

Top Results From Across the Web

  • Get started with TensorBoard - TensorFlow: TensorBoard.dev is a free public service that enables you to upload your TensorBoard logs and get a permalink that can be shared with…
  • Deep Dive Into TensorBoard: Tutorial With Examples: The tool enables you to track various metrics such as accuracy and log loss on a training or validation set. As we shall see…
  • Tensorboard quick start in 5 minutes - Anthony Sarkis - Medium: Start the Tensorboard server (< 1 min). Open a terminal window in your root project directory. Run: tensorboard --logdir logs/1.
  • How to use TensorBoard with PyTorch: In this tutorial we are going to cover TensorBoard installation, basic usage with PyTorch, and how to visualize data you logged in TensorBoard…
  • TensorBoard Tutorial: Run Examples & Use Logdir - DataCamp: Starting TensorBoard · Open up the command prompt (Windows) or terminal (Ubuntu/Mac) · Go into the project home directory · If you are…
