Tensorboard Logging
See original GitHub issue

Currently tensorboard_logger is used. I don't think it's actively supported anymore; on their GitHub they point to PyTorch's own tensorboard module.

Why bother? tensorboard_logger uses a singleton default logger, so I cannot reset the path that the tensorboard event files are written to. Being able to do that would be helpful for distinguishing between different configurations when refitting configurations from the incumbent trajectory 😃
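For illustration, here is a minimal sketch of the difference, assuming the usual tensorboard_logger configure()/log_value() workflow and PyTorch's torch.utils.tensorboard.SummaryWriter; the paths, metric names, and values are made up:

```python
# tensorboard_logger: one global default logger, configured once per process.
# Reconfiguring the path for a new run is not possible with the default logger.
# from tensorboard_logger import configure, log_value
# configure("runs/incumbent-0")
# log_value("val_loss", 0.42, step=1)

# torch.utils.tensorboard: each SummaryWriter owns its own log directory, so
# every refit of an incumbent configuration can write to a fresh location.
from torch.utils.tensorboard import SummaryWriter

for i, config_name in enumerate(["incumbent-0", "incumbent-1"]):
    writer = SummaryWriter(log_dir=f"runs/{config_name}")
    writer.add_scalar("val_loss", 0.42 - 0.01 * i, global_step=1)
    writer.close()
```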
Issue Analytics
- Created: 3 years ago
- Comments: 5 (1 by maintainers)
@shukon I might not fully understand the situation here, but I don't see a problem: if each training of a NN starts and ends its own SummaryWriter, is there really any need to pass the writer around? I think we can set a root directory, and every trained model can be saved to a subdirectory whose name conveys the main architecture, e.g. 'resnet-100-20-30' or 'shapedmlpnet-diamond-…-…', and as the optimization goes on it could change to 'resnet-100-20-30-i01', '-i02', '-i03', etc.
In my opinion, it would be better if separate logging were done for each model.
Something like: a different writer for each model. Each model would have its own logging directory, and we could run tensorboard --logdir root to see and compare all the models. I think we could have another writer for monitoring the loss across all the models… but as the number of models grows, that view will soon become very crowded. You can, however, filter out some of the models.
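A rough sketch of that layout, using hypothetical directory and model names that follow the naming scheme suggested above:

```python
from torch.utils.tensorboard import SummaryWriter

root = "runs"
models = ["resnet-100-20-30-i01", "shapedmlpnet-diamond-i01"]

# One writer per model, each writing to its own subdirectory under `root`.
for step, name in enumerate(models):
    writer = SummaryWriter(log_dir=f"{root}/{name}")
    writer.add_scalar("train/loss", 1.0 / (step + 1), global_step=step)
    writer.close()

# A separate writer that aggregates one metric across all models.
overview = SummaryWriter(log_dir=f"{root}/overview")
overview.add_scalar("incumbent/val_loss", 0.42, global_step=0)
overview.close()

# Compare everything in one TensorBoard instance:
#   tensorboard --logdir runs
```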