SummaryWriter on each process?
See original GitHub issue

Description
It looks like figures are being drawn to TensorBoard on every process. I could be wrong, but in the multi-GPU/multi-node case, would it be better to draw only on the rank-zero process using pytorch_lightning.utilities.distributed.rank_zero_only?
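For context, a minimal sketch of what such a guard could look like with that decorator; `log_figures` and its arguments are hypothetical names for illustration, not part of the library:

```python
from pytorch_lightning.utilities import rank_zero_only  # older versions: pytorch_lightning.utilities.distributed

@rank_zero_only
def log_figures(writer, figures, step):
    """Draw figures on the global-rank-zero process only; a no-op on all other ranks."""
    for name, fig in figures.items():
        writer.add_figure(name, fig, global_step=step)
```

Because `rank_zero_only` turns the call into a no-op on ranks > 0, only one event file would receive the figures.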
Steps to reproduce
I was looking into why validation was slow and noticed the plotting portion. Though this probably wouldn't speed anything up, I wonder whether the processes are overwriting each other's TensorBoard logs. Could be missing something though!
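As a sketch of how one might avoid duplicate writers outside of Lightning, assuming a hypothetical `make_writer` helper (note: `SummaryWriter` names its event files with a timestamp, hostname, and PID, so concurrent writers typically append separate files rather than literally overwriting one another, but the duplicates still clutter the run directory):

```python
import torch.distributed as dist
from torch.utils.tensorboard import SummaryWriter

def make_writer(logdir: str):
    """Hypothetical helper: build a real SummaryWriter only on rank zero."""
    if dist.is_available() and dist.is_initialized() and dist.get_rank() != 0:
        return None  # non-zero ranks get no writer and skip logging
    return SummaryWriter(log_dir=logdir)
```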
Version
0.4.0.dev0
Issue Analytics
- Created: a year ago
- Reactions: 1
- Comments: 5
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I think it's because of this:
https://github.com/Lightning-AI/lightning/blob/4eb7766f3c222b95216f5e2831894a5143f70882/src/pytorch_lightning/loggers/tensorboard.py#L157-L158
and
https://github.com/Lightning-AI/lightning/blob/4eb7766f3c222b95216f5e2831894a5143f70882/src/pytorch_lightning/loggers/logger.py#L34
returning a DummyExperiment.
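In outline, those lines implement something like the following; this is a simplified paraphrase of the pattern, not the library's exact code:

```python
from functools import wraps
from pytorch_lightning.utilities import rank_zero_only

class DummyExperiment:
    """Simplified stand-in that swallows any method call as a no-op."""
    def _nop(self, *args, **kwargs):
        return None
    def __getattr__(self, _name):
        return self._nop

def rank_zero_experiment(fn):
    """Return the real experiment on rank 0, a DummyExperiment elsewhere."""
    @wraps(fn)
    def experiment(self):
        @rank_zero_only
        def get_experiment():
            return fn(self)
        # rank_zero_only returns None on ranks > 0, so fall back to the dummy
        return get_experiment() or DummyExperiment()
    return experiment
```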
Turns out I was wrong about this!
Adrian Wälchli: “It’s taken care of by Lightning already. If you use self.log or, like you show above, trainer.logger.experiment, it won’t log on rank > 0. We consider this boilerplate, and thus handle it for the user.”
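In practice that means both logging paths are already safe under DDP. A minimal hypothetical module illustrating the two paths the quote mentions:

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    """Hypothetical minimal module, for illustration only."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def validation_step(self, batch, batch_idx):
        loss = self.layer(batch).mean()
        # DDP-aware: Lightning handles the rank bookkeeping for self.log(...)
        self.log("val_loss", loss)
        # On ranks > 0, self.logger.experiment is a DummyExperiment, so
        # add_scalar is a no-op and only rank zero's event file gets the entry.
        self.logger.experiment.add_scalar("val_loss_raw", loss.item(), self.global_step)
```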