
Tensorboard: PyTorch writer.add_scalars() does not work as intended.

See original GitHub issue

I tried the code as follows:

from torch.utils.tensorboard import SummaryWriter
import wandb

# Patch tensorboard so wandb picks up events written by PyTorch's SummaryWriter
wandb.tensorboard.patch(tensorboardX=False, pytorch=True)
wandb.init()

writer = SummaryWriter()
scalars = {
    'scalar_1': 1.0,
    'scalar_2': 2.0,
    'scalar_3': 3.0
}
# Log all three scalars under the main tag 'test'
writer.add_scalars('test', scalars, global_step=1)
writer.close()
wandb.finish()

My observations:

  1. There is a warning with my implementation:

     wandb: WARNING When using several event log directories, please call wandb.tensorboard.patch(root_logdir="...") before wandb.init

     I tried figuring out the root_logdir parameter, but due to the unavailability of a docstring or documentation I was not able to remove the warning.

  2. If we ignore the warning, the tensorboard integration in wandb does not really work like the one in Colab.
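For reference, the warning asks for wandb.tensorboard.patch(root_logdir="...") to be called before wandb.init(); what is undocumented is how that root should be chosen. A minimal stdlib-only sketch, under the assumption (mine, not wandb's documented behavior) that the longest common path of all event directories is a reasonable root. The directory names below are illustrative:

```python
import os

def infer_root_logdir(event_dirs):
    """Guess a single root log directory containing every event-file
    directory, as the wandb warning asks for."""
    # Assumption: the longest common path of all log directories is a
    # natural candidate for root_logdir.
    return os.path.commonpath(event_dirs)

# add_scalars('test', ...) writes one sub-directory per scalar:
dirs = [
    "runs/exp1/test_scalar_1",
    "runs/exp1/test_scalar_2",
    "runs/exp1/test_scalar_3",
]
root = infer_root_logdir(dirs)
print(root)
```

With such a root in hand, the order would be wandb.tensorboard.patch(root_logdir=root) followed by wandb.init().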

The event files logged by the SummaryWriter are structured into per-scalar sub-folders, but in the wandb folder the event files are not structured the same way.

[image: SummaryWriter event-file structure] [image: wandb folder contents]

Here we do not see the folders test_scalar_1, test_scalar_2, and test_scalar_3.
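For context, PyTorch's add_scalars spawns one child event writer per key, logging each into a sub-directory named <logdir>/<main_tag>_<key>, which is where the test_scalar_* folders above come from. A stdlib-only sketch of that naming scheme (directory names are illustrative; no torch required):

```python
import os

def add_scalars_subdirs(logdir, main_tag, scalars):
    # SummaryWriter.add_scalars creates one child event directory per
    # scalar key, named "<logdir>/<main_tag>_<key>".
    return [os.path.join(logdir, f"{main_tag}_{key}") for key in scalars]

subdirs = add_scalars_subdirs(
    "runs", "test", {"scalar_1": 1.0, "scalar_2": 2.0, "scalar_3": 3.0}
)
print(subdirs)
```

These per-scalar sub-directories are what the wandb folder is missing in the observation above.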

This points to a bug in the tensorboard integration.

Colab tensorboard vs. Tensorboard in wandb:

[image: Colab tensorboard] [image: Tensorboard in wandb]

Here we do not see anything in the runs section of the wandb tensorboard.

I suspect an issue in https://github.com/wandb/client/blob/30f77c8a320fe42a519f76eb012e61101afad9ba/wandb/integration/tensorboard/monkeypatch.py#L90

Issue Analytics

  • State:closed
  • Created 3 years ago
  • Comments:5 (4 by maintainers)

Top GitHub Comments

2 reactions
issue-label-bot[bot] commented, Dec 1, 2020

Issue-Label Bot is automatically applying the label bug to this issue, with a confidence of 0.88. Please mark this comment with 👍 or 👎 to give our bot feedback!

Links: app homepage, dashboard and code for this bot.

1 reaction
raubitsj commented, Dec 16, 2020

This fix will go out in the 0.10.13 release.

