
Log config to wandb

See original GitHub issue

Although there is a WandbLoggerHook, the config of the run is not logged to wandb. It would be nice if this happened by default whenever WandbLoggerHook is invoked.

Currently, this is what I’m adding to my training script in order to log the config:

import wandb

# Pull the init kwargs from the WandbLoggerHook entry in the config.
wandb_kwargs = next(h['init_kwargs'] for h in cfg.log_config.hooks
                    if h['type'] == 'WandbLoggerHook')
wandb_kwargs['config'] = cfg._cfg_dict.to_dict()  # attach the full training config
wandb.init(**wandb_kwargs)
logger.info('Logged cfg to wandb.')
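
An alternative sketch that avoids the manual wandb.init() call: since WandbLoggerHook forwards its init_kwargs to wandb.init(), the config can instead be injected into the hook entry before the runner is built. The helper name below is hypothetical.

def inject_wandb_config(cfg):
    # Hypothetical helper: attach the parsed cfg to the WandbLoggerHook entry
    # so the wandb.init() call inside the hook records it as the run config.
    for hook in cfg.log_config.hooks:
        if hook['type'] == 'WandbLoggerHook':
            hook.setdefault('init_kwargs', {})['config'] = cfg._cfg_dict.to_dict()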

Is there any way to get access to the cfg object from WandbLoggerHook? I’ll be happy to put in a PR for this if needed.

Thank you!

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
ayulockin commented, Mar 8, 2022

I am trying to achieve the same thing. While investigating, I realized that the WandbLoggerHook methods have access to the runner object; however, the runner has no cfg attribute.

If the constructor of BaseRunner accepted cfg, then WandbLoggerHook (and other hooks) could use it to log the config, something like this:

def before_run(self, runner):
    super(WandbLoggerHook, self).before_run(runner)
    # update() merges the values into the run's config; assigning to
    # self.wandb.config directly would only rebind the module attribute.
    self.wandb.config.update(runner.cfg._cfg_dict.to_dict())

Something as simple as this should do the trick:

class BaseRunner(metaclass=ABCMeta):
    ...
    def __init__(self, ..., cfg=None):
        self.cfg = cfg

In train.py we can then do runner.cfg = cfg.

Thoughts? If this is an acceptable solution, I can make a PR.
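
For completeness, the train.py side of that proposal might look like the sketch below; the default_args shown follow the usual mmcv pattern and are abbreviated placeholders.

from mmcv.runner import build_runner

runner = build_runner(
    cfg.runner,
    default_args=dict(model=model, work_dir=cfg.work_dir, logger=logger))
runner.cfg = cfg  # expose the parsed config to hooks such as WandbLoggerHook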

0 reactions
ayulockin commented, Mar 24, 2022

Hey @zhouzaida, would love to know what you think about it. I can make a PR if you would like.

Read more comments on GitHub >

Top Results From Across the Web

Configure Experiments with wandb.config - Documentation
Set the wandb.config object in your script to save your training configuration: hyperparameters, input settings like dataset name or model type, ...
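
For context, the pattern that page describes looks roughly like this; the project name and values below are placeholder assumptions.

import wandb

# Hyperparameters passed as `config` at init time are recorded with the run.
wandb.init(project="my-project", config={"lr": 1e-3, "epochs": 12})
wandb.config.update({"batch_size": 16})  # values can also be added later
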
wandb — PyTorch Lightning 1.8.5.post0 documentation
class pytorch_lightning.loggers.wandb. ... wandb.log({"train/loss": loss}) ... add one parameter wandb_logger.experiment.config["key"] = value # add ...
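
The Lightning route, sketched from that snippet (project name and key are placeholders):

from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my-project")
wandb_logger.experiment.config["key"] = "value"  # add one parameter to the run config
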
How to use the wandb.config.update function in wandb - Snyk
logger if self.args.log: wandb.init() wandb.config.update(self.hyper_params) # wandb.watch([self.actor, self.critic], log="parameters") # pre-training if ...
wandb - PyPI
Run wandb login from your terminal to signup or authenticate your machine (we store your api key in ~/.netrc). You can also set...
Using Weights & Biases with Tune - the Ray documentation
Wandb-Mixin · config – Configuration dict to be logged to weights and biases. Can contain arguments for wandb. · rank_zero_only – If...