
More modular fit() and support for progress and logging callbacks


Hello, is there a plan to make .fit() more modular?

For context, I am integrating the library into an async worker, and I want to use Python/TensorBoard/wandb logging to record metrics, losses, etc. every n steps or epochs. At the moment the fit function is not modular enough for me to inherit from and override the right points. There is callback-function support, but it only fires if an evaluator is provided. Libraries like fastai and PyTorch Lightning, for example, provide callbacks for before/after each batch and epoch (see the sketch after the references below).

[1] https://docs.fast.ai/callback.progress.html
[2] https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html
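
A minimal sketch of what such a callback API could look like. Every name here (the hook methods, the state dict keys, and a callbacks= parameter on fit()) is an assumption for illustration, not part of the library today:

class Callback:
    # Base class: subclass and override only the hooks you need.
    def on_train_begin(self, state): pass
    def on_epoch_begin(self, state): pass
    def on_step_end(self, state): pass
    def on_epoch_end(self, state): pass
    def on_train_end(self, state): pass

class WandbLogger(Callback):
    def __init__(self, run, log_every_n_steps=50):
        self.run = run
        self.every = log_every_n_steps

    def on_step_end(self, state):
        # Assumes fit() populates state with the global step and last loss.
        if state["global_step"] % self.every == 0:
            self.run.log({"train_loss": state["loss"]}, step=state["global_step"])

# Hypothetical usage, if fit() accepted a list of callbacks:
# model.fit(..., callbacks=[WandbLogger(wandb.run)])

fit() would then invoke each hook at the matching point in its training loop, so logging, progress bars, and early stopping become plug-ins rather than forks of fit().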


If that's acceptable, I'd be happy to work on a PR for this 😄

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 3
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

3 reactions
Exr0n commented, Dec 29, 2021

chiragjn’s loss function looks great, but I was put off by how complicated it looked. For anyone else after a quick hack, this is the basic idea:

import torch
import wandb

class LoggingLoss:
    # Wraps a loss function and logs every computed loss to wandb.
    def __init__(self, loss_fn, wandb):
        self.loss_fn = loss_fn
        self.wandb = wandb

    def __call__(self, logits, labels):
        loss = self.loss_fn(logits, labels)
        # .item() extracts a plain float so wandb doesn't hold onto the graph
        self.wandb.log({'train_loss': loss.item()})
        return loss

# ...

wandb.init()
wandb.watch(model.model)
model.fit(
    # ...
    loss_fct=LoggingLoss(torch.nn.BCEWithLogitsLoss(), wandb),
    # ...
)
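
Since the original question asked for logging only every n steps, the same wrapper can keep its own step counter. A minimal variation on the class above (the EveryNLoggingLoss name and log_every parameter are made up for illustration), assuming the loss function is called once per batch:

class EveryNLoggingLoss(LoggingLoss):
    def __init__(self, loss_fn, wandb, log_every=50):
        super().__init__(loss_fn, wandb)
        self.log_every = log_every
        self.step = 0

    def __call__(self, logits, labels):
        loss = self.loss_fn(logits, labels)
        self.step += 1
        # Log only every `log_every` batches to cut down wandb traffic.
        if self.step % self.log_every == 0:
            self.wandb.log({'train_loss': loss.item()}, step=self.step)
        return loss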

Looking forward to hooks and proper integration! 😃

1 reaction
chiragjn commented, Nov 2, 2021

@skewwhiff Please feel free to take this up; sorry for the lack of updates. I did some prototyping back then, but I was not happy with the architecture myself. I think an API like fastai's or pytorch-lightning's would be pretty good. I'll be happy to help in any way I can.

Read more comments on GitHub >

Top Results From Across the Web

  • Logging — PyTorch Lightning 1.8.5.post0 documentation
    Supported loggers include CometLogger: track your parameters, metrics, source code and more using Comet.
  • Writing your own callbacks | TensorFlow Core
    TensorBoard to visualize training progress and results with ... Model.fit() ... define a simple custom callback that logs ...
  • Trainer - Hugging Face
    Another way to customize the training loop behavior for the PyTorch Trainer is to use callbacks that can inspect the training loop state ...
  • Show Loss Every N Batches · Issue #2850 · keras-team/keras
    Is there a way to get more granular control on this progress bar ... looking at the source code in the Callbacks ...
  • Keras ValueError: I/O operation on closed file - Stack Overflow
    In the Keras documentation for model.fit you can find this: "verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for ..."
