More modular fit() and support for progress and logging callbacks
Hello,
Is there a plan to make .fit more modular?
For context, I am integrating the library into an async worker and I want to use Python/TensorBoard/wandb logging to record metrics, losses, etc. every n steps or epochs. The fit function is currently not modular enough for me to inherit from and override the right points. There is callback support, but it only fires when an evaluator is provided. Libraries like fastai [1] and PyTorch Lightning [2], for example, provide callbacks for before/after each batch and epoch (a rough sketch of such an interface follows below).
[1] https://docs.fast.ai/callback.progress.html
[2] https://pytorch-lightning.readthedocs.io/en/latest/extensions/callbacks.html
If acceptable I can help work on a PR for this 😄
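To make the request concrete, here is a rough sketch of the kind of hook interface fastai and PyTorch Lightning expose, adapted to a .fit loop. All class and hook names below are hypothetical, not an existing API of this library:

```python
# Hypothetical hook names modeled on fastai / PyTorch Lightning; this is a
# sketch of the interface being requested, not this library's existing API.
class TrainerCallback:
    def on_train_begin(self, model, **kwargs): ...
    def on_epoch_begin(self, model, epoch, **kwargs): ...
    def on_batch_end(self, model, epoch, step, loss, **kwargs): ...
    def on_epoch_end(self, model, epoch, **kwargs): ...
    def on_train_end(self, model, **kwargs): ...


class WandbLossLogger(TrainerCallback):
    """Example callback: log the training loss every `log_every` steps."""

    def __init__(self, log_every=100):
        self.log_every = log_every

    def on_batch_end(self, model, epoch, step, loss, **kwargs):
        if step % self.log_every == 0:
            import wandb  # assumes wandb.init(...) has already been called
            wandb.log({"train/loss": loss, "epoch": epoch}, step=step)
```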
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 3
- Comments: 6 (1 by maintainers)
chiragjn’s loss function looks great, but I was turned away by how complicated it looked. For anyone else looking for a quick hack, I think this is the basic idea:
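One possible shape for such a hack, assuming the loss handed to .fit is an ordinary torch.nn.Module whose forward returns a scalar loss tensor (the wrapper below is only a sketch; class names and the logging call are illustrative, not taken from the thread):

```python
from torch import nn


class LoggingLossWrapper(nn.Module):
    """Wrap an existing loss module and log its value every `log_every` calls."""

    def __init__(self, loss_module: nn.Module, log_every: int = 100):
        super().__init__()
        self.loss_module = loss_module
        self.log_every = log_every
        self.step = 0

    def forward(self, *args, **kwargs):
        loss = self.loss_module(*args, **kwargs)
        self.step += 1
        if self.step % self.log_every == 0:
            # Swap print for wandb.log / SummaryWriter.add_scalar as needed.
            print(f"step {self.step}: loss = {loss.item():.4f}")
        return loss


# Usage (illustrative): wrap the loss before handing it to .fit; the exact
# call depends on the library's fit() signature, e.g.
# model.fit(..., loss=LoggingLossWrapper(original_loss), ...)
```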
Looking forward to hooks and proper integration! 😃
@skewwhiff Please feel free to take this up, sorry for the lack of updates. I did some prototyping back then but I was not happy with the architecture myself. I think an API like fastai's or pytorch-lightning's would be pretty good. I'll be happy to help in any way I can.
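For reference, a minimal sketch of how a fit loop could drive such callbacks, reusing the hypothetical hook names from the earlier sketch (this is not the library's current fit implementation):

```python
# Hypothetical sketch: a fit loop that invokes a list of callbacks at the
# before/after batch/epoch points. Assumes PyTorch-style model, dataloader,
# optimizer, and a loss_fn(model, batch) that returns a scalar loss tensor.
def fit(model, dataloader, loss_fn, optimizer, epochs, callbacks=()):
    for cb in callbacks:
        cb.on_train_begin(model)
    step = 0
    for epoch in range(epochs):
        for cb in callbacks:
            cb.on_epoch_begin(model, epoch)
        for batch in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            optimizer.step()
            step += 1
            for cb in callbacks:
                cb.on_batch_end(model, epoch, step, loss.item())
        for cb in callbacks:
            cb.on_epoch_end(model, epoch)
    for cb in callbacks:
        cb.on_train_end(model)
```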