Add `enable_device_summary` flag to disable device printout
🚀 Feature
Add an `enable_device_summary` boolean kwarg to `pl.Trainer()` to suppress `_log_device_info()`'s output.
Motivation
When calling `predict` inside a surrogate-model loop, the Trainer prints the device summary on every call, breaking apart intended tables and other output. Related to https://github.com/Lightning-AI/lightning/issues/13358 on cleaning up / reducing stdout verbosity.
```
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
=======================================================
n_gen  |  n_eval  |  n_nds  |     eps      |  indicator
=======================================================
    1  |     322  |      3  |      -       |      -
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
    2  |    1322  |      4  |  0.625000000 |  ideal
```
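Until such a flag exists, one workaround is to raise the level of the library's logger, since the device printout goes through the standard `logging` module. This is a sketch: the logger name `"pytorch_lightning"` is an assumption about how the library names its logger, and the handler setup below only simulates the effect with a string buffer.

```python
import io
import logging

# Sketch of the workaround: route the (assumed) "pytorch_lightning" logger
# to a buffer, then raise its level so INFO-level device messages vanish.
log = logging.getLogger("pytorch_lightning")
buf = io.StringIO()
log.addHandler(logging.StreamHandler(buf))
log.setLevel(logging.INFO)

log.info("GPU available: False, used: False")         # captured

log.setLevel(logging.WARNING)                          # the workaround
log.info("TPU available: False, using: 0 TPU cores")   # now suppressed

print(buf.getvalue())  # only the first message made it through
```

The drawback, compared to a dedicated flag, is that this silences *all* INFO-level messages from the library, not just the device summary.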
Pitch
Add an `enable_device_summary` kwarg to `Trainer` that defaults to `True`.
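The intended behaviour can be sketched with a toy class; `enable_device_summary` is the *proposed* argument, not an existing `Trainer` parameter, and the hypothetical `_log_device_info` body below is reduced to a single line.

```python
# Hypothetical sketch of the proposed flag. Neither enable_device_summary
# nor this Trainer is real Lightning API; it only illustrates the gating.
class Trainer:
    def __init__(self, enable_device_summary: bool = True):
        self.enable_device_summary = enable_device_summary
        self.logged = []  # record emitted lines for illustration
        if enable_device_summary:
            self._log_device_info()

    def _log_device_info(self):
        msg = "GPU available: False, used: False"
        self.logged.append(msg)
        print(msg)

Trainer(enable_device_summary=False)  # prints nothing
```

Defaulting to `True` keeps current behaviour for existing users, mirroring how `enable_model_summary` and `enable_progress_bar` already work.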
Alternatives
The suggested solution is the simplest one; any alternative would add more complexity.
Additional context
None
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @borda @awaelchli @ananthsub @rohitgr7 @justusschock @kaushikb11
Issue Analytics
- State:
- Created: a year ago
- Comments: 11 (10 by maintainers)
I changed my mind. I think the callback proposal is the simplest and most extensible option. This would also resolve https://github.com/Lightning-AI/lightning/issues/11014, and we could have flags in the callback to disable specific prints.

I'm curious: is there a desire to have verbosity controlled on a more global level, not just the summary here?