Multiple ProgressBars Spamming
🐛 Bug description
When using the ProgressBar handler for both my train and my evaluation engines:

```python
train_pbar = ProgressBar()
train_pbar.attach(train_engine)

eval_pbar = ProgressBar(desc="Evaluation")
eval_pbar.attach(eval_engine)
```
Or even when using `common.setup_common_training_handlers` from `contrib.engines`:

```python
common.setup_common_training_handlers(
    train_engine,
    to_save=to_save,
    save_every_iters=saving_rate,
    output_path=output_path,
    lr_scheduler=lr_scheduler,
    with_pbars=True,
    with_pbar_on_iters=True,
    log_every_iters=1,
    device=device,
)
```
The progress bars print a new line for every increment (every iteration). Both engines have some other handlers attached (early stopping, some custom logging, etc.), but none of them print anything that would cause this. The output looks like this (for the second example, with `setup_common_training_handlers`):
```
Epoch [1/10]: [504/6367] 8%|▊ [12:59<2:31:09]
Epoch [1/10]: [504/6367] 8%|▊ [13:00<2:31:09]
Epoch [1/10]: [505/6367] 8%|▊ [13:00<2:30:59]
Epoch [1/10]: [505/6367] 8%|▊ [13:02<2:30:59]
Epoch [1/10]: [506/6367] 8%|▊ [13:02<2:30:54]
Epoch [1/10]: [506/6367] 8%|▊ [13:03<2:30:54]
Epoch [1/10]: [507/6367] 8%|▊ [13:03<2:30:54]
Epoch [1/10]: [507/6367] 8%|▊ [13:05<2:30:54]
Epoch [1/10]: [508/6367] 8%|▊ [13:05<2:31:07]
```
Is there any common cause for this? My code is based on the example here. The trainer is created with `create_supervised_trainer` and the evaluator with `create_supervised_evaluator`.
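For context, this is roughly the setup described above, reduced to a self-contained sketch; the model, data, and hyperparameters are placeholders for illustration, not taken from the issue:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.contrib.handlers import ProgressBar

# Placeholder model and data, just to make the script runnable.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=8)

trainer = create_supervised_trainer(model, optimizer, criterion)
evaluator = create_supervised_evaluator(model)

# Same handler setup as in the issue: one progress bar per engine.
ProgressBar().attach(trainer)
ProgressBar(desc="Evaluation").attach(evaluator)

# Run the evaluator after every training epoch.
@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    evaluator.run(loader)

trainer.run(loader, max_epochs=2)
```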
Environment
- PyTorch Version: 1.4
- Ignite Version: 0.3.0
- OS: Windows 10.0.18363 (build 18363)
- Ignite installed using conda
- Python version: 3.7.6
Top GitHub Comments
@vfdev-5 definitely, thank you.
To avoid tqdm printing each iteration on a new line, use this class instead (kudos to this post):
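The class from the original comment is not reproduced above. Below is a minimal sketch of the commonly cited workaround for this symptom, assuming the fix targets tqdm's console-width detection on Windows: when tqdm cannot determine the terminal width, each refresh can wrap onto a new line, and pinning `ncols` to a fixed value avoids the wrapping. The name `FixedWidthTqdm` and the width of 100 are illustrative assumptions, not from the original comment:

```python
from tqdm import tqdm

class FixedWidthTqdm(tqdm):
    """tqdm subclass that always uses a fixed column width (assumed workaround)."""

    def __init__(self, *args, **kwargs):
        kwargs.setdefault("ncols", 100)  # assumed fixed width; any value your console fits works
        super().__init__(*args, **kwargs)
```

With ignite's `ProgressBar`, the same effect should be achievable without a subclass, since extra keyword arguments are forwarded to the underlying tqdm instance, e.g. `ProgressBar(ncols=100)`.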