💡 Dedicated WandbLogger for MMDetection (Weights and Biases Integration)
Description
MMCV has a WandbLoggerHook (source) that can log metrics with Weights and Biases (W&B) and log saved models, log files, etc. as W&B Artifacts. Given that it is part of MMCV and other MM-based repositories use it, I propose a dedicated logger for MMDetection that can:
- Log both training and evaluation metrics.
- Log model checkpoints as W&B Artifacts.
- Log validation dataset with ground truth bounding boxes.
- Log model bounding box predictions as interactive W&B Tables.
I have implemented the mentioned features in my fork of the repo. Still, before making a PR, I would love to know what the maintainers of this repo and community, in general, think about the usefulness of a dedicated W&B Logger.
Overview of Implementation Detail
I have inherited the WandbLoggerHook from MMCV and implemented WandbLogger, which you can use as shown below:
```python
log_config = dict(
    interval=10,
    hooks=[
        dict(type='WandbLogger',
             init_kwargs={'entity': WANDB_ENTITY, 'project': WANDB_PROJECT_NAME},
             interval=10,
             log_checkpoint=True,
             log_checkpoint_metadata=True,
             log_evaluation=True)
    ])
```
Features
Metrics
- The WandbLogger will automatically log training and validation metrics.
- It will log system (CPU/GPU) metrics.
- Passing a configuration dictionary to init_kwargs will log it automatically.
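As a rough illustration of how a configuration dictionary passed through init_kwargs ends up attached to the run, here is a minimal sketch. The helper name `build_init_kwargs` and the placeholder entity/project/hyperparameter values are my own assumptions, not the actual MMCV implementation:

```python
# Illustrative sketch (NOT the actual MMCV code): how a config dict supplied
# via `init_kwargs` could be merged into the arguments that are eventually
# forwarded to wandb.init, where anything under "config" is logged to the
# run's config panel automatically.

def build_init_kwargs(entity, project, config=None):
    """Assemble keyword arguments for wandb.init (hypothetical helper)."""
    kwargs = {"entity": entity, "project": project}
    if config is not None:
        kwargs["config"] = dict(config)  # logged as the run's configuration
    return kwargs

init_kwargs = build_init_kwargs(
    entity="my-team",                   # placeholder entity
    project="mmdet-demo",               # placeholder project
    config={"lr": 0.02, "epochs": 12},  # example training hyperparameters
)
```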
Checkpointing
If log_checkpoint is True, the checkpoint saved at every checkpoint interval will be logged as a W&B Artifact. To know more about model versioning with W&B Artifacts, please refer to the docs here. This feature makes use of the CheckpointHook.
Checkpointing with Metadata
If log_checkpoint_metadata is True, every checkpoint artifact will have metadata associated with it. The metadata contains the evaluation metrics computed on the validation data using that checkpoint and the epoch to which that checkpoint belongs. If True, it also marks the checkpoint version with the best evaluation metric with a 'best' alias, so you can pick the best checkpoint in the W&B Artifacts UI. This feature depends on EvalHook along with CheckpointHook.
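The 'best'-alias logic can be sketched roughly as follows. The function name, the metric key ("bbox_mAP"), and the alias scheme are illustrative assumptions for this sketch, not the fork's actual code:

```python
# Illustrative sketch: decide which aliases a new checkpoint artifact should
# receive, given the per-checkpoint metadata described above. The metric key
# "bbox_mAP" and the helper itself are assumptions.

def aliases_for(ckpt_metadata, best_so_far, metric="bbox_mAP"):
    """Return W&B artifact aliases for a newly logged checkpoint."""
    aliases = ["latest", f"epoch_{ckpt_metadata['epoch']}"]
    # Mark this version as "best" when it improves on the best metric so far.
    if best_so_far is None or ckpt_metadata[metric] > best_so_far:
        aliases.append("best")
    return aliases

# Example: the epoch-3 checkpoint improves on a previous best mAP of 0.31,
# so it gets the "best" alias; a worse checkpoint would not.
aliases = aliases_for({"epoch": 3, "bbox_mAP": 0.34}, best_so_far=0.31)
```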
Log Model Prediction 🎉
If log_evaluation is True, at every evaluation interval the WandbLogger logs the model predictions as interactive W&B Tables. To know more about W&B Tables, please refer to the docs here. The WandbLogger logs the predicted bounding boxes along with the ground-truth bounding boxes at every evaluation interval. This feature depends on the EvalHook. Note that the data is logged once; subsequent evaluation tables reference the logged data to save memory.
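For context, W&B's bounding-box overlays expect each box as a dict with a "position" and a "class_id"; a list of these is attached to an image (e.g. via wandb.Image(img, boxes={"predictions": {"box_data": [...]}})) and can then be rendered inside a Table. The helper below just assembles that payload; the coordinates, class, and score are made-up example values:

```python
# Sketch of W&B's bounding-box overlay payload. Each entry carries a
# "position" (here in normalized 0-1 coordinates) and a "class_id"; an
# optional "scores" dict is shown alongside the box in the UI.
# All concrete values below are illustrative.

def to_box_data(x_min, y_min, x_max, y_max, class_id, score=None):
    """Build one box entry in the W&B box_data format."""
    box = {
        "position": {"minX": x_min, "minY": y_min,
                     "maxX": x_max, "maxY": y_max},
        "class_id": class_id,
    }
    if score is not None:
        box["scores"] = {"confidence": score}
    return box

# One predicted box covering roughly the center-left of the image.
box_data = [to_box_data(0.10, 0.20, 0.55, 0.80, class_id=1, score=0.93)]
```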
Motivation
- The W&B integration with YOLOv5 was one of the primary motivations.
- I tend to use MMDetection for Kaggle competitions, and being a W&B user, I was motivated to build this feature.
More Examples
- I have tested my implementation on a tiny_coco dataset as well. Here's the W&B Workspace that you can check out to see the implemented features in action.
I would love to make a PR with this feature if you all think it is valuable for the community.
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 2
- Comments: 5 (2 by maintainers)
Thanks for your quick response. I will be making the PR by the end of tomorrow. Just polishing up a few things. Looking for feedback then. 😃
Hi @ayulockin, yes! This feature is valuable for the codebase as well as the community. Feel free to create a PR. We will organize a review ASAP.