[HowTo] AR in validation-set evaluation for TensorBoard
Hi,
I really like mmdetection; it makes life much easier to train a variety of detectors on COCO datasets.
What I have been searching for a while is a way to get the AR metrics logged by the validation EvalHook, so that they show up in TensorBoard or MLflow when using COCO datasets. The AP is already logged there perfectly. What do I need to configure, and where, to get the AR metrics there as well?
It looks like there should be a way to activate this, since the metrics returned by the COCO evaluation can be selected in this file: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/datasets/coco.py#L574
If I understood the codebase well enough, it is also possible to pass arguments for validation-set evaluation through the train.py API using the evaluation dict:
https://github.com/open-mmlab/mmdetection/blob/master/mmdet/apis/train.py#L231
But I don't yet understand what I need to set in the training configuration to finally get those metrics into TensorBoard or the other hooks. Can someone tell me if this is possible, and how to do it?
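For reference, my current guess based on the linked evaluate signature is that AR entries can be requested through the metric_items argument, which the EvalHook forwards from the evaluation dict in the config. A sketch (the metric item names are copied from coco.py and should be double-checked against your mmdetection version):

```python
# Hypothetical mmdetection training-config fragment: request AR entries
# from CocoDataset.evaluate via metric_items. The EvalHook forwards these
# keyword arguments to evaluate() at each validation round, and whatever
# the evaluation returns is what the logger hooks can pick up.
evaluation = dict(
    interval=1,          # run validation every epoch
    metric='bbox',
    metric_items=['mAP', 'mAP_50', 'mAP_75',
                  'AR@100', 'AR@300', 'AR@1000'],
)
```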
Thank you very much!
Issue Analytics
- State:
- Created: a year ago
- Reactions: 1
- Comments: 5
All training information is stored in runner.log_buffer; more complex operations need to be implemented by yourself: https://github.com/open-mmlab/mmdetection/blob/ca11860f4f3c3ca2ce8340e2686eeaec05b29111/mmdet/core/evaluation/eval_hooks.py#L134

I have the same question. The mAP info does not show up on the TensorBoard.