Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[HowTo] AR in validation-set evaluation for TensorBoard

See original GitHub issue

Hi,

I really like mmdetection; it makes life much easier to train a variety of detectors on COCO datasets.

What I have been searching for a while is a way to get the AR metrics logged by the validation EvalHooks, e.g. to TensorBoard or MLflow, when using COCO datasets. The AP is already logged there perfectly. What do I need to configure, and where, to get the AR metrics there as well?

It looks like there should be a way to activate this, as one can select which metrics the COCO evaluation returns in this file: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/datasets/coco.py#L574

If I understood the codebase correctly, it is also possible to pass arguments for validation-set evaluation through the train.py API using the evaluation dict: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/apis/train.py#L231

But I don't yet understand what I need to set in the training configuration to finally get those metrics into my TensorBoard or other hooks. Can someone tell me whether this is possible, and how to do it?
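For illustration, here is a sketch of the kind of config I have in mind (untested; it assumes the `evaluation` dict is forwarded as kwargs to `CocoDataset.evaluate`, per the two links above, and the AR item names are taken from the metric names in the linked coco.py):

```python
# Sketch of a training-config snippet (assumption: the `evaluation` dict is
# passed through to CocoDataset.evaluate as keyword arguments).
# `metric_items` selects which COCO stats end up in the evaluation results,
# and thus in the logger hooks (TensorBoard / MLflow).
evaluation = dict(
    interval=1,
    metric='bbox',
    metric_items=['mAP', 'mAP_50', 'mAP_75',
                  'AR@100', 'AR@300', 'AR@1000'],
)
```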

Thank you very much!

Issue Analytics

  • State: open
  • Created: a year ago
  • Reactions: 1
  • Comments: 5

Top GitHub Comments

1 reaction
Czm369 commented, Jun 21, 2022

All training information is stored in runner.log_buffer; more complex operations need to be implemented by yourself: https://github.com/open-mmlab/mmdetection/blob/ca11860f4f3c3ca2ce8340e2686eeaec05b29111/mmdet/core/evaluation/eval_hooks.py#L134
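To illustrate the mechanism (a minimal standalone sketch, not the real mmcv class): logger hooks read averaged scalars from `runner.log_buffer.output`, so any value you `update()` into the buffer is picked up by TensorBoard/MLflow logger hooks on the next flush.

```python
# Standalone illustration of a LogBuffer-style accumulator (hypothetical
# re-implementation for clarity; the real class lives in mmcv).
from collections import defaultdict

class MiniLogBuffer:
    def __init__(self):
        self.val_history = defaultdict(list)  # raw values per key
        self.n_history = defaultdict(list)    # sample counts per key
        self.output = {}                      # averaged values loggers read

    def update(self, vars, count=1):
        # Record each scalar with its sample count.
        for key, value in vars.items():
            self.val_history[key].append(value)
            self.n_history[key].append(count)

    def average(self, n=0):
        # Average the last n values (0 = all) into `output`.
        for key, values in self.val_history.items():
            counts = self.n_history[key]
            vals = values[-n:] if n else values
            cnts = counts[-n:] if n else counts
            self.output[key] = sum(v * c for v, c in zip(vals, cnts)) / sum(cnts)

buf = MiniLogBuffer()
buf.update({'AR@100': 0.52})
buf.update({'AR@100': 0.58})
buf.average()
```

A custom hook that computes extra metrics can follow the same pattern: compute the value, push it into `runner.log_buffer` under a new key, and the existing logger hooks handle the rest.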

0 reactions
kuangxiaoye commented, Jul 31, 2022

I have the same question. The mAP info does not show up on TensorBoard.


