COCO OKS Metrics Usage

See original GitHub issue

Hi, I am unable to understand how OKS is calculated in the experiments on the COCO dataset. In the train function in lib/core/function.py you seem to call accuracy from lib/core/evaluate.py, but that accuracy is PCKh, right? So how do you calculate OKS?

Could you please explain the steps for calculating OKS, given that I use your dataloader? Thanks a lot in advance!

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
leoxiaobin commented on Sep 23, 2018

We use a simple PCKh metric to evaluate the training procedure, and we only use OKS for the validation procedure. You can look at the code at https://github.com/Microsoft/human-pose-estimation.pytorch/blob/d69ed56bdbc1f16a288921e302c87fcb33554e37/lib/dataset/coco.py#L273.
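
If it helps anyone reading later: if I read that file correctly, the validation path ultimately delegates to pycocotools' COCOeval with iouType='keypoints', which is where OKS is actually computed. Below is a minimal sketch of driving that evaluator directly; the two JSON paths are placeholders for illustration, not files shipped with the repo.

    # Minimal sketch: COCO keypoint evaluation via pycocotools.
    # Both file paths below are placeholders.
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO('annotations/person_keypoints_val2017.json')          # ground truth
    coco_dt = coco_gt.loadRes('results/keypoints_val2017_results.json')  # predictions

    coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
    coco_eval.evaluate()    # computes OKS between each detection/ground-truth pair
    coco_eval.accumulate()
    coco_eval.summarize()   # prints AP/AR averaged over OKS thresholds 0.50:0.95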

0 reactions
YuQi9797 commented on Jul 12, 2021

Is there a handwritten version of OKS here, instead of calling the API?
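
For reference, a handwritten version is short. Per the COCO keypoint task, OKS for one person is the mean of exp(-d_i^2 / (2 * s^2 * k_i^2)) over the labeled keypoints, where d_i is the distance between predicted and ground-truth keypoint i, s^2 is the object's segment area, k_i = 2 * sigma_i with the per-keypoint sigmas published for COCO, and only keypoints with visibility v_i > 0 count. Here is a NumPy sketch mirroring pycocotools' computeOks; the function name compute_oks and its argument layout are just for illustration.

    import numpy as np

    # Per-keypoint constants (sigmas) from the COCO keypoint task, in order:
    # nose, eyes, ears, shoulders, elbows, wrists, hips, knees, ankles.
    SIGMAS = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72,
                       .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0

    def compute_oks(pred, gt, vis, area):
        # pred, gt: (17, 2) arrays of (x, y) keypoints.
        # vis: (17,) visibility flags from the annotation (0 = unlabeled).
        # area: ground-truth segment area, i.e. s^2 in the formula.
        d2 = np.sum((pred - gt) ** 2, axis=1)            # squared distances d_i^2
        kappa2 = (2 * SIGMAS) ** 2                       # k_i^2 = (2 * sigma_i)^2
        e = d2 / (2 * (area + np.spacing(1)) * kappa2)   # exponent, as in COCOeval
        labeled = vis > 0
        return np.exp(-e[labeled]).mean() if labeled.any() else 0.0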

Read more comments on GitHub

Top Results From Across the Web

COCO OKS Metrics Usage · Issue #27 - GitHub
Hi, I am unable to understand how OKS is calculated in experiments using COCO dataset. In the train function in lib/core/function.py you ...

COCO - Common Objects in Context
COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: Object segmentation. Recognition in context.

Evaluation metrics on the COCO dataset. - ResearchGate
In the matching between predictions to groundtruth, a matching criterion called object keypoint similarity (OKS) is defined to compute the overlapping ratio ...

Keypoint estimation - SVIRO
The evaluation is performed according to the COCO evaluation metric. We use the average precision (AP) which is averaged over different object keypoint ...

Source code for detectron2.evaluation.coco_evaluation
The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means the metric cannot be computed...
