
Potential leaky information used in eval on the LineMOD dataset

See original GitHub issue

In eval_linemod.py, the code still uses rmin, rmax, cmin, cmax = get_bbox(meta['obj_bb']) to get rmin, rmax, cmin, cmax, which is the important step for the image crop. In my opinion, gt.yaml is the ground truth for the objects, and obj_bb is its 2D bounding box, so using it at evaluation time may leak ground-truth information. I don’t know whether the code is right. Maybe I was wrong. Thank you =。=
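To illustrate the fix the thread converges on, here is a minimal sketch of deriving the crop box from a *predicted* segmentation mask (e.g. the SegNet outputs shipped as segnet_results) instead of from the ground-truth obj_bb in gt.yaml. The function name bbox_from_mask is hypothetical, not from the repository:

```python
import numpy as np

def bbox_from_mask(mask):
    """Tight (rmin, rmax, cmin, cmax) box from a binary mask.

    Deriving the crop from a predicted mask avoids feeding
    ground-truth bbox information from gt.yaml into the eval crop.
    """
    rows = np.any(mask, axis=1)  # rows containing any foreground pixel
    cols = np.any(mask, axis=0)  # columns containing any foreground pixel
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return rmin, rmax + 1, cmin, cmax + 1  # half-open row/col ranges

# toy example: a 2x2 blob inside a 6x6 mask
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:3] = True
print(bbox_from_mask(mask))  # (2, 4, 1, 3)
```

The actual DenseFusion get_bbox additionally pads and clamps the box to fixed window sizes; this sketch only shows where the box should come from.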

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

4 reactions
hygxy commented, Jul 31, 2019

@j96w So you mean the bbox in gt.yaml is actually generated by segnet_results? If that’s the case, why do you use the same bbox for training? I am especially confused about line 122 in dataset.py: the RGB image is cropped into image_masked using the bbox from gt.yaml, both for training and evaluation. IMO they should be treated differently, am I right?

0 reactions
flowtcw commented, Sep 20, 2019

Hi, after adding two more rounds of iterative refinement, the current testing result on LineMOD after fixing the bbox bug is:

  • Object 1 success rate: 0.9285033365109628
  • Object 2 success rate: 0.944713870029098
  • Object 4 success rate: 0.9725490196078431
  • Object 5 success rate: 0.9409448818897638
  • Object 6 success rate: 0.9650698602794411
  • Object 8 success rate: 0.8741328047571854
  • Object 9 success rate: 0.9305164319248826
  • Object 10 success rate: 0.9971777986829727
  • Object 11 success rate: 0.9980694980694981
  • Object 12 success rate: 0.9248334919124643
  • Object 13 success rate: 0.9816138917262512
  • Object 14 success rate: 0.9692898272552783
  • Object 15 success rate: 0.9606147934678194
  • ALL success rate: 0.9529245001492092

This result is even higher than the 94.3% (all-objects success rate) reported in the paper, so nothing in the paper needs to be modified. Thank you all.
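For context on what these numbers measure: LineMOD evaluations of this kind typically count a pose as a success when the ADD metric (average distance between model points under the predicted and ground-truth poses) falls below 10% of the object's model diameter. A minimal sketch of that computation, with all function names hypothetical:

```python
import numpy as np

def add_metric(pred_R, pred_t, gt_R, gt_t, model_points):
    """Average distance (ADD) between model points transformed by the
    predicted pose and by the ground-truth pose."""
    pred = model_points @ pred_R.T + pred_t
    gt = model_points @ gt_R.T + gt_t
    return np.mean(np.linalg.norm(pred - gt, axis=1))

def success_rate(add_values, diameter, threshold=0.1):
    """Fraction of frames whose ADD is below threshold * diameter."""
    add_values = np.asarray(add_values)
    return float(np.mean(add_values < threshold * diameter))

# toy example: diameter 0.1 m, so the cutoff is 0.01 m;
# 3 of the 4 frames fall below it
print(success_rate([0.005, 0.02, 0.008, 0.009], diameter=0.1))  # 0.75
```

Symmetric objects (e.g. objects 10 and 11 in LineMOD) are usually scored with ADD-S, which takes the nearest-point distance instead; that variant is not shown here.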

I think that’s a surprise gift. 😃

Read more comments on GitHub >

Top Results From Across the Web

arXiv:2004.06468v3 [cs.CV] 3 Aug 2020
To evaluate our proposed method we leverage the commonly used LineMOD dataset [12], which consists of 15 sequences; only 13 of these provide...
Read more >
Detecting Object Surface Keypoints From a Single RGB Image ...
and their associated gradient information are used as the ... To evaluate the proposed system, two LINEMOD datasets.
Read more >
Self6D: Self-Supervised Monocular 6D Object Pose Estimation
Extensive evaluations demonstrate that our proposed self-supervision is able to significantly enhance the model's original performance, ...
Read more >
arXiv:1612.05424v1 [cs.CV] 16 Dec 2016
(a) Image examples from the Linemod dataset. ... images and a stochastic noise vector, our model can be used.
Read more >
PrimA6D: Rotational Primitive Reconstruction for Enhanced ...
In this section, we evaluate the proposed method using three different datasets. A. Dataset. In total, three datasets were used for the evaluation:...
Read more >
