Potential leaky information used in eval_linemod.py on the LineMOD dataset
In eval_linemod.py, the code still uses `rmin, rmax, cmin, cmax = get_bbox(meta['obj_bb'])` to obtain rmin, rmax, cmin, cmax. That is the important step for the image crop. In my opinion, gt.yaml is the ground truth for the objects, and obj_bb is the ground-truth 2D bounding box, so cropping with it at evaluation time may leak label information. I don't know whether the code is right; maybe I'm wrong.
Thank you =。=
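For reference, a minimal sketch of the distinction being raised: the line in question crops with the ground-truth box from gt.yaml, whereas a leakage-free evaluation would derive the crop window from a predicted segmentation mask. `bbox_from_mask` and the mask loading below are illustrative names, not DenseFusion's API.

```python
import numpy as np

# Line under discussion (eval_linemod.py): crop bounds come from gt.yaml
#   rmin, rmax, cmin, cmax = get_bbox(meta['obj_bb'])

def bbox_from_mask(mask: np.ndarray):
    """Hypothetical helper: derive (rmin, rmax, cmin, cmax) from a
    predicted foreground mask instead of the GT obj_bb."""
    rows = np.any(mask, axis=1)   # rows containing any foreground pixel
    cols = np.any(mask, axis=0)   # columns containing any foreground pixel
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return rmin, rmax + 1, cmin, cmax + 1

# Leakage-free alternative at evaluation time:
# pred_mask = ...  # load the SegNet prediction for this frame (predicted, not GT)
# rmin, rmax, cmin, cmax = bbox_from_mask(pred_mask)
# img_crop = img[rmin:rmax, cmin:cmax]
```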
Issue Analytics
- Created: 4 years ago
- Comments: 8 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@j96w So you mean the bbox in gt.yaml is actually generated by segnet_results? If that's the case, why do you use the same bbox for training? I am especially confused about line 122 in dataset.py: the RGB image is cropped into image_masked using the bbox from gt.yaml, both for training and evaluation. IMO they should be treated differently, am I right?
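If it helps, here is a hedged sketch of the train/eval split this comment argues for, reusing `bbox_from_mask` from the sketch above and `get_bbox` as named in the repo; the class and its fields are illustrative placeholders, not the actual dataset.py implementation.

```python
# Illustrative only: crop from the GT box during training (labels are
# legitimately available then), but from a predicted mask at evaluation.
class PoseDataset:
    def __init__(self, mode, metas, pred_masks):
        self.mode = mode              # 'train' or 'eval'
        self.metas = metas            # per-frame entries from gt.yaml
        self.pred_masks = pred_masks  # per-frame SegNet predictions

    def crop_window(self, index):
        if self.mode == 'train':
            # GT bbox is acceptable here: no test-time label is leaked
            return get_bbox(self.metas[index]['obj_bb'])
        # eval: the crop window must come from a prediction, never gt.yaml
        return bbox_from_mask(self.pred_masks[index])
```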
I think that’s a surprise gift. 😃