
Suggestions on the documentation: generating training data for SSD

See original GitHub issue

Thanks a lot for the nice work, but there’s some information missing related to generating training data for SSD. I hope the following will save time for others who are also interested in this work:

To generate training data for SSD, use “generate_syn_det_train.py” instead of “detection_utils/generate_sixd_train.py”. The command-line configuration looks as follows:

# --output_path : output directory for the generated training data
# --model       : model data directory
# --num         : how many training samples you want to generate
# --scale       : scale factor (in case of a model in meters, use 1000)
# --vocpath     : location of the background pictures (VOC JPEGImages)
# --model_type  : 'cad'; when the models have textures, use 'reconst'
python generate_syn_det_train.py \
    --output_path=/path/to/training_data_ssd \
    --model=/path/to/model_dir/t-less/t-less_v2/models_cad/ \
    --num=3000 \
    --scale=1 \
    --vocpath=/localhome/demo/autoencoder_6d_pose_estimation/backgrounimage/VOCdevkit/VOC2012/JPEGImages \
    --model_type=cad

After running, XML annotations and training images will be generated.
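
If you want to sanity-check the output, here is a minimal sketch of reading one of the generated annotation files, assuming they follow the Pascal VOC XML layout (an object name plus bndbox coordinates); the file path and directory layout below are placeholders, not something fixed by the script.

import xml.etree.ElementTree as ET

# Placeholder path to one annotation file produced by generate_syn_det_train.py
ann_path = "/path/to/training_data_ssd/annotations/000000.xml"

tree = ET.parse(ann_path)
root = tree.getroot()

# Print every annotated object with its bounding box
for obj in root.findall("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin = int(float(box.find("xmin").text))
    ymin = int(float(box.find("ymin").text))
    xmax = int(float(box.find("xmax").text))
    ymax = int(float(box.find("ymax").text))
    print(name, (xmin, ymin, xmax, ymax))

If the annotations are indeed VOC-style, any VOC-compatible SSD training pipeline should be able to consume them directly.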

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 2
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

3 reactions
MartinSmeyer commented, Mar 8, 2019

Thank you! I should add some documentation to it.

Both scripts should work: “generate_syn_det_train.py” creates synthetic object views from the 3D models, and “detection_utils/generate_sixd_train.py” uses a SIXD training set (e.g. from T-LESS) with single object views on a black background. As stated in the paper, for T-LESS we use the primesense training set. Just change the paths inside detection_utils/generate_sixd_train.py to point to the T-LESS and VOC datasets and you should be fine.
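
To make the second route more concrete, here is a rough sketch of compositing a single object view (captured on a black background) onto a random VOC image; this is not the repository’s code, and the paths, foreground threshold, and resizing choice are illustrative assumptions.

import glob
import random
import cv2
import numpy as np

# Illustrative paths, not the repo's defaults
obj_paths = glob.glob("/path/to/tless_primesense_train/rgb/*.png")   # object views on black background
bg_paths = glob.glob("/path/to/VOCdevkit/VOC2012/JPEGImages/*.jpg")

obj = cv2.imread(random.choice(obj_paths))
bg = cv2.imread(random.choice(bg_paths))
bg = cv2.resize(bg, (obj.shape[1], obj.shape[0]))

# Treat everything brighter than a small threshold as foreground
mask = obj.sum(axis=2) > 10

# Paste the object pixels onto the background
composite = bg.copy()
composite[mask] = obj[mask]

# Bounding box of the pasted object, usable for a VOC-style annotation
ys, xs = np.where(mask)
print("bbox:", xs.min(), ys.min(), xs.max(), ys.max())
cv2.imwrite("composite.jpg", composite)

In practice the object crop would also be randomly scaled, shifted, and augmented before pasting, but the mask-and-paste step is the core idea.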

0 reactions
MartinSmeyer commented, Mar 27, 2019

Additionally, you can use both datasets. Also, for training you might need early stopping to avoid overfitting to the synthetic data.
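
Regarding the early-stopping remark: the SSD training harness itself is not part of this thread, but as an illustration, a Keras-style setup could stop on a (preferably real-image) validation split roughly like this; the model and data here are dummy placeholders, not the actual SSD detector.

import numpy as np
import tensorflow as tf

# Dummy stand-ins for the real SSD model and data: the point is only the callback.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x_train, y_train = np.random.rand(256, 4), np.random.rand(256, 1)   # "synthetic" training data
x_val, y_val = np.random.rand(64, 4), np.random.rand(64, 1)         # validation data, ideally real images

# Stop as soon as validation loss stops improving, and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])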
