Slow RANSAC registration on KITTI 20cm resolution
Main Issue
First off, thank you for open sourcing this code! It’s readable and very helpful; I loved the paper and found the results to be very exciting!
I was able to kick off an evaluation of the KITTI results using the model “ResUNetBN2C, Normalization=False, KITTI, 20cm, 32-dim”. Since the required config.json was not available, I reverse engineered one myself, which may be at least part of the issue I am having.
I am running the test_kitti.py script (modified to use an updated Open3d; I can provide a pull request soon!) using the aforementioned model and the config posted below.
The script works, and starts evaluating on 6.8k samples. The preliminary numbers look good, but evaluation is very slow. The feature computation time is ~400ms / sample, but the mean RANSAC time sits at about 40 seconds / sample. This seems very large considering it’s saturating my 24-core Intel Xeon E5-2687W at ~99% for the entire duration.
Sample script output so far:
01/14 12:03:28 40 / 6857: Data time: 0.0050524711608886715
Feat time: 0.4292876958847046, Reg time: 36.63782391548157,
Loss: 0.041788674890995026, RTE: 0.03362400613997767, RRE: 0.001606260281446482, Success: 41.0 / 41 (100.0 %)
With this run time, evaluating all 6.8k test samples found by the script would take ~50 hours, which seems like a lot.
I noticed that the RANSACConvergenceCriteria are the key knob to tune. Setting the second argument, max_validation, to something like 25 (instead of 10k) makes registration run in ~1s on my machine, but seems to degrade the RTE to ~11–12cm instead of 5–6cm. The success rate appears to remain unchanged.
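For intuition about that tradeoff, here is a toy 1-D RANSAC sketch (not Open3D's implementation; all names are mine) showing how capping the number of fully validated hypotheses trades accuracy for speed, in the spirit of the two RANSACConvergenceCriteria arguments:

```python
import random

def ransac_translation(src, dst, max_iteration=1000, max_validation=25,
                       inlier_thresh=0.5):
    """Estimate a 1-D translation dst ~= src + t with RANSAC.

    max_iteration caps the number of hypotheses drawn;
    max_validation caps how many hypotheses are fully scored
    against all correspondences (the expensive step).
    """
    best_t, best_inliers = 0.0, -1
    validated = 0
    for _ in range(max_iteration):
        if validated >= max_validation:
            break
        i = random.randrange(len(src))
        t = dst[i] - src[i]  # one-point hypothesis
        # Validation: count correspondences consistent with t.
        inliers = sum(abs((s + t) - d) < inlier_thresh
                      for s, d in zip(src, dst))
        validated += 1
        if inliers > best_inliers:
            best_inliers, best_t = inliers, t
    return best_t, best_inliers

random.seed(0)
src = [float(i) for i in range(100)]
dst = [s + 3.0 for s in src]  # ground-truth translation t = 3
dst[::10] = [random.uniform(-50, 50) for _ in range(10)]  # 10% outliers
t, n = ransac_translation(src, dst)
print(t, n)  # recovered translation (≈ 3.0) and inlier count (≈ 90)
```

A lower max_validation means fewer hypotheses get scored against all correspondences, so the loop ends sooner but the best model found may be slightly worse, which matches the RTE degradation above.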
My questions:
- Is it normal for registration to take >30s on a KITTI frame pair at 20cm/voxel?
- Would a pull request updating the test_kitti.py script to run with the latest Open3d (and maybe some extra comments I added while learning about it) be useful?
Thank you, Andrei
Appendix
My system:
- Ubuntu 18.04
- GTX 1080, CUDA 10.1
- PyTorch v1.2
- ME v0.3.3
- Python 3.7 inside Anaconda
The config.json I “reverse engineered” to evaluate on KITTI:
{
"out_dir": "outputs/01_kitti_dummy_pretrained/",
"trainer": "HardestContrastiveLossTrainer",
"save_freq_epoch": 1,
"batch_size": 4,
"val_batch_size": 1,
"use_hard_negative": true,
"hard_negative_sample_ratio": 0.05,
"hard_negative_max_num": 3000,
"num_pos_per_batch": 1024,
"num_hn_samples_per_batch": 256,
"neg_thresh": 1.4,
"pos_thresh": 0.1,
"neg_weight": 1,
"use_random_scale": false,
"min_scale": 0.8,
"max_scale": 1.2,
"use_random_rotation": false,
"rotation_range": 360,
"train_phase": "train",
"val_phase": "val",
"test_phase": "test",
"stat_freq": 40,
"test_valid": true,
"val_max_iter": 400,
"val_epoch_freq": 1,
"positive_pair_search_voxel_size_multiplier": 1.5,
"hit_ratio_thresh": 0.1,
"triplet_num_pos": 256,
"triplet_num_hn": 512,
"triplet_num_rand": 1024,
"model": "ResUNetBN2C",
"model_n_out": 32,
"conv1_kernel_size": 7,
"normalize_feature": false,
"dist_type": "L2",
"best_val_metric": "feat_match_ratio",
"optimizer": "SGD",
"max_epoch": 100,
"lr": 0.1,
"momentum": 0.8,
"sgd_momentum": 0.9,
"sgd_dampening": 0.1,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"weight_decay": 0.0001,
"iter_size": 1,
"bn_momentum": 0.05,
"exp_gamma": 0.99,
"scheduler": "ExpLR",
"icp_cache_path": "/home/andreib/.cache/fcgf_icp_cache_path",
"use_gpu": true,
"weights": null,
"weights_dir": null,
"resume": null,
"resume_dir": null,
"train_num_thread": 2,
"val_num_thread": 1,
"test_num_thread": 2,
"fast_validation": false,
"nn_max_n": 500,
"dataset": "KITTIPairDataset",
"voxel_size": 0.20,
"threed_match_dir": "/home/chrischoy/datasets/FCGF/threedmatch",
"kitti_root": "<my kitti root>",
"kitti_max_time_diff": 3,
"kitti_date": "2011_09_26"
}
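For reference, a file like the one above can be loaded into an attribute-style object so fields read as config.voxel_size etc. This is only a minimal sketch (the load_config helper is my own name, not necessarily what test_kitti.py does):

```python
import json
from types import SimpleNamespace

def load_config(path):
    """Load a config.json into nested attribute-style objects.

    object_hook converts every JSON object into a SimpleNamespace,
    so values can be read as config.voxel_size, config.model, etc.
    """
    with open(path) as f:
        return json.load(f, object_hook=lambda d: SimpleNamespace(**d))

# Hypothetical usage:
# config = load_config("outputs/01_kitti_dummy_pretrained/config.json")
# print(config.voxel_size)  # 0.2
```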
Issue Analytics
- Created 4 years ago
- Comments: 6 (1 by maintainers)
Top GitHub Comments
I didn’t have a chance to put it together… 😕 Sorry about that! But I think Chris updated the evaluation scripts in this repo eventually.
Also, I think the Deep Global Registration repo has slightly newer code. As far as naming goes, if it’s just a minor namespace change, it seems it could be fixed with a search & replace, or even manually, right?
@AndreiBarsan Did you finally submit the PR? Running on the latest Open3d 0.11, all packages previously in o3d.registration have moved to o3d.pipelines.registration, thus breaking all existing code… Is this what you were referring to? How do we usually solve this sort of backward incompatibility in a dependent Python package?
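One common workaround for this kind of module relocation is a try/except import fallback, so the same code runs on both old and new library versions. Below is a generic sketch of that pattern (the import_first helper is my own name); the commented-out lines at the bottom show how it would apply to the Open3d move described above:

```python
import importlib

def import_first(*candidates):
    """Return the first importable module among dotted module paths.

    Useful when a library moves a subpackage between versions,
    e.g. open3d.registration -> open3d.pipelines.registration.
    """
    errors = []
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            errors.append("%s: %s" % (name, exc))
    raise ImportError("no candidate module could be imported: "
                      + "; ".join(errors))

# Applied to the Open3d breakage (untested sketch):
# o3d_reg = import_first("open3d.pipelines.registration",  # Open3d >= 0.10
#                        "open3d.registration")            # older releases
```

Code that then only refers to the aliased module (o3d_reg here) stays agnostic to which Open3d version is installed.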