Low mIoU on Cityscapes
See original GitHub issue

Hi, thanks for sharing your code. I trained the model on the Cityscapes dataset (by the way, you missed a `self` here) without any code edits and I can only get 68% mIoU. Do you have any pretrained models, or can you describe your training strategy? I trained on a single Tesla V100 GPU with lr = 0.007 and batch size 8.
Thanks in advance
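
For reference, the setup described above (single V100, lr = 0.007, batch size 8) roughly corresponds to a sketch like the one below. The torchvision model, the 90k-iteration budget, and the poly schedule are assumptions for illustration, not the repository's actual training script.

```python
# Minimal sketch of the hyper-parameters mentioned in the issue
# (single GPU, lr = 0.007, batch size 8). The torchvision model and the
# 90k-iteration poly schedule are placeholders, not the repository's code.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=19)  # 19 Cityscapes train classes

base_lr, max_iter = 0.007, 90_000
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=5e-4)

# Polynomial decay, a common choice for Cityscapes training.
poly = lambda it, power=0.9: max(1.0 - it / max_iter, 0.0) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=poly)
```

Here `scheduler.step()` would be called once per iteration, so `max_iter` has to match the planned number of training iterations for the decay to reach zero at the end of training.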
Issue Analytics
- State:
- Created 5 years ago
- Reactions: 2
- Comments: 21 (7 by maintainers)
Top Results From Across the Web
Cityscapes test Benchmark (Semantic Segmentation)

| Rank | Model | Mean IoU (class) | Year | Tags |
|------|-------|------------------|------|------|
| 2 | HRNetV2 + OCR | 84.5% | 2019 | hrnet |
| 3 | Lawin+ | 84.4% | 2022 | Transformer |
| 4 | EfficientPS | 84.21% | 2020 | |

mIoU on the different classes of the Cityscapes validation set
mIoU on the different classes of the Cityscapes validation set. ... testing on Cityscapes without any adaptation leads to a very low mIoU ...

Benchmark Suite
Using low-level computer vision techniques, we obtain pixel-level and ... Specifically, we achieve 76.6% and 75.9% mIoU on Cityscapes validation and test ...

Failure Detection for Semantic Segmentation on Road ...
Furthermore, we design a deep neural network for predicting mIoU of ... the Cityscapes dataset, the learning rate was reduced by half every ...

arXiv:2009.05205v2 [cs.CV] 24 Jul 2021
labeled Cityscapes by a considerable margin and increases the mIoU on the ADE20K dataset to 47.50.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
In my case the problem was probably caused by the SynchronizedBN layers. When I turn off SyncBN and use nn.BatchNorm2d instead, I get my best result, 72% mIoU on the val set without COCO pretraining. However, it is strange that the mIoU drops in the middle of training. By the way, my setting is bs=4, base_size=796, crop_size=796, backbone=resnet, lr=0.01, 1 GPU (V100). The current mIoU is still far from state-of-the-art performance, so maybe there are other problems.
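
If someone wants to try the same workaround, a generic swap might look like the sketch below; the class name passed as `syncbn_cls` is an assumption and should be whatever SyncBN implementation the repository actually defines.

```python
# Hypothetical helper that replaces a custom synchronized batch-norm layer
# with torch.nn.BatchNorm2d, keeping the learned affine parameters and
# running statistics. `syncbn_cls` is the repository's SyncBN class.
import torch.nn as nn

def replace_syncbn(module, syncbn_cls):
    for name, child in module.named_children():
        if isinstance(child, syncbn_cls):
            bn = nn.BatchNorm2d(child.num_features,
                                eps=child.eps, momentum=child.momentum)
            bn.load_state_dict(child.state_dict(), strict=False)
            setattr(module, name, bn)
        else:
            replace_syncbn(child, syncbn_cls)
    return module
```

Alternatively, if the repository exposes a batch-norm constructor argument when building the model, passing nn.BatchNorm2d there avoids the copy step entirely.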
Update on the results: very low mIoU (around 0.6). I think there is some additional problem in the code, because I also trained on another dataset (BDD100K) and the results are equally bad. Or maybe my training parameters are wrong, but they are similar to yours…
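
Before blaming the training parameters, it may be worth ruling out the evaluation itself. The snippet below is one possible sanity check (not the repository's evaluator), assuming predictions and labels already use the 19 Cityscapes train IDs with 255 as the ignore label.

```python
# Accumulate a confusion matrix over predictions/labels in train-ID space
# (0..18, ignore = 255) and compute per-class and mean IoU.
import numpy as np

NUM_CLASSES = 19

def confusion_matrix(pred, target, num_classes=NUM_CLASSES, ignore=255):
    mask = target != ignore
    idx = num_classes * target[mask].astype(np.int64) + pred[mask].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes,
                                                                num_classes)

def mean_iou(conf):
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return np.nanmean(iou), iou
```

Feeding the ground-truth labels in as predictions should return an mIoU of 1.0; anything lower usually points at a label-ID mapping problem (raw Cityscapes IDs vs. the 19 train IDs) rather than the model.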