Bad results (not bad now)
Although the converted weights produce plausible predictions, they do not yet match the published results of the PSPNet paper.
Current results on the Cityscapes validation set:
classes          IoU     nIoU
--------------------------------
road           : 0.969   nan
sidewalk       : 0.776   nan
building       : 0.871   nan
wall           : 0.532   nan
fence          : 0.464   nan
pole           : 0.302   nan
traffic light  : 0.375   nan
traffic sign   : 0.567   nan
vegetation     : 0.872   nan
terrain        : 0.591   nan
sky            : 0.905   nan
person         : 0.585   0.352
rider          : 0.253   0.147
car            : 0.897   0.698
truck          : 0.606   0.284
bus            : 0.721   0.375
train          : 0.652   0.388
motorcycle     : 0.344   0.147
bicycle        : 0.618   0.348
--------------------------------
Score Average  : 0.626   0.342
--------------------------------
categories       IoU     nIoU
--------------------------------
flat           : 0.974   nan
nature         : 0.876   nan
object         : 0.397   nan
sky            : 0.905   nan
construction   : 0.872   nan
human          : 0.603   0.376
vehicle        : 0.879   0.676
--------------------------------
Score Average  : 0.787   0.526
--------------------------------
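The "Score Average" rows are the mean of the per-class IoUs, ignoring nan entries. For reference, here is a minimal sketch of how per-class IoU and that mean fall out of a confusion matrix; the function and the toy 3-class matrix below are illustrative, not the actual cityscapesScripts code:

```python
import numpy as np

def per_class_iou(conf):
    """Per-class IoU from an NxN confusion matrix.

    conf[gt, pred] counts pixels of ground-truth class gt that were
    predicted as class pred.
    """
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp  # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp  # belongs to the class, but missed
    denom = tp + fp + fn
    # Classes that never occur end up as nan, matching the table above.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, tp / denom, np.nan)

# Toy 3-class example (made-up numbers):
conf = np.array([[50,  2,  1],
                 [ 3, 40,  0],
                 [ 0,  1, 30]])
ious = per_class_iou(conf)
print("per-class IoU:", np.round(ious, 3))
print("mIoU:", np.nanmean(ious))  # the "Score Average" column is this mean
```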
Accuracy of the published code on several validation/testing sets according to the author:
PSPNet50 on ADE20K valset (mIoU/pAcc): 41.68/80.04
PSPNet101 on VOC2012 testset (mIoU): 85.41 (multiscale evaluation!)
PSPNet101 on cityscapes valset (mIoU/pAcc): 79.70/96.38
So we are still missing 79.70 - 62.60 = 17.10 percentage points of mIoU.
Does anyone have an idea where we lose that accuracy?
Comments
@wtliao Unfortunately, I did not (yet) train these weights myself. Sliced/sliding prediction preserves much more detail because the weights were trained on 714x714 crops of the full-resolution image, so prediction is also run on 714x714 crops instead of on a 512x256 downsampled image that is then upsampled back to full resolution. Flipped evaluation means predicting on both the image and its left-right mirrored copy at the same time and averaging the results.
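For anyone trying to reproduce this, a rough sketch of both tricks follows; `model_predict`, the 714x714 crop, and the stride value are stand-ins here, not the repo's actual pipeline:

```python
import numpy as np

def _starts(size, crop, stride):
    """Window start positions along one axis, flush with the far edge."""
    last = max(size - crop, 0)
    starts = list(range(0, last + 1, stride))
    if starts[-1] != last:
        starts.append(last)
    return starts

def predict_sliding(model_predict, image, crop=714, stride=476, n_classes=19):
    """Average overlapping crop predictions over a full-resolution image.

    model_predict is a placeholder callable mapping an HxWx3 crop to
    per-pixel class probabilities (HxWxn_classes).
    """
    h, w = image.shape[:2]
    probs = np.zeros((h, w, n_classes))
    counts = np.zeros((h, w, 1))
    for y in _starts(h, crop, stride):
        for x in _starts(w, crop, stride):
            tile = image[y:y + crop, x:x + crop]
            probs[y:y + crop, x:x + crop] += model_predict(tile)
            counts[y:y + crop, x:x + crop] += 1
    return probs / counts  # every pixel is covered at least once

def predict_flipped(model_predict, image):
    """Average predictions over the image and its left-right mirror."""
    p = model_predict(image)
    p_mirror = model_predict(image[:, ::-1])[:, ::-1]
    return 0.5 * (p + p_mirror)
```

Averaging the overlapping crops (and the mirrored pass) is what recovers the fine detail that a single downsampled forward pass loses.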
Hello,
I would also like to know how to get the evaluation results. I tried the Cityscapes evaluation scripts, but I get the following error when using seg_read images as my input:
Traceback (most recent call last):
  File "evalPixelLevelSemanticLabeling.py", line 696, in <module>
    main()
  File "evalPixelLevelSemanticLabeling.py", line 690, in main
    evaluateImgLists(predictionImgList, groundTruthImgList, args)
  File "evalPixelLevelSemanticLabeling.py", line 478, in evaluateImgLists
    nbPixels += evaluatePair(predictionImgFileName, groundTruthImgFileName, confMatrix, instStats, perImageStats, args)
  File "evalPixelLevelSemanticLabeling.py", line 605, in evaluatePair
    confMatrix[gt_id][pred_id] += c
IndexError: index 34 is out of bounds for axis 0 with size 34
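For what it's worth, evalPixelLevelSemanticLabeling.py indexes its 34x34 confusion matrix directly with the pixel values of the prediction images, so any pixel value outside the label-ID range 0-33 (here apparently 34) triggers exactly this IndexError. If your predictions are stored as trainIds, one possible fix is to map them back to label IDs before evaluating; a sketch, assuming trainId-encoded PNGs (the helper name and paths are placeholders):

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import trainId2label

def train_ids_to_label_ids(pred_path, out_path):
    """Convert a prediction stored as trainIds (0-18) to label IDs (0-33).

    evalPixelLevelSemanticLabeling.py expects label IDs; any other
    pixel value overflows its 34x34 confusion matrix.
    """
    pred = np.array(Image.open(pred_path))
    out = np.full_like(pred, 0)  # 0 = unlabeled
    for train_id, label in trainId2label.items():
        if 0 <= train_id <= 18:
            out[pred == train_id] = label.id
    Image.fromarray(out).save(out_path)
```

Usage would be, e.g., `train_ids_to_label_ids("pred_trainids.png", "pred_labelids.png")` for every prediction image before running the evaluation script.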