NaN mIOU on Test Set
Hi @CoinCheung,
When I change the mode in the evaluation code from val to test, the mIOU is returned as NaN. Do you have an idea what might cause this? The code performs well on the val set, and I get the score that is reported in this repo.
Thanks!
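For context on where the NaN comes from: with a confusion-matrix style evaluator, per-class IoU is intersection divided by union, so a class that never appears in the ground truth produces 0/0. Below is a minimal sketch that reproduces the symptom when every ground-truth pixel is the ignore index; it is an assumption about how such an evaluator works, not this repo's actual code (see the answer below for why the test labels are effectively unavailable).

```python
import numpy as np

# Minimal sketch of a confusion-matrix style mIoU computation, not this
# repo's actual evaluator. It shows how NaN appears when every ground-truth
# pixel is the ignore index.
NUM_CLASSES = 19      # Cityscapes trainId classes (assumption)
IGNORE_INDEX = 255

def per_class_iou(pred, gt):
    valid = gt != IGNORE_INDEX
    hist = np.bincount(
        NUM_CLASSES * gt[valid].astype(np.int64) + pred[valid],
        minlength=NUM_CLASSES ** 2,
    ).reshape(NUM_CLASSES, NUM_CLASSES)
    inter = np.diag(hist)
    union = hist.sum(0) + hist.sum(1) - inter
    return inter / union  # 0 / 0 gives nan for classes that never appear

gt = np.full((4, 4), IGNORE_INDEX, dtype=np.uint8)      # all pixels ignored
pred = np.random.randint(0, NUM_CLASSES, size=(4, 4))
print(np.mean(per_class_iou(pred, gt)))                 # prints nan
```

Using np.nanmean instead of np.mean would hide the NaN, but it would not make a locally computed test-set score meaningful.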
Issue Analytics
- State:
- Created 3 years ago
- Comments: 13 (1 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi @songqi-github Yeah, I have solved the issue. The problem is with our understanding of the test dataset. The ground truths of the test set are never released, because otherwise researchers could claim any mIOU score on it. The way it works is that you run predictions on the test images (RGB), convert all the train_IDs back to label_IDs, and then submit the predictions to the Cityscapes test server for evaluation. The server compares your submitted predictions with the actual ground truths of the test set, checks the true/false positives and negatives, and gives you the final result. That is also why evaluating locally in test mode returns NaN. Hope this helps!
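To illustrate the trainId-to-labelId step, here is a hedged sketch of how the conversion might look before preparing a submission. The mapping table is taken from the cityscapesscripts/helpers/labels.py definitions, and the file names are placeholders; verify both against the official Cityscapes scripts before submitting.

```python
import numpy as np
from PIL import Image

# Mapping from the 19 Cityscapes trainIds to the labelIds expected by the
# official evaluation server (per cityscapesscripts/helpers/labels.py).
# Double-check against the official scripts before submitting.
TRAINID_TO_LABELID = {
    0: 7, 1: 8, 2: 11, 3: 12, 4: 13, 5: 17, 6: 19, 7: 20, 8: 21, 9: 22,
    10: 23, 11: 24, 12: 25, 13: 26, 14: 27, 15: 28, 16: 31, 17: 32, 18: 33,
}

def trainid_to_labelid(pred_trainids: np.ndarray) -> np.ndarray:
    """Convert an HxW prediction of trainIds into labelIds for submission."""
    out = np.zeros_like(pred_trainids, dtype=np.uint8)  # 0 means 'unlabeled'
    for train_id, label_id in TRAINID_TO_LABELID.items():
        out[pred_trainids == train_id] = label_id
    return out

# Placeholder file names, for illustration only.
pred = np.array(Image.open("pred_trainids.png"))
Image.fromarray(trainid_to_labelid(pred)).save("pred_labelids.png")
```

Typically the submission archive contains one such labelId-encoded PNG per test image, which the server then scores against the withheld ground truth.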
@dronefreak I see! Thank you very much, you are really kind!