
Differences in Validation results when training in Pascal Context Dataset

See original GitHub issue

Hi Hang Zhang!

First I want to thank you for the amazing repository.

I’m trying to train DeepLabv3 with a ResNeSt-101 backbone (DeepLab_ResNeSt101_PContext) for semantic segmentation on the Pascal Context dataset. The code runs without any issue; however, my results are still below those of the pre-trained model you provide at https://hangzhang.org/PyTorch-Encoding/model_zoo/segmentation.html:

Model   Pix Accuracy   mIoU
Mine    79.1 %         52.1 %
Yours   81.9 %         56.5 %

I’m using the exact same hyperparameters as you, with the following training command: python train.py --dataset pcontext --model deeplab --aux --backbone resnest101

Is there something I’m missing to reach your results? I assume your model is trained with the Auxiliary Loss but not the Semantic Encoding Loss. Are you maybe using some extra pre-training data?
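
For context, this is roughly how I understand the auxiliary loss being folded into the total loss (a minimal sketch on my side; the 0.2 weight and function names are my assumptions, not necessarily what PyTorch-Encoding actually uses):

```python
import torch.nn.functional as F

def total_loss(main_logits, aux_logits, target, aux_weight=0.2):
    # Cross-entropy on the main head plus a down-weighted auxiliary head.
    # aux_weight=0.2 is a common default, but only my assumption here.
    main = F.cross_entropy(main_logits, target, ignore_index=-1)
    aux = F.cross_entropy(aux_logits, target, ignore_index=-1)
    return main + aux_weight * aux
```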

Thanks in advance!

Alex.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
alexlopezcifuentes commented, Nov 9, 2020

Hi!

Unfortunately, my GPU does not have enough memory to fit a batch size of 16, so I’m trying to simulate it with gradient accumulation. I suppose that is the main problem; I was asking in case I had missed something else.
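
Concretely, my accumulation loop looks roughly like this (a simplified sketch; `model`, `criterion`, `optimizer` and `train_loader` stand in for the real training objects):

```python
accumulation_steps = 4  # e.g. 4 steps of batch 4 to approximate a batch of 16
optimizer.zero_grad()

for step, (images, targets) in enumerate(train_loader):
    outputs = model(images)
    # Scale the loss so the accumulated gradient matches one large batch.
    loss = criterion(outputs, targets) / accumulation_steps
    loss.backward()

    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

One caveat I’m aware of: accumulation reproduces the averaged gradient of a batch of 16, but BatchNorm still sees only the small per-step batch, so its statistics differ from true batch-16 training; that could also contribute to the gap.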

I do use your testing script (https://hangzhang.org/PyTorch-Encoding/model_zoo/segmentation.html#test-pretrained).

So I assume the only problem is the batch size, which is a problem with nearly no solution…

0 reactions
zhanghang1989 commented, Nov 17, 2020

For the experiments in the paper, I used an AWS EC2 p3dn.24xlarge instance with 8x 32 GB V100 GPUs, but that may not be necessary. 16 GB per GPU should be enough for most of the experiments.
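
If you have several smaller GPUs available, synchronized BatchNorm with distributed data parallelism gives you the full effective batch without accumulation. A generic PyTorch sketch, not our launch script (`build_model()` is a placeholder for the segmentation network):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])  # set by the distributed launcher, e.g. torchrun
torch.cuda.set_device(local_rank)

model = build_model().cuda()                             # placeholder, not the repo's builder
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)   # BN statistics shared across all GPUs
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```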

Read more comments on GitHub >

Top Results From Across the Web

PASCAL-Context Dataset - Stanford Computer Science
Class Name   # of Images   Average Area (% of image)
empty        0
accordion    2             5.26%
aeroplane    597           17.41%
Read more >
Evaluation on PASCAL Context validation. We show results ...
Evaluation on PASCAL Context validation. We show results using a balanced and an unbalanced version of our method, as well as the current...
Read more >
Training vs Testing vs Validation Sets | Towards Data Science
To summarise, the training set is -typically- the largest subset created out of the original dataset that is used to fit the models....
Read more >
The PASCAL Visual Object Classes Homepage
The VOC challenge encourages two types of participation: (i) methods which are trained using only the provided "trainval" (training + validation) data; (ii) ......
Read more >
PASCAL Context Dataset - Papers With Code
The PASCAL Context dataset is an extension of the PASCAL VOC 2010 detection challenge, and it contains pixel-wise labels for all training images....
Read more >
