Inconsistency between paper and code
See original GitHub issue

Hi, congratulations on your great work and its acceptance at CVPR '21, and thanks for releasing the code and model weights.
In the paper, you mention using DeepLabv2 with a ResNet-101 backbone. However, your code actually uses a modified ASPP module (ClassifierModule2 in models/deeplabv2.py), whereas ClassifierModule is the one that implements the original DeepLabv2 ASPP. Similar issues were raised here and here, noting that this type of ASPP module comes from DeepLabv3+, which performs considerably better than DeepLabv2 (both issues were raised in Jan. 2020). Could you please confirm this, and if you have also run experiments with the original DeepLabv2 model, could you report those results for a fair comparison with prior art?
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 1
- Comments: 6 (3 by maintainers)
Top GitHub Comments
I think it’s better for you to report results with DeepLabv2 (it should not be difficult). The mainstream choice for segmentation is DeepLabv2, as in the SDCA and FADA papers.

See also the ablation study in our paper: conventional self-training trained with our modified ASPP performs similarly to the original DeepLabv2 ASPP (45.9 mIoU, as reported in CRST).
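For context, the original DeepLabv2 ASPP head under discussion (what ClassifierModule is expected to implement) can be sketched as below. This is a minimal illustration, not the repository's code: the channel count, class count, and dilation rates are the standard DeepLabv2 choices for Cityscapes-style segmentation, and the class name ASPPv2 is made up here.

```python
# Sketch (assumed, not the authors' code): the original DeepLabv2 ASPP head.
# Four parallel 3x3 convolutions with dilation rates {6, 12, 18, 24} whose
# per-class logits are summed -- no concatenation, no 1x1 fusion conv, and no
# image-level pooling (those additions are what characterize DeepLabv3/v3+).
import torch
import torch.nn as nn


class ASPPv2(nn.Module):
    def __init__(self, in_channels=2048, num_classes=19, rates=(6, 12, 18, 24)):
        super().__init__()
        # One dilated conv branch per rate; padding == dilation keeps the
        # spatial resolution of the feature map unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, num_classes, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # DeepLabv2 simply sums the branch logits.
        out = self.branches[0](x)
        for branch in self.branches[1:]:
            out = out + branch(x)
        return out


head = ASPPv2()
feat = torch.randn(1, 2048, 33, 33)  # e.g. ResNet-101 features
logits = head(feat)
print(tuple(logits.shape))  # (1, 19, 33, 33): same spatial size, per-class logits
```

A fair comparison would swap the summed-logits head above in for the modified module while keeping the backbone and training recipe fixed.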