Reproducing model zoo results on GQA
Hello,
I am trying to reproduce the results reported in the model zoo for the GQA dataset. I am running the Train+val -> Test-dev setting with mcan_large on GQA using the following command:
python3 run.py --RUN='train' --SPLIT='train+val' --MODEL='mcan_large' --DATASET='gqa' --GPU='7' --VERSION='default_frcn+bbox+grid'
The accuracy from local evaluation is 56.23%, and the GQA evaluation server reports 56.57% (within a reasonable margin of each other, I guess). However, the accuracy reported in the model zoo for MCAN-large (frcn+bbox+grid) is 58.10%, which seems like a significant difference. Could you please tell me if I am doing something wrong? I used all the provided features as-is and did not modify the code.
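For reference, I generated the test-dev submission with a command along the following lines (a sketch based on OpenVQA's documented --CKPT_V/--CKPT_E test arguments; the epoch number below is a placeholder for whichever checkpoint the run saved last):
# hypothetical test run: loads ckpts/ckpt_default_frcn+bbox+grid/epoch13.pkl and writes the prediction file
python3 run.py --RUN='test' --MODEL='mcan_large' --DATASET='gqa' --GPU='7' --CKPT_V='default_frcn+bbox+grid' --CKPT_E=13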

We use SEED=1016 for mcan_small on GQA.
If possible, please paste your log information here.
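For example, the seed can be fixed on the command line when retraining (a sketch assuming the --SEED argument described in OpenVQA's getting-started guide; the version name is just an illustrative label):
# retrain with a fixed random seed so the run is reproducible
python3 run.py --RUN='train' --SPLIT='train+val' --MODEL='mcan_large' --DATASET='gqa' --GPU='7' --SEED=1016 --VERSION='mcan_large_seed1016'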