Low mAP on pytorch 1.0 branch
I downloaded the trained model from here.
| model | #GPUs | batch size | lr | lr_decay | max_epoch | time/epoch | mem/GPU | mAP |
|---|---|---|---|---|---|---|---|---|
| VGG-16 | 1 | 1 | 1e-3 | 5 | 6 | 0.76 hr | 3265MB | 70.1 |
I am using the PyTorch 1.0 branch, and the mAP on PASCAL VOC 2007 is 65.8, which is lower than the reported 70.1. Here are the APs for each category:
VOC07 metric? Yes
AP for aeroplane = 0.6559
AP for bicycle = 0.7750
AP for bird = 0.6105
AP for boat = 0.4600
AP for bottle = 0.4542
AP for bus = 0.7689
AP for car = 0.7686
AP for cat = 0.8393
AP for chair = 0.4700
AP for cow = 0.6777
AP for diningtable = 0.6060
AP for dog = 0.7668
AP for horse = 0.8110
AP for motorbike = 0.6922
AP for person = 0.7295
AP for pottedplant = 0.3714
AP for sheep = 0.6060
AP for sofa = 0.6509
AP for train = 0.7408
AP for tvmonitor = 0.7092
Mean AP = 0.6582
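As a quick sanity check that the averaging step itself is not at fault: the reported Mean AP is just the unweighted mean of the 20 per-class APs listed above, which can be confirmed in a few lines of Python.

```python
# Per-class APs copied from the PyTorch 1.0 run above (aeroplane .. tvmonitor)
aps = [0.6559, 0.7750, 0.6105, 0.4600, 0.4542, 0.7689, 0.7686,
       0.8393, 0.4700, 0.6777, 0.6060, 0.7668, 0.8110, 0.6922,
       0.7295, 0.3714, 0.6060, 0.6509, 0.7408, 0.7092]

mean_ap = sum(aps) / len(aps)
print(round(mean_ap, 4))  # → 0.6582, matching the reported Mean AP
```

So the gap to 70.1 comes from the per-class APs themselves, not from how they are combined.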
I also tested on the master branch using PyTorch 0.4.0. The results are as follows:
VOC07 metric? Yes
AP for aeroplane = 0.7055
AP for bicycle = 0.7761
AP for bird = 0.6638
AP for boat = 0.5457
AP for bottle = 0.5221
AP for bus = 0.8086
AP for car = 0.8450
AP for cat = 0.8411
AP for chair = 0.5025
AP for cow = 0.7824
AP for diningtable = 0.6539
AP for dog = 0.7754
AP for horse = 0.8294
AP for motorbike = 0.7267
AP for person = 0.7726
AP for pottedplant = 0.4223
AP for sheep = 0.7188
AP for sofa = 0.6555
AP for train = 0.7526
AP for tvmonitor = 0.7277
Mean AP = 0.7014
Is there something wrong with the evaluation code?
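For reference, the "VOC07 metric" printed by the evaluation script refers to the 11-point interpolated AP from the PASCAL VOC 2007 protocol: precision is sampled at recall thresholds 0.0, 0.1, ..., 1.0, taking the maximum precision over all recalls at or above each threshold. A minimal sketch of that computation (the recall/precision values below are made-up toy data, not from this model):

```python
def voc07_ap(recalls, precisions):
    """11-point interpolated AP (PASCAL VOC 2007 metric):
    average the max precision achieved at recall >= t,
    for t in {0.0, 0.1, ..., 1.0}."""
    ap = 0.0
    for t in [i / 10.0 for i in range(11)]:
        # max precision over all points with recall >= t (0 if none reach t)
        p = max((p for r, p in zip(recalls, precisions) if r >= t), default=0.0)
        ap += p / 11.0
    return ap

# Toy detector whose precision decays as recall rises
recalls    = [0.1, 0.2, 0.4, 0.6, 0.8]
precisions = [1.0, 0.9, 0.7, 0.6, 0.5]
print(round(voc07_ap(recalls, precisions), 4))  # → 0.5909
```

If both branches report "VOC07 metric? Yes" and run the same `voc_eval`-style code on identical detections, the metric itself should not explain a ~4.3-point gap, which points back at the detections produced by the model.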
Issue Analytics
- State:
- Created: 5 years ago
- Comments: 16
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi,
For the PyTorch-1.0 branch, I re-ran the code and it gives me 70.26% mAP when using VGG-16 and 74.90% using ResNet-101. These numbers are achieved using the default parameter values on a single GPU. These numbers are consistent with the ones reported by @jwyang in the readme. Here are the details:
VGG-16 (7th epoch)
ResNet-101 (7th epoch)
Thank you very much, @adityaarun1. I saw your reply at https://github.com/ruotianluo/pytorch-faster-rcnn/pull/122. I trained a model from scratch and got 70.26% mAP. It seems that it is important to retrain the model in PyTorch 1.0. Thanks again, you have helped me a lot.