Very low .95 mAP
See original GitHub issue.
When training on a custom dataset I get a very low .95 mAP. The dataset contains 1k images with only one class, and every mask is about 2-3× bigger than the masks in #270.
Output:
[ 4] 1870 || B: 1.221 | C: 1.279 | M: 1.798 | S: 0.011 | I: 0.075 | T: 4.384 || ETA: 14 days, 15:51:55 || timer: 0.533
| all | .50 | .55 | .60 | .65 | .70 | .75 | .80 | .85 | .90 | .95 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
box | 58.71 | 96.89 | 94.81 | 93.38 | 87.18 | 82.01 | 71.20 | 44.11 | 15.30 | 2.17 | 0.02 |
mask | 66.74 | 96.83 | 95.80 | 94.80 | 93.55 | 90.20 | 81.55 | 69.25 | 39.94 | 5.52 | 0.01 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
[148] 55870 || B: 0.205 | C: 0.350 | M: 0.775 | S: 0.004 | I: 0.020 | T: 1.425 || ETA: 14 days, 9:46:30 || timer: 0.550
| all | .50 | .55 | .60 | .65 | .70 | .75 | .80 | .85 | .90 | .95 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
box | 83.06 | 98.93 | 98.93 | 98.93 | 98.93 | 98.64 | 97.27 | 93.84 | 82.90 | 56.11 | 6.11 |
mask | 81.25 | 98.73 | 98.73 | 98.73 | 98.73 | 97.63 | 96.36 | 92.71 | 80.47 | 49.64 | 0.77 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
[348] 130870 || B: 0.226 | C: 0.338 | M: 0.780 | S: 0.004 | I: 0.028 | T: 1.375 || ETA: 13 days, 17:12:51 || timer: 0.520
| all | .50 | .55 | .60 | .65 | .70 | .75 | .80 | .85 | .90 | .95 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
box | 83.43 | 98.93 | 98.93 | 98.93 | 98.67 | 97.66 | 97.66 | 95.17 | 84.29 | 55.45 | 8.59 |
mask | 82.08 | 98.73 | 98.73 | 98.73 | 98.73 | 98.59 | 97.55 | 93.88 | 81.99 | 52.20 | 0.71 |
-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
After ~50k iterations the mask loss just oscillates between 0.7 and 0.9. I tried YOLACT and YOLACT++ at 550 px and 700 px input sizes on the default configs. I thought it might be because of the mask size, but the masks in #270 were even smaller and the results were much better. Results with Mask R-CNN are also much higher.
What can I try to make the .95 mAP higher?
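One thing worth checking first is whether the Mask R-CNN and YOLACT numbers come from the same evaluation code; scoring both sets of predictions with one pycocotools run removes that variable. A minimal sketch, assuming the ground truth and predictions are exported in COCO format (the file names are placeholders):

```python
# Sketch: mask AP at IoU = 0.95 via pycocotools, so both models are scored
# by identical eval code. File names below are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('instances_val.json')            # ground truth, COCO format
coco_dt = coco_gt.loadRes('yolact_masks.json')  # predictions, COCO results format

ev = COCOeval(coco_gt, coco_dt, iouType='segm')
ev.evaluate()
ev.accumulate()

# eval['precision'] is indexed [IoU thr, recall bin, class, area, maxDets].
# Default IoU thresholds are 0.50:0.05:0.95, so index -1 is 0.95;
# area index 0 is 'all', maxDets index -1 is 100.
prec = ev.eval['precision'][-1, :, :, 0, -1]
print('mask AP@95: %.2f' % (100 * prec[prec > -1].mean()))
```

If Mask R-CNN's masks also collapse at 0.95 under this script, the gap in the training log above is real rather than an artifact of different eval harnesses.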
Issue Analytics
- Created: 4 years ago
- Reactions: 2
- Comments: 7 (3 by maintainers)
Top GitHub Comments
@enhany When you test Mask R-CNN’s AP@95, how exactly are you computing it? The COCO eval suite typically doesn’t report AP@95.
It’s just weird because there’s that sudden dip at 95 compared to 90, and on COCO, YOLACT actually gets a higher AP@95 than Mask R-CNN. Maybe it’s because we crop masks with boxes? Do the predicted masks often look cut off by their boxes? If so, maybe try increasing the size of the GT boxes (see the sketch below).
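For anyone who wants to try that suggestion, a quick way is to pad the GT boxes in the annotation file before training. A rough sketch, assuming COCO-format annotations; the 10% padding and the file names are illustrative guesses, not values from the maintainers:

```python
# Sketch: pad COCO ground-truth boxes so box-cropped masks are clipped less.
# PAD and the file names are assumptions; tune to your data.
import json

PAD = 0.10  # grow each box by 10% of its size per side (assumed value)

with open('instances_train.json') as f:
    data = json.load(f)

img_size = {im['id']: (im['width'], im['height']) for im in data['images']}

for ann in data['annotations']:
    x, y, w, h = ann['bbox']                  # COCO bbox format: [x, y, w, h]
    W, H = img_size[ann['image_id']]
    dx, dy = w * PAD, h * PAD
    nx, ny = max(0.0, x - dx), max(0.0, y - dy)
    ann['bbox'] = [nx, ny,
                   min(W - nx, w + 2 * dx),   # clamp width to image bounds
                   min(H - ny, h + 2 * dy)]   # clamp height to image bounds

with open('instances_train_padded.json', 'w') as f:
    json.dump(data, f)
```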
@scaledinferance, those are loss values during training. The goal is for those numbers to go to 0, which would mean the model does perfectly on your training data. The letters are short for Box, Class, Mask, Semantic segmentation, and Total (the extra I column in the YOLACT++ logs above is the mask IoU loss).
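To see whether the mask loss really has plateaued around 0.7-0.9, it helps to turn the console lines above into a curve. A small sketch, assuming stdout was captured to a file called train.log:

```python
# Sketch: extract iteration number and mask loss (the M column) from a
# captured YOLACT training log in the format shown above.
import re

line_re = re.compile(r'\[\s*\d+\]\s+(\d+)\s+\|\|(.+?)\|\|')

iters, mask_loss = [], []
with open('train.log') as f:
    for line in f:
        m = line_re.search(line)
        if not m:
            continue
        losses = dict(re.findall(r'([A-Z]):\s*([\d.]+)', m.group(2)))
        if 'M' in losses:
            iters.append(int(m.group(1)))
            mask_loss.append(float(losses['M']))

# e.g. feed (iters, mask_loss) to matplotlib, or eyeball the tail:
print(list(zip(iters, mask_loss))[-10:])
```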
@dbolya How many images are required for a custom dataset? Also, max_iter is 800000; can I reduce it to save time, or when should I stop training and still get a reasonably good model? Thanks ~
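For what it's worth, the training schedule lives in data/config.py, and runs are usually shortened by copying a base config and scaling max_iter and lr_steps together. A sketch in YOLACT's config style; the iteration counts below are assumptions for illustration, not maintainer-recommended values:

```python
# Sketch (data/config.py style): a shortened schedule for a small dataset.
# The counts are illustrative assumptions, not tuned recommendations.
my_short_config = yolact_base_config.copy({
    'name': 'my_short',
    'max_iter': 120000,                    # down from the default 800000
    'lr_steps': (60000, 100000, 110000),   # keep LR decay points proportional
})
```

In practice you can also watch the validation mAP printed during training and stop once it stops improving.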