Why did I get an mAP value of -1?
❓ Questions and Help
After training on the coco2017 dataset, I got the following output:
2018-10-26 07:57:49,972 maskrcnn_benchmark.inference INFO: Total inference time: 0:30:09.844855 (0.08900146815007108 s / img per device, on 2 devices)
2018-10-26 07:57:57,928 maskrcnn_benchmark.inference INFO: Preparing results for COCO format
2018-10-26 07:57:57,928 maskrcnn_benchmark.inference INFO: Preparing bbox results
2018-10-26 07:58:06,302 maskrcnn_benchmark.inference INFO: Evaluating predictions
Loading and preparing results...
DONE (t=7.88s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=84.73s).
Accumulating evaluation results...
DONE (t=22.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
2018-10-26 08:00:20,193 maskrcnn_benchmark.inference INFO: OrderedDict([('bbox', OrderedDict([('AP', -1.0), ('AP50', -1.0), ('AP75', -1.0), ('APs', -1.0), ('APm', -1.0), ('APl', -1.0)]))])
I used the config file e2e_faster_rcnn_R_50_FPN_1x.yaml and modified the lr to 0.01.
Issue Analytics
- State: Closed
- Created 5 years ago
- Comments: 6 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi,
To reproduce the results on fewer than 8 GPUs, you do indeed need to change the learning rate (which is correct in your case), but you also need to increase the number of iterations from the default by a factor of 4x, and scale the learning-rate schedule by the same factor. So you should have 90000 * 4 = 360000 iterations, and change the lr schedule steps to [240000, 320000]. Check the single-GPU training section of the README for more information. I'm closing the issue as it doesn't seem to be a bug, but please let me know if you have other questions.
I followed your advice and got the following results on the coco2017 val dataset:
Thanks!