Detection not same as with python with same model and picture
When using the python object detection script that comes with the Object Detection API (slightly altered to fetch different images and to use my own trained model) and the example program for object detection with the same input images, the same model, and the same label map, there is a major discrepancy. Below is a JSON dump I create with the python script, followed by some debug info from the C# program for the same image:
python:
[
{
"image_path": "./100percent.jpg"
},
{
"boundingBoxes": [
[
{
"xmin": 229,
"ymin": 104,
"xmax": 320,
"ymax": 529
},
{
"probability": 0.9999961853027344,
"detection_class": 1,
"label": "Feature 1"
}
],
[
{
"xmin": 409,
"ymin": 134,
"xmax": 448,
"ymax": 330
},
{
"probability": 0.9997336268424988,
"detection_class": 2,
"label": "Feature 2"
}
],
[
{
"xmin": 340,
"ymin": 96,
"xmax": 481,
"ymax": 647
},
{
"probability": 0.9950695037841797,
"detection_class": 3,
"label": "Feature 3"
}
]
]
},
{
"shape": {
"dimx": 820,
"dimy": 700
}
}
]
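For comparing the two outputs programmatically, the python-side dump above can be parsed with the standard library. This is a minimal sketch that assumes the structure shown here (a list of three dicts holding `image_path`, `boundingBoxes`, and `shape`); only one box is included for brevity:

```python
import json

# A single-box version of the dump shown above (assumed structure:
# [ {image_path}, {boundingBoxes}, {shape} ]).
dump = json.loads("""
[
  {"image_path": "./100percent.jpg"},
  {"boundingBoxes": [
    [
      {"xmin": 229, "ymin": 104, "xmax": 320, "ymax": 529},
      {"probability": 0.9999961853027344, "detection_class": 1, "label": "Feature 1"}
    ]
  ]},
  {"shape": {"dimx": 820, "dimy": 700}}
]
""")

image_path = dump[0]["image_path"]
boxes = dump[1]["boundingBoxes"]       # list of [coords, metadata] pairs
shape = dump[2]["shape"]               # image dimensions in pixels

for coords, meta in boxes:
    print(image_path, meta["label"], coords, round(meta["probability"], 4))
```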
Here is the result from TensorflowSharp. Only one feature was detected, with a different confidence.
Processing picture: ./100percent.jpg
Class: Feature 1[1]
xmin: 0,147357 ymin: 0,2830129 xmax: 0,7979434 ymax: 0,402808
Confidence: 95,5%
The overall detection rate is about 10% of that achieved with python: most pictures get no detections at all, and some get just one of the three features. The bounding boxes also vary greatly between what python and C# detect. I suspect the JPEG decoding returns different arrays, but this needs more testing. For python I use cv2 to load the image into a numpy array, but I get the same results with tensorflow's decode_jpeg function; for C# I basically use what is given in the example.
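Note that the TensorflowSharp output above prints coordinates in the normalized [0, 1] range, while the python dump uses pixel values. Before comparing boxes between the two programs, the normalized values can be scaled by the image dimensions. A small sketch, assuming the `dimx`/`dimy` values from the `shape` entry in the JSON and the C# box printed above:

```python
def to_pixels(box, width, height):
    """Scale a normalized (xmin, ymin, xmax, ymax) box to pixel coordinates."""
    xmin, ymin, xmax, ymax = box
    return (round(xmin * width), round(ymin * height),
            round(xmax * width), round(ymax * height))

# Image dimensions from the "shape" entry in the python dump.
dimx, dimy = 820, 700

# The box printed by the TensorflowSharp example for Feature 1.
csharp_box = (0.147357, 0.2830129, 0.7979434, 0.402808)

print(to_pixels(csharp_box, dimx, dimy))  # -> (121, 198, 654, 282)
```

Even after scaling, the C# box does not match the python box for Feature 1 (xmin 229, ymin 104, xmax 320, ymax 529), which supports the suspicion that the two programs feed different pixel arrays into the model.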
Unfortunately I do not have time to do much more than write this issue, and I have to make a workaround with python for now.
As a note, this can also be seen with the test image: only 2 kites are detected, while python detects nearly all of them, along with the people.
Issue Analytics
- Created: 6 years ago
- Comments: 13 (1 by maintainers)
@FalcoGer I have two results: with python my model works well, but with TensorflowSharp my model does not perform well. Help me. Is TensorflowSharp ready, or ...?
I am having the same problem with a custom network as well. Classifications are more or less random when using the C# code.
To add a variable, my model was stored in an hdf5 file, due to Keras. Could the conversion process to a TensorFlow pb format have broken the model? What was your workflow?

P.S. Would it be better to reopen this issue, given the number of people experiencing the same problem?