Embeddings classification ability
Hello, I wanted to compare different face recognition techniques in a classification context. For this I reused the classification experiment from Openface (https://github.com/cmusatyalab/openface/blob/master/evaluation/lfw-classification.py) and integrated Facenet into it. The results, however, are not what I expected: even with only 10 different people, accuracy is only around 0.5. Are the resulting image embeddings different from Openface's in their ability to be used directly for classification with a classifier like an SVM?
At the moment I'm feeding a single image to the network like this and using the resulting embedding as the representation for classification/evaluation:
imgs = np.reshape(img, (1, 160, 160, 3)) # img is (160, 160, 3)
feed_dict = { images_placeholder:imgs }
emb = sess.run(embeddings, feed_dict=feed_dict)
rep = emb[0]
Or is the error perhaps in the evaluation of the results? I'm currently using the same accuracy calculation as Openface, the accuracy_score function from sklearn.metrics.
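For context, the evaluation described above boils down to training a classifier (such as an SVM) on the per-image embeddings and scoring its predictions with sklearn's accuracy_score. A minimal self-contained sketch of that pipeline, using synthetic stand-in embeddings (the identity count, cluster centers, and noise level are made up for illustration and are not from the original experiment), looks like:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)
n_people, per_person, dim = 10, 20, 128

# Synthetic stand-in for face embeddings: one Gaussian cluster per identity.
centers = rng.randn(n_people, dim)
X = np.vstack([c + 0.1 * rng.randn(per_person, dim) for c in centers])
y = np.repeat(np.arange(n_people), per_person)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Train a linear SVM on the embeddings and score it the same way
# as the Openface evaluation: accuracy_score on held-out predictions.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

With well-separated embeddings a setup like this should score close to 1.0, which is why a value around 0.5 at 10 identities points at the embeddings (or their preprocessing) rather than at the evaluation itself.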
Regards
Issue Analytics
- State:
- Created 7 years ago
- Comments: 17 (3 by maintainers)
Top GitHub Comments
You can try running my version of the script, see if anything changes. https://gist.github.com/lodemo/f49ac4a7402d2de3163cf5adfad79d43
I split most methods into separate Openface and Facenet versions; it's a little redundant, but it works for now.
I looked into it further and replaced the data reading with the routine used in Facenet. I had also missed prewhitening completely, which was probably the cause, or perhaps scipy's misc.imread() returns different data than cv2.imread()…
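For reference, the prewhitening step normalizes each input image to zero mean and unit variance before it is fed to the network. A minimal NumPy sketch of that normalization (my reading of the prewhiten function in the Facenet codebase; the std clamp value is what I recall, so treat it as an assumption) looks like:

```python
import numpy as np

def prewhiten(x):
    # Normalize the image to zero mean and (roughly) unit variance.
    # The std is clamped from below so near-constant images do not
    # cause a division by ~0.
    mean = np.mean(x)
    std = np.std(x)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    return (x - mean) / std_adj

# Applied before reshaping and feeding to the network, e.g.:
# imgs = np.reshape(prewhiten(img), (1, 160, 160, 3))
```

Skipping this step means the network sees raw 0-255 pixel values instead of the normalized inputs it was trained on, which would explain embeddings that cluster poorly for classification.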
After this the results improved significantly, even more than I expected compared to the Openface results.
What would you say causes such a difference from Openface, the different loss function? The embedding dimensions are the same.