
Embeddings classification ability

See original GitHub issue

Hello, I wanted to create a comparison between different face recognition techniques in a classification context. For this I reused the classification experiment from OpenFace (https://github.com/cmusatyalab/openface/blob/master/evaluation/lfw-classification.py) and integrated FaceNet into it. The results, however, are not what I expected: even with only 10 different people, accuracy is only around 0.5. Are the resulting image embeddings different from OpenFace's in their ability to be used directly for classification with a classifier like an SVM?

At the moment I'm feeding a single image to the network like this and using the resulting embedding as the representation for classification/evaluation:

imgs = np.reshape(img, (1, 160, 160, 3))  # img is (160, 160, 3)
feed_dict = {images_placeholder: imgs}
emb = sess.run(embeddings, feed_dict=feed_dict)
rep = emb[0]

Or is the error perhaps in the evaluation of the results? I'm currently using the same accuracy calculation as OpenFace, the accuracy_score function from sklearn.metrics.

Regards
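
For context, a classifier-on-embeddings setup like the one in the OpenFace evaluation script can be sketched as follows. This is a minimal, illustrative example with synthetic stand-in data (in practice each row of X would be one rep vector from the snippet above and y the corresponding identity labels); the linear SVM and the 75/25 split are assumptions here, not taken from either script.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in embeddings: 10 identities, 20 images each, 128-D vectors.
rng = np.random.RandomState(0)
n_people, imgs_per_person, emb_dim = 10, 20, 128
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(imgs_per_person, emb_dim))
               for i in range(n_people)])
y = np.repeat(np.arange(n_people), imgs_per_person)

# Hold out part of the data, train an SVM on the embeddings, and score it
# with the same accuracy_score function mentioned above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel='linear', C=1.0)
clf.fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, clf.predict(X_test)))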

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 17 (3 by maintainers)

Top GitHub Comments

6 reactions
lodemo commented, Feb 9, 2017

You can try running my version of the script and see if anything changes: https://gist.github.com/lodemo/f49ac4a7402d2de3163cf5adfad79d43

I split most of the methods into separate OpenFace and FaceNet versions; it's a little redundant, but it works for now.

6 reactions
lodemo commented, Jan 27, 2017

I looked into it further and replaced the data reading with the routine used in FaceNet. I had also missed prewhitening completely, which was probably the cause, or scipy.misc.imread() returns different data than cv2.imread()…

After this the results improved significantly, even more than I expected, compared to the OpenFace results.

[accuracies plot attached]

What would you say is the cause of such a difference from OpenFace: the different loss function? The embedding dimensions are the same.
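
For reference, the prewhitening step mentioned above normalizes each image to zero mean and unit standard deviation before it is fed to the network, and cv2.imread() returns BGR channel order while scipy.misc.imread() returns RGB. A minimal sketch of both steps (prewhiten is along the lines of the helper in the facenet repo; load_image is an illustrative helper name, not taken from the scripts above):

import cv2
import numpy as np

def prewhiten(x):
    # Normalize the whole image to zero mean and unit standard deviation.
    mean = np.mean(x)
    std = np.std(x)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    return (x - mean) / std_adj

def load_image(path, size=160):
    # cv2.imread() returns BGR; convert to RGB before resizing and prewhitening.
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size))
    return prewhiten(img.astype(np.float32))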
