
Why validate on LFW with phase_train_placeholder:True?


This relates to #56 and #72. I checked the TensorFlow-Slim image classification library's evaluation code and found that `is_training` is set to `False` in `eval_image_classifier.py`:

```python
####################
# Select the model #
####################
network_fn = nets_factory.get_network_fn(
    FLAGS.model_name,
    num_classes=(dataset.num_classes - FLAGS.labels_offset),
    is_training=False)

##############################################################
# Create a dataset provider that loads data from the dataset #
##############################################################
provider = slim.dataset_data_provider.DatasetDataProvider(
    dataset,
    shuffle=False,
    common_queue_capacity=2 * FLAGS.batch_size,
    common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
label -= FLAGS.labels_offset

#####################################
# Select the preprocessing function #
#####################################
preprocessing_name = FLAGS.preprocessing_name or FLAGS.model_name
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
    preprocessing_name,
    is_training=False)
```

So I think `is_training` should be set to `False` when computing embeddings for new faces. But when I validate on LFW with `is_training=False`, accuracy drops to a low 67%. Does this mean the model is not well trained?

Issue Analytics

  • State: closed
  • Created 7 years ago
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

1 reaction
LiuzcEECS commented, Nov 17, 2016

I think the moving-average decay was too high (0.9997; this seems to have been fixed yesterday) for the model to track the variance, so `moving_variance` stayed essentially unchanged during training. When we validate the model on LFW with `phase_train_placeholder:True`, TensorFlow doesn't use the saved `moving_variance`; instead, it computes the mean and variance of the mini-batch with `tf.nn.moments`. So my solution is to just set `phase_train_placeholder:True`, or alternatively write another batch_norm module that uses `tf.nn.moments` when `phase_train_placeholder` is `False`, instead of using the saved `moving_variance` the way tf.slim does.
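The failure mode described above can be sketched in plain NumPy (a hypothetical `batch_norm` helper for illustration, not facenet's actual code): if the moving statistics never move away from their initial values because the decay is too high, normalizing with them (the `phase_train_placeholder:False` path) leaves the activations badly scaled, while per-mini-batch statistics (what `tf.nn.moments` computes) still normalize correctly.

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, use_batch_stats, eps=1e-5):
    """Minimal batch-norm sketch.

    use_batch_stats=True mimics phase_train_placeholder:True
    (statistics computed from the mini-batch, as tf.nn.moments does);
    False mimics inference with the stored moving averages.
    """
    if use_batch_stats:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

# Activations with mean ~3 and std ~2; the moving stats are stuck at
# their initial values (mean 0, variance 1) because a decay of 0.9997
# barely updates them over a short training run.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 8))

train_mode = batch_norm(x, 0.0, 1.0, use_batch_stats=True)
eval_mode = batch_norm(x, 0.0, 1.0, use_batch_stats=False)

# Batch statistics normalize properly; stale moving stats do not.
print(abs(train_mode.mean()), train_mode.std())  # near 0 and 1
print(eval_mode.mean())                          # far from 0
```

With batch statistics the output is properly standardized, while the stale-moving-stats path effectively passes the raw activations through, which is consistent with the large accuracy gap the issue reports between the two placeholder settings.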

0 reactions
davidsandberg commented, Dec 6, 2016

Closing due to inactivity. Open if needed.
