Why validate on LFW with phase_train_placeholder:True?
This relates to #56 and #72. I checked the TensorFlow-Slim image classification library's evaluation code and found that `is_training` is set to `False` in `eval_image_classifier.py`:

```python
####################
# Select the model #
####################
network_fn = nets_factory.get_network_fn(
    FLAGS.model_name,
    num_classes=(dataset.num_classes - FLAGS.labels_offset),
    is_training=False)

##############################################################
# Create a dataset provider that loads data from the dataset #
##############################################################
provider = slim.dataset_data_provider.DatasetDataProvider(
    dataset,
    shuffle=False,
    common_queue_capacity=2 * FLAGS.batch_size,
    common_queue_min=FLAGS.batch_size)
[image, label] = provider.get(['image', 'label'])
label -= FLAGS.labels_offset

#####################################
# Select the preprocessing function #
#####################################
preprocessing_name = FLAGS.preprocessing_name or FLAGS.model_name
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
    preprocessing_name,
    is_training=False)
```
So I think `is_training` should be set to `False` when computing embeddings for new faces. But when I validate on LFW with `is_training` set to `False`, the accuracy is only about 67%. Does this mean the model is not training well?
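For context, here is a minimal sketch of the feed in question. The tensor names (`input:0`, `embeddings:0`, `phase_train:0`) follow the facenet repo's conventions, but the checkpoint paths and input shapes are placeholders; this is not the repo's actual `validate_on_lfw.py`.

```python
# Minimal TF1-style sketch of running embeddings with the phase placeholder.
# Paths and the input batch are illustrative, not real files/data.
import numpy as np
import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    # Restore a trained facenet model (paths are placeholders).
    saver = tf.train.import_meta_graph('model.meta')
    saver.restore(sess, 'model.ckpt')
    graph = tf.get_default_graph()

    images = graph.get_tensor_by_name('input:0')
    embeddings = graph.get_tensor_by_name('embeddings:0')
    phase_train = graph.get_tensor_by_name('phase_train:0')

    faces = np.random.rand(32, 160, 160, 3).astype(np.float32)  # stand-in for aligned LFW crops

    # Inference is normally run with phase_train=False, so batch norm uses the
    # saved moving mean/variance; with phase_train=True it would instead
    # normalize with the statistics of this very mini-batch.
    emb = sess.run(embeddings, feed_dict={images: faces, phase_train: False})
```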
Issue Analytics
- State: Closed
- Created 7 years ago
- Comments: 9 (5 by maintainers)
Top GitHub Comments
I think the decay of the moving averages was too high (0.9997; this seems to have been fixed yesterday) for them to record the variance, so `moving_variance` stayed essentially unchanged during training. When we validate the model on LFW with `phase_train_placeholder: True`, TensorFlow doesn't use the `moving_variance` we saved; instead it just computes the mean and variance of each mini-batch with `tf.nn.moments`. So my solution is simply to set `phase_train_placeholder: True`. Alternatively, you could write another `batch_norm` module that uses `tf.nn.moments` when `phase_train_placeholder` is set to `False`, instead of using the saved `moving_variance` as tf.slim does.

Closing due to inactivity. Reopen if needed.
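To make this concrete: slim's `batch_norm` updates its statistics as `moving_var = decay * moving_var + (1 - decay) * batch_var`, so with `decay = 0.9997` each step closes only 0.03% of the gap; after 1,000 updates the moving variance has moved only about 26% of the way toward the true value (1 − 0.9997^1000 ≈ 0.26). Below is a rough sketch, not the commenter's actual code, of the suggested workaround: a batch-norm layer that always normalizes with `tf.nn.moments` (per-batch statistics), sidestepping a possibly stale `moving_variance`. All names are illustrative.

```python
# Rough TF1-style sketch of the workaround described above: batch norm that
# always uses the statistics of the current mini-batch, even at eval time.
import tensorflow as tf

def batch_norm_batch_stats(x, scope='bn', epsilon=1e-3):
    """Normalize x with tf.nn.moments instead of saved moving statistics."""
    with tf.variable_scope(scope):
        depth = x.get_shape()[-1]
        beta = tf.get_variable('beta', [depth],
                               initializer=tf.zeros_initializer())
        gamma = tf.get_variable('gamma', [depth],
                                initializer=tf.ones_initializer())
        # Mean/variance over all axes except channels (e.g. [0, 1, 2] for
        # NHWC) -- the same statistics batch norm uses in training mode.
        mean, variance = tf.nn.moments(x, axes=list(range(len(x.get_shape()) - 1)))
        return tf.nn.batch_normalization(x, mean, variance, beta, gamma, epsilon)
```

Feeding `phase_train_placeholder: True` to the stock model has the same effect on the normalization statistics, which is why it recovers the expected LFW accuracy.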