Why do centers get updated?
Hi, I don't quite understand how the centers get updated in David's implementation. Could anyone please explain the mechanism behind it? My question is:

In the facenet.center_loss(…) implementation, the centers are updated by a scatter_sub op and the resulting tensor is returned:

```python
centers = tf.scatter_sub(centers, label, diff)
loss = tf.nn.l2_loss(features - centers_batch)
return loss, centers
```

However, the returned tensor (centers) is not used later in train_softmax.py:

```python
prelogits_center_loss, _ = facenet.center_loss(prelogits, ...)
```

So how could the centers get updated at all? Thanks.
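To make the question concrete, here is a minimal standalone sketch of what I think happens in TF1 graph mode (the shapes, the alpha value, and every name outside the quoted snippet are made up for illustration): sess.run only executes the ops that the fetched tensors depend on, and the loss is built from centers_batch (the gather), not from the scatter_sub result, so fetching the loss alone never moves the centers.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, graph mode

# Hypothetical shapes: 128-D embeddings, 10 classes.
features = tf.placeholder(tf.float32, [None, 128])
label = tf.placeholder(tf.int32, [None])
centers = tf.get_variable('centers', [10, 128], dtype=tf.float32,
                          initializer=tf.constant_initializer(0),
                          trainable=False)

centers_batch = tf.gather(centers, label)
diff = (1 - 0.95) * (centers_batch - features)         # alpha = 0.95, assumed
centers_update = tf.scatter_sub(centers, label, diff)  # the update op
loss = tf.nn.l2_loss(features - centers_batch)         # depends only on the gather

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feats = np.random.randn(4, 128).astype(np.float32)
    lbls = np.array([0, 1, 2, 3], dtype=np.int32)

    # Fetching only `loss` never runs the scatter_sub: `loss` was built from
    # centers_batch, which was gathered *before* the update.
    sess.run(loss, feed_dict={features: feats, label: lbls})
    print(sess.run(centers).sum())   # still 0.0 -- the centers never moved

    # Fetching the update op explicitly does apply it.
    sess.run(centers_update, feed_dict={features: feats, label: lbls})
    print(sess.run(centers).sum())   # typically non-zero now
```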
Issue Analytics
- Created: 6 years ago
- Reactions: 2
- Comments: 9 (1 by maintainers)
Top GitHub Comments
Hi @ugtony, @JianbangZ, thanks for spotting this! I guess the running of the center_op got dropped at some point. I added a control dependency in the center loss function on a local branch.

I'm currently training on CASIA with wd=2e-4 and cl=2e-2, and maybe the performance is slightly better after 37k iterations, but it's too early to tell. Please also note that the scaling of the center loss was changed some time ago, so the center loss factor should be multiplied by the batch size.
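For readers following along, here is a minimal sketch of what adding a control dependency in the center loss function could look like; it reuses the names from the snippet quoted in the question, but the function signature and the alfa handling are assumptions, not the actual commit:

```python
import tensorflow as tf  # TensorFlow 1.x

def center_loss(features, label, alfa, nrof_classes):
    # Sketch: tie the loss to the scatter_sub update via a control dependency,
    # so that evaluating the loss also applies the center update.
    nrof_features = features.get_shape()[1]
    centers = tf.get_variable('centers', [nrof_classes, nrof_features],
                              dtype=tf.float32,
                              initializer=tf.constant_initializer(0),
                              trainable=False)
    label = tf.reshape(label, [-1])
    centers_batch = tf.gather(centers, label)
    diff = (1 - alfa) * (centers_batch - features)
    centers = tf.scatter_sub(centers, label, diff)
    with tf.control_dependencies([centers]):
        # The loss cannot be computed until the update op has run, so
        # fetching the loss (or a total loss that includes it) now also
        # moves the centers.
        loss = tf.nn.l2_loss(features - centers_batch)
    return loss, centers
```

With the dependency in place, train_softmax.py can keep discarding the returned centers tensor: evaluating the total loss now forces the scatter_sub to run first.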
I just finished my training with center_loss_factor=2e-2 and center_loss_alpha=0.95, and the performance is pretty similar to before. Another training with center_loss_alpha=0.5 is still running, but the performance isn't better so far. Still cannot reproduce the results of the paper. 😦