Evaluate network in training=True configuration
See original GitHub issue

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.
TensorFlow.js version: 1.7.3
Browser version: Chrome 81.0.4044.113
Describe the problem or feature request:

I have trained a pix2pix network in Python and converted it to a tfjs graph model. In Python, I can evaluate the network with the training=True flag, which is required for the batch normalization layers to use the sample mean and variance of the current batch rather than the moving mean and variance accumulated from the training set. Is it possible to either call the model in a training configuration, or to modify the batch-norm layers to use sample statistics rather than training statistics?
There is a similar issue here: #562
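Conceptually, the question comes down to which statistics the batch-norm layer normalizes with. A minimal plain-JavaScript sketch of that difference (no tfjs here; the function and parameter names are illustrative, not the TensorFlow.js API):

```javascript
// Batch norm in training mode normalizes with the current batch's own
// statistics; in inference mode it uses the moving statistics stored
// during training. This sketch shows only that difference.

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs, m) {
  return xs.reduce((a, b) => a + (b - m) ** 2, 0) / xs.length;
}

// Normalize a batch either with its own statistics (training: true)
// or with stored moving statistics (training: false).
function batchNorm(batch, { training, movingMean = 0, movingVar = 1, eps = 1e-5 }) {
  const m = training ? mean(batch) : movingMean;
  const v = training ? variance(batch, m) : movingVar;
  return batch.map(x => (x - m) / Math.sqrt(v + eps));
}

const batch = [1, 2, 3, 4];
// Training mode: normalized with this batch's mean (2.5) and variance (1.25),
// so the output is zero-mean.
const trainOut = batchNorm(batch, { training: true });
// Inference mode: normalized with moving statistics, which may be far
// from this batch's own statistics (the mismatch pix2pix suffers from).
const inferOut = batchNorm(batch, { training: false, movingMean: 0, movingVar: 1 });
console.log(trainOut, inferOut);
```

For a pix2pix-style model, which was trained with training=True at inference time as well, the moving statistics are never the right ones to use, which is why the graph model must be invoked in training mode.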
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 4
- Comments: 19 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The solution is to use the apply method with the training kwarg set to true. Tested with TensorFlow.js 2.0:

const output = model.apply(input, {'training': true});

Now after evaluating the time: one call takes 6-8 ms for me, while the other takes 3-4 ms (50%!). While it is not a lot (assuming that model inference takes ~40 ms), it can make the difference between 24 and 25 fps 😃