How to change regularization parameters during training?
Hi all,
I am trying to implement a flexible regularization scheduler. I instantiate a layer like this:
x = Convolution2D(... W_regularizer=l2(10)...)
and later change the regularization:
model.layers[1].W_regularizer = l2(0)
I can verify that the layer's setting has changed:
model.layers[1].W_regularizer.l2
Out[9]: array(0.0, dtype=float32)
but this has no effect on the subsequent training, whether or not I recompile the model. What is the caveat?
Hi Alexander,
The hyperparameters are built into the training function when you compile, so editing the model after compilation won't affect your current training. You will see the same issue if you try to change learning rates or other hyperparameters.
The way to modify hyperparameters during training is to use backend variables in the training function and update those variables during training.
The L1L2Regularizer isn't using variables, but it should be. See https://github.com/fchollet/keras/blob/master/keras/regularizers.py. Change:
self.l2 = K.cast_to_floatx(l2)
to:
self.l2 = K.variable(K.cast_to_floatx(l2))
Instantiate as before, but hold a reference to the regularizer. During training, update the variable reg.l2:
K.set_value(reg.l2, K.cast_to_floatx(100))
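Putting those pieces together, here is a minimal sketch of the approach. It assumes the Keras 2-style Regularizer interface, where __call__ receives the weight tensor and returns the penalty (the thread's W_regularizer= keyword is Keras 1.x), and the class name VariableL2 is hypothetical:

from keras import backend as K
from keras.regularizers import Regularizer

class VariableL2(Regularizer):
    """L2 regularizer whose coefficient is a backend variable,
    so it can be updated without recompiling the model."""
    def __init__(self, l2=0.01):
        # A variable (not a constant) becomes a node in the compiled
        # graph, so changing its value changes the training loss.
        self.l2 = K.variable(K.cast_to_floatx(l2))

    def __call__(self, x):
        return K.sum(self.l2 * K.square(x))

    def get_config(self):
        return {'l2': float(K.get_value(self.l2))}

# Hold a reference to the regularizer when building the model:
reg = VariableL2(10.0)
# x = Convolution2D(..., W_regularizer=reg, ...)

# Later, during training (e.g. from a callback):
K.set_value(reg.l2, K.cast_to_floatx(0.0))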
You might want to open a pull request to make l1/l2 into variables.
Cheers, Ben
Hi,
I've been taking inspiration from this conversation and came up with a solution that works extremely well:
1st: extend Regularizer with a custom l1/l2 regularizer class (do not call it L1L2, because shadowing the built-in name breaks serialization, i.e. saving and reloading your model); it should go something like the sketch after this list.
2nd: register your custom object so that, when you want to export your model, you won't have any issues reloading it.
3rd: update your variable using the custom object's set_l1_l2 method, accessing the object through the Keras model.
Done.
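The code itself did not survive in the thread, so here is a minimal sketch of the three steps. The class name L1L2_m and its set_l1_l2 method come from the discussion below; the exact method bodies, and the kernel_regularizer attribute used to reach the object, are Keras 2-era assumptions:

from keras import backend as K
from keras.regularizers import Regularizer
from keras.utils.generic_utils import get_custom_objects

# 1st: a custom regularizer whose coefficients are backend variables.
class L1L2_m(Regularizer):
    def __init__(self, l1=0.0, l2=0.01):
        self.l1 = K.variable(K.cast_to_floatx(l1), name='l1')
        self.l2 = K.variable(K.cast_to_floatx(l2), name='l2')

    def set_l1_l2(self, l1, l2):
        # Update the variables in place; no recompilation is needed.
        K.set_value(self.l1, K.cast_to_floatx(l1))
        K.set_value(self.l2, K.cast_to_floatx(l2))

    def __call__(self, x):
        regularization = K.sum(self.l1 * K.abs(x))
        regularization += K.sum(self.l2 * K.square(x))
        return regularization

    def get_config(self):
        return {'l1': float(K.get_value(self.l1)),
                'l2': float(K.get_value(self.l2))}

# 2nd: register the custom object so save/reload round-trips cleanly.
get_custom_objects().update({'L1L2_m': L1L2_m})

# 3rd: update the variables through the object held by the model.
# model.layers[1].kernel_regularizer.set_l1_l2(0.0, 0.0)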
But wait, can't I access the variables from
tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
which is, by the way, the same collection as model.trainable_variables? Yes, you could, but I strongly suggest that you don't.
Why? Because at declaration time your variable scope depends on where you defined the L1L2_m, for example within a convolutional layer named Conv1. So if you look into the graph, you'll find variable scopes that look something like Conv1/L1L2/l1 ... or Conv10/L1L2/l1 ... But the Keras deserializer does not work like that: if you save and reload your model (json or h5 format), you'll find them all grouped together as L1L2_1/l1, L1L2_2/l1, and so on.
Using the object-reference method set_l1_l2 gives the same result every time, even with a different graph representation.