Shouldn't model.trainable = False freeze the weights under the model?
I am trying to freeze the pre-trained VGG16's layers ('conv_base' below) and add new layers on top of them for feature extraction. I expect to get the same prediction results from 'conv_base' before (ret1) and after (ret2) fitting the model, but they differ. Is this the wrong way to check weight freezing?
import numpy as np
from keras import applications, layers, models

# load VGG16 and mark it as untrainable
conv_base = applications.VGG16(weights='imagenet', include_top=False,
                               input_shape=(150, 150, 3))
conv_base.trainable = False

# prediction before fitting the model
ret1 = conv_base.predict(np.ones([1, 150, 150, 3]))

# add layers on top of VGG16 and compile a model
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile('rmsprop', 'binary_crossentropy', ['accuracy'])

# fit the model (train_generator / validation_generator defined elsewhere)
model.fit_generator(train_generator, 100,
                    validation_data=validation_generator,
                    validation_steps=50)

# prediction after fitting the model
ret2 = conv_base.predict(np.ones([1, 150, 150, 3]))

# I hope this is True, but it is not
np.array_equal(ret1, ret2)
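Incidentally, a more direct way to verify freezing is to compare the weights themselves rather than predictions. A minimal sketch, assuming the same conv_base, model, and generators as above:

# snapshot the VGG16 weights before training
w_before = conv_base.get_weights()
model.fit_generator(train_generator, 100,
                    validation_data=validation_generator,
                    validation_steps=50)
w_after = conv_base.get_weights()

# True only if no VGG16 weight changed during training
print(all(np.array_equal(b, a) for b, a in zip(w_before, w_after)))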
I fixed it: https://github.com/fchollet/keras/commit/c25fa38deb4efc5445f64af3ec17eae0eb660d2f
If you set model.trainable = False, shouldn't it make layer.trainable False for all layers?
I am still getting True for all layers.
Am I missing something?
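For reference, both observations can hold at once: freezing the container empties its trainable_weights, while the per-layer trainable flags can still read True (the exact behaviour depends on the Keras version, and is what the commit above changes). A minimal sketch illustrating the distinction, with an explicit per-layer loop that sidesteps the ambiguity:

from keras import applications

conv_base = applications.VGG16(weights='imagenet', include_top=False,
                               input_shape=(150, 150, 3))
conv_base.trainable = False

# the container exposes no trainable weights once frozen
print(len(conv_base.trainable_weights))  # expected: 0

# but the per-layer flags may still read True in older Keras versions
print(any(layer.trainable for layer in conv_base.layers))

# freezing every layer explicitly avoids relying on propagation
for layer in conv_base.layers:
    layer.trainable = False
print(any(layer.trainable for layer in conv_base.layers))  # False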