How to get trainable weights?
Because I'm manually running a session, I can't seem to collect the trainable weights of a specific layer.
import numpy as np
import tensorflow as tf
from keras.layers import Convolution2D, BatchNormalization, Activation, AveragePooling2D, Dense
from keras.objectives import categorical_crossentropy

# img and labels are tf.placeholder tensors, and residual_block is a helper,
# all defined elsewhere in the class.

# Keras layers can be called on TensorFlow tensors:
x = Convolution2D(16, 3, 3, init='he_normal', border_mode='same')(img)

for i in range(0, self.blocks_per_group):
    nb_filters = 16 * self.widening_factor
    x = residual_block(x, nb_filters=nb_filters, subsample_factor=1)

for i in range(0, self.blocks_per_group):
    nb_filters = 32 * self.widening_factor
    if i == 0:
        subsample_factor = 2
    else:
        subsample_factor = 1
    x = residual_block(x, nb_filters=nb_filters, subsample_factor=subsample_factor)

for i in range(0, self.blocks_per_group):
    nb_filters = 64 * self.widening_factor
    if i == 0:
        subsample_factor = 2
    else:
        subsample_factor = 1
    x = residual_block(x, nb_filters=nb_filters, subsample_factor=subsample_factor)

x = BatchNormalization(axis=3)(x)
x = Activation('relu')(x)
x = AveragePooling2D(pool_size=(8, 8), strides=None, border_mode='valid')(x)
x = tf.reshape(x, [-1, np.prod(x.get_shape()[1:].as_list())])

# Readout layer
preds = Dense(self.nb_classes, activation='softmax')(x)

loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # the training op

with sess.as_default():
    for i in range(10):
        batch = self.next_batch(self.batch_num)
        _, l = sess.run([optimizer, loss],
                        feed_dict={img: batch[0], labels: batch[1]})
        print(l)

print(type(weights))  # `weights` is undefined here; this is exactly what I cannot obtain
I'm trying to get the weights of the last convolution layer. I tried get_trainable_weights(layer) and layer.get_weights(), but I did not manage to get anywhere. The error:

AttributeError: 'Tensor' object has no attribute 'trainable_weights'
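The error comes from reading the attribute off the wrong object: Convolution2D(...)(img) returns a TensorFlow Tensor, while trainable_weights lives on the Layer instance, so you need a handle on the layer itself rather than its output tensor. A minimal sketch of that idea (Keras 1.x API; it assumes the session was registered with K.set_session(sess) as in the guide, so the layer's variables are already initialized):

from keras import backend as K
from keras.layers import Convolution2D

K.set_session(sess)  # as in the Keras-as-a-TF-interface guide

# Keep a reference to the layer object, not just its output tensor
conv = Convolution2D(16, 3, 3, init='he_normal', border_mode='same')
x = conv(img)                  # x is a Tensor; conv is the Layer

print(conv.trainable_weights)  # [W, b] as tf.Variable objects
weight_values = sess.run(conv.trainable_weights)  # current values as numpy arrays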
@fchollet If I follow your guide on using Keras within a TensorFlow workflow (https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html), as others have, the weight variables cannot be accessed, because we never build a Model as shown in the guide; we merely use the layers. There is no need to compile when Keras is used as a simplified interface to TensorFlow. How, then, do we access the weights?
Because if we use Keras with TensorFlow as in the guide, we do not call Model or compile(), but merely use the layers to build the graph.

model.trainable_weights is the list of trainable weights of a model. Of course you should first define a model in that case. You can also retrieve that attribute separately on every layer (layer.trainable_weights).
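If you do want the full list in this session-based workflow, one option (a sketch, not taken from the thread) is to wrap the existing symbolic graph in a Model purely for bookkeeping; no compile() call is needed just to inspect weights. This assumes Keras can traverse the graph from img to preds; raw TF ops such as the tf.reshape above can break that traversal, in which case keeping direct layer references as sketched earlier is the more robust route.

from keras.models import Model

# Wrap the already-built graph; nothing is retrained or copied
model = Model(input=img, output=preds)  # Keras 1.x keyword names

print(model.trainable_weights)          # every trainable tf.Variable in the model
# Read the current values of the last convolution layer's weights:
conv_layers = [l for l in model.layers if type(l).__name__ == 'Convolution2D']
print(sess.run(conv_layers[-1].trainable_weights))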