setWeights fails with Layer weight shape not compatible error
TensorFlow.js version: 1.2.2
Node version: v11.13.0
Describe the problem or feature request
Attempting to copy weights from one network to another fails with the following error:
(node:16764) UnhandledPromiseRejectionWarning: Error: Layer weight shape 128 not compatible with provided weight shape 3,3,128,256
Code to reproduce the bug / link to feature request
Encountered this error while running the snake-dqn example app. The error is thrown from the following method in dqn.js:

export function copyWeights(destNetwork, srcNetwork) {
  destNetwork.setWeights(srcNetwork.getWeights());
}
Traced the error to the setWeights method in tfjs-layers/src/engine/topology.js.
It appears that the list of weights passed in from getWeights() is in a different order than the list returned by this.weights:
weights contains:
0 -> 3,3,2,128 conv2d_Conv2D1/kernel
1 -> 128 conv2d_Conv2D1/bias
2 -> 128 batch_normalization_BatchNormalization1/gamma
3 -> 128 batch_normalization_BatchNormalization1/beta
4 -> 3,3,128,256 conv2d_Conv2D2/kernel
5 -> 256 conv2d_Conv2D2/bias
6 -> 256 batch_normalization_BatchNormalization2/gamma
7 -> 256 batch_normalization_BatchNormalization2/beta
8 -> 3,3,256,256 conv2d_Conv2D3/kernel
9 -> 256 conv2d_Conv2D3/bias
10 -> 2304,100 dense_Dense1/kernel
11 -> 100 dense_Dense1/bias
12 -> 100,3 dense_Dense2/kernel
13 -> 3 dense_Dense2/bias
14 -> 128 batch_normalization_BatchNormalization1/moving_mean
15 -> 128 batch_normalization_BatchNormalization1/moving_variance
16 -> 256 batch_normalization_BatchNormalization2/moving_mean
17 -> 256 batch_normalization_BatchNormalization2/moving_variance
paramValues contains:
0 -> 3,3,2,128 conv2d_Conv2D4/kernel
1 -> 128 conv2d_Conv2D4/bias
2 -> 128 batch_normalization_BatchNormalization3/gamma
3 -> 128 batch_normalization_BatchNormalization3/beta
4 -> 128 batch_normalization_BatchNormalization3/moving_mean
5 -> 128 batch_normalization_BatchNormalization3/moving_variance
6 -> 3,3,128,256 conv2d_Conv2D5/kernel
7 -> 256 conv2d_Conv2D5/bias
8 -> 256 batch_normalization_BatchNormalization4/gamma
9 -> 256 batch_normalization_BatchNormalization4/beta
10 -> 256 batch_normalization_BatchNormalization4/moving_mean
11 -> 256 batch_normalization_BatchNormalization4/moving_variance
12 -> 3,3,256,256 conv2d_Conv2D6/kernel
13 -> 256 conv2d_Conv2D6/bias
14 -> 2304,100 dense_Dense3/kernel
15 -> 100 dense_Dense3/bias
16 -> 100,3 dense_Dense4/kernel
17 -> 3 dense_Dense4/bias
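The two orderings above can be reproduced with a plain-JavaScript sketch. This is an assumed simplification, not the actual tfjs-layers source, and the layer names are illustrative; it only models the observation that a trainable model groups its weights (trainable first, non-trainable last) while a frozen model reports them layer by layer:

```javascript
// Plain-JS sketch (not the actual tfjs-layers source) of why the two
// orderings differ. Each entry models one layer's trainable and
// non-trainable weight names; the names are illustrative.
const layers = [
  { trainable: ['conv1/kernel', 'conv1/bias'], nonTrainable: [] },
  { trainable: ['bn1/gamma', 'bn1/beta'],
    nonTrainable: ['bn1/moving_mean', 'bn1/moving_variance'] },
  { trainable: ['conv2/kernel', 'conv2/bias'], nonTrainable: [] },
  { trainable: ['bn2/gamma', 'bn2/beta'],
    nonTrainable: ['bn2/moving_mean', 'bn2/moving_variance'] },
];

// A trainable model reports its weights grouped: every layer's trainable
// weights first, then all the non-trainable moving statistics.
const trainableModelOrder = [
  ...layers.flatMap(l => l.trainable),
  ...layers.flatMap(l => l.nonTrainable),
];

// In a frozen (trainable = false) model all weights count as
// non-trainable, so they come out layer by layer, with each layer's
// moving statistics directly after its gamma/beta.
const frozenModelOrder = layers.flatMap(
  l => [...l.trainable, ...l.nonTrainable]);

// The two sequences diverge at index 4: the frozen model expects
// bn1/moving_mean (shape 128) where the trainable model supplies
// conv2/kernel (shape 3,3,128,256) -- matching the reported error.
console.log(trainableModelOrder[4], frozenModelOrder[4]);
// → conv2/kernel bn1/moving_mean
```

The divergence at index 4 lines up with the error message: an expected shape of 128 (a moving mean) against a provided shape of 3,3,128,256 (the second conv kernel).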
Top GitHub Comments
I agree, but I’m not the maintainer. After some debugging, I traced it down to the fact that models which are marked as untrainable return their weights in a different order than trainable models. You can fix the example code by removing the call which marks the model as untrainable.
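Beyond removing that call, the same idea can be applied inside copyWeights itself: temporarily align the trainable flag of both networks so that getWeights() and setWeights() agree on weight order. A hedged sketch, assuming both networks have identical architectures (not an official fix):

```javascript
// Copy weights between two models whose trainable flags may differ.
// Aligning the flags makes both sides enumerate weights in the same
// order; the destination's original flag is restored afterwards.
function copyWeights(destNetwork, srcNetwork) {
  let originalDestTrainable;
  if (destNetwork.trainable !== srcNetwork.trainable) {
    originalDestTrainable = destNetwork.trainable;
    destNetwork.trainable = srcNetwork.trainable;
  }

  destNetwork.setWeights(srcNetwork.getWeights());

  // Restore the destination's trainable flag if we changed it.
  if (originalDestTrainable != null) {
    destNetwork.trainable = originalDestTrainable;
  }
}
```

With this version, copying from a trainable online network into a frozen target network no longer trips the shape check, because the target briefly enumerates its weights in the same grouped order as the source.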
I was thinking of submitting a PR for the tfjs-examples repository so at least the example code might work out of the box.
This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there.