Reshape operation is removed during conversion with tensorflowjs_converter
From @kimamula on April 3, 2018 17:17
TensorFlow.js Converter version: 0.1.0
Browser version: Chrome 65.0.3325.181 (64-bit)

Describe the problem or feature request
When I execute a model (i.e., call FrozenModel#execute()) that was converted by tensorflowjs_converter and loaded with loadFrozenModel(), it fails with the error "Error in matMul: inputs must be rank 2, got ranks 1 and 2". I compared the .pb files before and after the conversion and found that a Reshape operation is removed during the conversion.
- Before the conversion (showing only the relevant part of the .pb file):

Squeeze -> Reshape -> PlaceholderWithDefault -> MatMul

MobilenetV1/Logits/SpatialSqueeze     Squeeze(MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd)  squeeze_dims attr present
MobilenetV1/Predictions/Reshape/shape Const
MobilenetV1/Predictions/Reshape       Reshape(MobilenetV1/Logits/SpatialSqueeze, MobilenetV1/Predictions/Reshape/shape)
input_1/BottleneckInputPlaceholder    PlaceholderWithDefault(MobilenetV1/Predictions/Reshape)
...
final_training_ops/Wx_plus_b/MatMul   MatMul(input_1/BottleneckInputPlaceholder, final_training_ops/weights/final_weights/read)
...
- After the conversion (the Reshape disappeared):

Squeeze -> PlaceholderWithDefault -> MatMul

MobilenetV1/Logits/SpatialSqueeze   Squeeze(MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd)  squeeze_dims attr present
input_1/BottleneckInputPlaceholder  PlaceholderWithDefault(MobilenetV1/Logits/SpatialSqueeze)
final_training_ops/Wx_plus_b/MatMul MatMul(input_1/BottleneckInputPlaceholder, final_training_ops/weights/final_weights)
...
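The failure mode can be reproduced outside the graph. Below is a minimal numpy sketch (shapes chosen for illustration; the real graph uses MobileNet's class count) of why dropping the Reshape breaks the downstream MatMul: a Squeeze with no explicit dims removes the batch axis too, and MatMul then sees a rank-1 input.

```python
import numpy as np

# Output of the logits conv head: [batch, 1, 1, num_classes].
logits = np.zeros((1, 1, 1, 5))

# Squeeze without explicit dims removes every size-1 axis,
# including the batch axis, leaving a rank-1 tensor.
squeezed = np.squeeze(logits)             # shape (5,)

# Before conversion, Reshape restored the rank-2 [batch, num_classes]
# layout that the final MatMul expects.
reshaped = np.reshape(squeezed, (-1, 5))  # shape (1, 5)

weights = np.zeros((5, 3))                # [num_classes, hidden_units]
out = reshaped @ weights                  # (1, 5) x (5, 3) -> (1, 3)

print(squeezed.ndim, reshaped.shape, out.shape)  # -> 1 (1, 5) (1, 3)
```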
I confirmed that when I modify matrices_executor.ts as follows, the loaded model works as expected.
 export let executeOp: OpExecutor =
     (node: Node, tensorMap: NamedTensorsMap): tfc.Tensor[] => {
       switch (node.op) {
         case 'matMul':
+          const a = getParamValue('a', node, tensorMap) as tfc.Tensor2D;
+          const b = getParamValue('b', node, tensorMap) as tfc.Tensor2D;
+          if (a.rank === 1 && b.rank === 2) {
+            return [tfc.vectorTimesMatrix(a, b)];
+          }
           return [tfc.matMul(
+              a,
+              b,
-              getParamValue('a', node, tensorMap) as tfc.Tensor2D,
-              getParamValue('b', node, tensorMap) as tfc.Tensor2D,
               getParamValue('transposeA', node, tensorMap) as boolean,
               getParamValue('transposeB', node, tensorMap) as boolean)];
         case 'transpose':
           return [tfc.transpose(
               getParamValue('x', node, tensorMap) as tfc.Tensor,
               getParamValue('perm', node, tensorMap) as number[])];
         default:
           throw TypeError(`Node type ${node.op} is not implemented`);
       }
     };
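For intuition, here is a plain-array TypeScript sketch (no tfjs dependency; the function name just mirrors the tfc helper used in the patch above) of what a rank-1-times-rank-2 fallback computes: an ordinary vector-matrix product, equivalent to promoting the vector to shape [1, n], doing a matMul, and dropping the leading dimension again.

```typescript
// Illustrative stand-in for tfc.vectorTimesMatrix on plain arrays:
// a has shape [n], b has shape [n, m], result has shape [m].
function vectorTimesMatrix(a: number[], b: number[][]): number[] {
  if (a.length !== b.length) {
    throw new Error(
        `incompatible shapes: [${a.length}] x [${b.length},${b[0].length}]`);
  }
  const cols = b[0].length;
  const out = new Array<number>(cols).fill(0);
  for (let j = 0; j < cols; j++) {
    for (let i = 0; i < a.length; i++) {
      out[j] += a[i] * b[i][j];
    }
  }
  return out;
}

console.log(vectorTimesMatrix([1, 2], [[3, 4], [5, 6]]));  // [13, 16]
```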
Code to reproduce the bug / link to feature request

I prepared my original model by retraining MobileNet on my own categories, as described in the TensorFlow For Poets codelab. Then I converted the resulting model to the SavedModel format with the following script.
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

export_dir = 'path/to/saved_model'
graph_pb = 'path/to/original_pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.gfile.GFile(graph_pb, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name='')
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name('input:0')
    out = g.get_tensor_by_name('final_result:0')
    predict_signature = tf.saved_model.signature_def_utils.predict_signature_def(
        {'input': inp}, {'output': out})
    builder.add_meta_graph_and_variables(sess, [tag_constants.SERVING], signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature
    })

builder.save()
Finally, I converted the SavedModel with tensorflowjs_converter as follows.
tensorflowjs_converter \
--input_format=tf_saved_model \
--saved_model_tags=serve \
--output_node_names="final_result" \
path/to/saved_model \
path/to/output
Copied from original issue: tensorflow/tfjs-core#919
Top GitHub Comments

Thanks @rmlarsen. The model @kimamula shared uses a deprecated param name for the Squeeze op, which was being ignored by FrozenModel. I just added support for it, and the model now executes without error. @tushuhei, you can check out the latest converter or wait for the 0.2.0 release.
Yes. The solution was reshaping the input with [1, IMAGE_SIZE, IMAGE_SIZE, 3].