Change model static shape to dynamic shape
Ask a Question
Question
I have an ONNX model converted from a Keras saved model using tf2onnx, which consists of two inputs with static shapes:
(64, 60, 257) and (64, 257, 60, 1)
I want to change the input shapes to dynamic as follows:
(?, ?, 257) and (?, 257, ?, 1)
Is there any way to achieve this using pure ONNX?
Further information
- Relevant Area (e.g. model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators): I want to use this model for real-time inference where the 1st and 3rd dimensions are both 1 (i.e. shapes = [1, 1, 257] and [1, 257, 1, 1]), but during training these dimensions are set to a fixed value.
- Is this issue related to a specific model?
  - Model name (e.g. mnist): my model link
  - Model opset (e.g. 7): 11
Notes
I have tried to change the shape with the onnx Python API like so:
import onnx

# load the model
ONNX_PATH = './model/model.onnx'
model = onnx.load(ONNX_PATH)

# input0: 64, 60, 257    -> batch_size, seq_len, 257
# input1: 64, 257, 60, 1 -> batch_size, 257, seq_len, 1
model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'batch_size'
model.graph.input[0].type.tensor_type.shape.dim[1].dim_param = 'seq_len'
model.graph.input[1].type.tensor_type.shape.dim[0].dim_param = 'batch_size'
model.graph.input[1].type.tensor_type.shape.dim[2].dim_param = 'seq_len'

# save the modified model
ONNX_PATH = './model/dynamic_model.onnx'
onnx.save(model, ONNX_PATH)
Then I tried to run inference with the following:
import onnxruntime

# create the inference session with graph optimizations disabled
so = onnxruntime.SessionOptions()
so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL
ort_session = onnxruntime.InferenceSession(ONNX_PATH, sess_options=so)
ort_session.set_providers(['CPUExecutionProvider'])

# run a prediction with a batch of 32 instead of the original 64
input_name0 = ort_session.get_inputs()[0].name
input_name1 = ort_session.get_inputs()[1].name
output_name = ort_session.get_outputs()[0].name
preds = ort_session.run([output_name], {input_name0: x[:32], input_name1: x_norm[:32]})
When running the prediction in the last line, the following error occurs:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'StatefulPartitionedCall/model/1/Conv2D__36' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{32,257,60,1}, requested shape:{64,1,257,60}
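The requested shape {64,1,257,60} suggests the original batch size 64 is baked into the target-shape constant of a Reshape node inside the graph, so relaxing only the graph inputs is not enough. A minimal sketch to confirm this by listing the constant target shapes feeding Reshape nodes (the failing node name comes from the error message; whether the shape lives in an initializer or a Constant node depends on the model):

import onnx
from onnx import numpy_helper

model = onnx.load('./model/dynamic_model.onnx')
initializers = {init.name: init for init in model.graph.initializer}
for node in model.graph.node:
    if node.op_type == 'Reshape':
        # the second input of Reshape holds the requested target shape
        shape_name = node.input[1]
        if shape_name in initializers:
            print(node.name, numpy_helper.to_array(initializers[shape_name]))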
As you said, setting the dynamic shape before conversion is the key in my case. After setting it before converting the model, inference works fine, so I am closing the issue now. Thank you for your suggestion!
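For reference, a minimal sketch of setting the dynamic shape at conversion time with the tf2onnx Python API (paths and input names below are placeholders; a None dimension in a TensorSpec becomes a dynamic dimension in the exported ONNX graph):

import tensorflow as tf
import tf2onnx

# load the original Keras model (path is a placeholder)
keras_model = tf.keras.models.load_model('./model/keras_model')

# None marks a dynamic dimension; names must match the model's inputs
input_signature = [
    tf.TensorSpec([None, None, 257], tf.float32, name='input_1'),
    tf.TensorSpec([None, 257, None, 1], tf.float32, name='input_2'),
]

tf2onnx.convert.from_keras(
    keras_model,
    input_signature=input_signature,
    opset=11,
    output_path='./model/dynamic_model.onnx',
)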
@peiwenhuang27, how did you set the dynamic shape before conversion? I have a pretrained tflite model with input shape (1, 1260, 960, 3) and I want it to be (1, -1, -1, 3). I tried to set a dynamic shape during conversion by passing the argument --inputs input_name[1,-1,-1,3] and then cleared the dim_value, but I am still facing the same issue you faced before.