InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU. Why does this happen after onnx2keras?
Hi again,
I have done these steps:
import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load(FILE_PATH+"mnist_test.onnx")
k_model_onnx = onnx_to_keras(onnx_model, ['input_1'], name_policy="short")
k_model_onnx.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 28, 28, 1)] 0
__________________________________________________________________________________________________
adjusted (Permute) (None, 1, 28, 28) 0 input_1[0][0]
__________________________________________________________________________________________________
convolut (Conv2D) (None, 32, 26, 26) 320 adjusted[0][0]
__________________________________________________________________________________________________
conv2d/I (Activation) (None, 32, 26, 26) 0 convolut[0][0]
__________________________________________________________________________________________________
convolut_1 (Conv2D) (None, 64, 24, 24) 18496 conv2d/I[0][0]
__________________________________________________________________________________________________
conv2d_1 (Activation) (None, 64, 24, 24) 0 convolut_1[0][0]
__________________________________________________________________________________________________
conv2d_1_1_pad (ZeroPadding2D) (None, 64, 24, 24) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_1_1 (MaxPooling2D) (None, 64, 12, 12) 0 conv2d_1_1_pad[0][0]
__________________________________________________________________________________________________
conv2d_1_2 (Permute) (None, 12, 12, 64) 0 conv2d_1_1[0][0]
__________________________________________________________________________________________________
flatten/ (Reshape) (None, None) 0 conv2d_1_2[0][0]
__________________________________________________________________________________________________
transfor_reshape (Reshape) (None, 9216) 0 flatten/[0][0]
__________________________________________________________________________________________________
transfor (Dense) (None, 128) 1179648 transfor_reshape[0][0]
__________________________________________________________________________________________________
biased_t_const2 (Lambda) (128,) 0 input_1[0][0]
__________________________________________________________________________________________________
biased_t (Lambda) (None, 128) 0 transfor[0][0]
biased_t_const2[0][0]
__________________________________________________________________________________________________
dense/Id (Activation) (None, 128) 0 biased_t[0][0]
__________________________________________________________________________________________________
transfor_1 (Dense) (None, 10) 1280 dense/Id[0][0]
__________________________________________________________________________________________________
biased_t_1_const2 (Lambda) (10,) 0 input_1[0][0]
__________________________________________________________________________________________________
biased_t_1 (Lambda) (None, 10) 0 transfor_1[0][0]
biased_t_1_const2[0][0]
__________________________________________________________________________________________________
dense_1/ (Activation) (None, 10) 0 biased_t_1[0][0]
==================================================================================================
Total params: 1,199,744
Trainable params: 1,199,744
Non-trainable params: 0
__________________________________________________________________________________________________
y_pred_onnx = k_model_onnx.predict(x_test)
Result:
Tensor("model/transfor/MatMul:0", shape=(None, 128), dtype=float32) Tensor("model/biased_t_const2/Const:0", shape=(128,), dtype=float32)
Tensor("model/transfor_1/MatMul:0", shape=(None, 10), dtype=float32) Tensor("model/biased_t_1_const2/Const:0", shape=(10,), dtype=float32)
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-16-c199c87d14d2> in <module>()
----> 1 y_pred_onnx = k_model_onnx.predict(x_test)
7 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model/conv2d_1_1/MaxPool (defined at <ipython-input-16-c199c87d14d2>:1) ]] [Op:__inference_predict_function_2104]
Function call stack:
predict_function
I have no idea what this means or why it happened. Thanks and sorry!
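For context, a minimal sketch (my own illustration, assuming a CPU-only TensorFlow 2.x build) that reproduces the same error outside the converter: the converted graph runs the pooling layer in channels-first (NCHW) layout, and TensorFlow's default CPU MaxPool kernel only implements NHWC, so any channels-first MaxPooling2D fails the same way on CPU:

import tensorflow as tf

# Assumed setup: CPU-only TensorFlow, no GPU kernels available.
x = tf.random.normal((1, 64, 24, 24))  # channels-first batch, like the converted graph
pool = tf.keras.layers.MaxPooling2D(pool_size=2, data_format="channels_first")
with tf.device("/CPU:0"):
    y = pool(x)  # raises: Default MaxPoolingOp only supports NHWC on device type CPU

On a machine with a GPU (or with layout-aware oneDNN kernels) the same call can succeed, which is why such a converted model may run elsewhere but fail on plain CPU.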
But after this, the shape doesn’t match. I created the model in Keras, then saved model.h5 and model.onnx. In another project I loaded model.h5 and ran predict on x_test, and that works fine. Now I am trying to load model.onnx, convert it to a Keras model (change_ordering=True, name_policy="short"), and when I try to predict with this model I get a shape mismatch.
I tried to use the input_shape= argument, but it was not successful.
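One thing worth checking here (a hedged sketch, not from the original thread): with change_ordering=True the converter is supposed to produce a channels-last model, so x_test has to match k_model_onnx.input_shape before predict() is called. If the test data is stored channels-first, moving the channel axis to the end usually resolves this kind of mismatch:

import numpy as np

print(k_model_onnx.input_shape)  # expected e.g. (None, 28, 28, 1) after change_ordering=True

# Hypothetical adjustment: only needed if x_test is channels-first, e.g. (N, 1, 28, 28).
if x_test.ndim == 4 and x_test.shape[1] == 1:
    x_test = np.transpose(x_test, (0, 2, 3, 1))  # -> (N, 28, 28, 1)

y_pred_onnx = k_model_onnx.predict(x_test)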
Hello @M-Tonin.
There are 2 options to fix this:
- Add the change_ordering=True argument to the converter call (it's experimental and may fail; a sketch of this call follows below)
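A minimal sketch of that first option, reusing the file path and input name from the issue above (illustrative only, not a guaranteed fix):

import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load(FILE_PATH + "mnist_test.onnx")

# change_ordering=True rewrites the graph to channels-last (NHWC), the only
# layout the default CPU MaxPool kernel supports.
k_model_onnx = onnx_to_keras(
    onnx_model,
    ['input_1'],
    name_policy="short",
    change_ordering=True,
)
k_model_onnx.summary()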