InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU. Why does this happen after onnx2keras?

See original GitHub issue

Hi again,

I have done these steps:

import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load(FILE_PATH + "mnist_test.onnx")
k_model_onnx = onnx_to_keras(onnx_model, ['input_1'], name_policy="short")
k_model_onnx.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 28, 28, 1)]  0                                            
__________________________________________________________________________________________________
adjusted (Permute)              (None, 1, 28, 28)    0           input_1[0][0]                    
__________________________________________________________________________________________________
convolut (Conv2D)               (None, 32, 26, 26)   320         adjusted[0][0]                   
__________________________________________________________________________________________________
conv2d/I (Activation)           (None, 32, 26, 26)   0           convolut[0][0]                   
__________________________________________________________________________________________________
convolut_1 (Conv2D)             (None, 64, 24, 24)   18496       conv2d/I[0][0]                   
__________________________________________________________________________________________________
conv2d_1 (Activation)           (None, 64, 24, 24)   0           convolut_1[0][0]                 
__________________________________________________________________________________________________
conv2d_1_1_pad (ZeroPadding2D)  (None, 64, 24, 24)   0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_1_1 (MaxPooling2D)       (None, 64, 12, 12)   0           conv2d_1_1_pad[0][0]             
__________________________________________________________________________________________________
conv2d_1_2 (Permute)            (None, 12, 12, 64)   0           conv2d_1_1[0][0]                 
__________________________________________________________________________________________________
flatten/ (Reshape)              (None, None)         0           conv2d_1_2[0][0]                 
__________________________________________________________________________________________________
transfor_reshape (Reshape)      (None, 9216)         0           flatten/[0][0]                   
__________________________________________________________________________________________________
transfor (Dense)                (None, 128)          1179648     transfor_reshape[0][0]           
__________________________________________________________________________________________________
biased_t_const2 (Lambda)        (128,)               0           input_1[0][0]                    
__________________________________________________________________________________________________
biased_t (Lambda)               (None, 128)          0           transfor[0][0]                   
                                                                 biased_t_const2[0][0]            
__________________________________________________________________________________________________
dense/Id (Activation)           (None, 128)          0           biased_t[0][0]                   
__________________________________________________________________________________________________
transfor_1 (Dense)              (None, 10)           1280        dense/Id[0][0]                   
__________________________________________________________________________________________________
biased_t_1_const2 (Lambda)      (10,)                0           input_1[0][0]                    
__________________________________________________________________________________________________
biased_t_1 (Lambda)             (None, 10)           0           transfor_1[0][0]                 
                                                                 biased_t_1_const2[0][0]          
__________________________________________________________________________________________________
dense_1/ (Activation)           (None, 10)           0           biased_t_1[0][0]                 
==================================================================================================
Total params: 1,199,744
Trainable params: 1,199,744
Non-trainable params: 0
__________________________________________________________________________________________________
y_pred_onnx = k_model_onnx.predict(x_test)

Result:

Tensor("model/transfor/MatMul:0", shape=(None, 128), dtype=float32) Tensor("model/biased_t_const2/Const:0", shape=(128,), dtype=float32)
Tensor("model/transfor_1/MatMul:0", shape=(None, 10), dtype=float32) Tensor("model/biased_t_1_const2/Const:0", shape=(10,), dtype=float32)
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-16-c199c87d14d2> in <module>()
----> 1 y_pred_onnx = k_model_onnx.predict(x_test)

7 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError:  Default MaxPoolingOp only supports NHWC on device type CPU
	 [[node model/conv2d_1_1/MaxPool (defined at <ipython-input-16-c199c87d14d2>:1) ]] [Op:__inference_predict_function_2104]

Function call stack:
predict_function

I have no idea what this means or why it happened. Thanks, and sorry!
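
For context, TensorFlow's CPU kernel for MaxPool only implements the channels-last (NHWC) layout, while the converted model above permutes the input to channels-first (NCHW) and pools in that format, which is exactly what the error reports. A minimal sketch reproducing the failing op in isolation (the 64×24×24 shape is taken from the summary above; everything else is illustrative):

import numpy as np
import tensorflow as tf

# Channels-first pooling, as produced by the converter above.
pool = tf.keras.layers.MaxPooling2D(pool_size=2, data_format="channels_first")
x = np.random.rand(1, 64, 24, 24).astype("float32")   # an NCHW batch

try:
    with tf.device("/CPU:0"):
        pool(x)                                        # fails on CPU
except tf.errors.InvalidArgumentError as e:
    print(e.message)   # "Default MaxPoolingOp only supports NHWC ..."

# The same op runs fine on a GPU, so predicting on a GPU machine is one
# way around the error without touching the converted model.
if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        print(pool(x).shape)                           # (1, 64, 12, 12)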

Top GitHub Comments

M-Tonin commented, Apr 16, 2020 (1 reaction)

But after this, the shapes don't match. I created the model in Keras and then saved model.h5 and model.onnx. In another project I load model.h5 and predict on x_test, and that works. Now I am trying to load model.onnx and convert it to a Keras model (change_ordering=True, name_policy="short"), and when I try to predict with this model the shapes do not match.

WARNING:tensorflow:Model was constructed with shape (None, 28, 1, 28) for input Tensor("input_1_55:0", shape=(None, 28, 1, 28), dtype=float32), but it was called on an input with incompatible shape (None, 28, 28).
WARNING:tensorflow:Model was constructed with shape (None, 28, 1, 28) for input Tensor("input_1_55:0", shape=(None, 28, 1, 28), dtype=float32), but it was called on an input with incompatible shape (None, 28, 28).
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-72-b08c625d5e00> in <module>()
----> 1 y_predas = k_model_onnx.predict(x_test)

10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    966           except Exception as e:  # pylint:disable=broad-except
    967             if hasattr(e, "ag_error_metadata"):
--> 968               raise e.ag_error_metadata.to_exception(e)
    969             else:
    970               raise

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1150 predict_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1125 predict_step  **
        return self(x, training=False)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:927 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:719 call
        convert_kwargs_to_constants=base_layer_utils.call_context().saving)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py:888 _run_internal_graph
        output_tensors = layer(computed_tensors, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:886 __call__
        self.name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:180 assert_input_compatibility
        str(x.shape.as_list()))

    ValueError: Input 0 of layer adjusted is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]

I tried to use the input_shape= argument, but it was not successful.
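
One way to get past the ndim check (a sketch, not an answer from the thread; whether the resulting axis order is semantically right still depends on how change_ordering rearranged the graph) is to reshape x_test to the 4-D input shape the converted model declares:

print(k_model_onnx.input_shape)   # the warning above reports (None, 28, 1, 28)
x_test_4d = x_test.reshape((-1,) + k_model_onnx.input_shape[1:])
y_pred_onnx = k_model_onnx.predict(x_test_4d)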

gmalivenko commented, Apr 16, 2020 (1 reaction)

Hello @M-Tonin.

InvalidArgumentError:  Default MaxPoolingOp only supports NHWC on device type CPU

There are two options to fix this:

  1. You can try to save your model and then load it using a CPU device
  2. You can try to pass the change_ordering=True argument to the converter call (it's experimental and may fail); a short sketch follows below
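
A sketch of option 2, reusing the load and convert calls from the question (change_ordering=True asks onnx2keras to rewrite the graph to channels-last, so the pooling layers become NHWC and can run on CPU; as noted above it is experimental and may fail):

import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load(FILE_PATH + "mnist_test.onnx")
k_model_onnx = onnx_to_keras(
    onnx_model,
    ['input_1'],
    name_policy="short",
    change_ordering=True,   # rewrite the converted graph to channels-last (NHWC)
)
k_model_onnx.summary()      # the MaxPooling2D layers should now be NHWC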
Read more comments on GitHub >
