Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

Error converting model from keras using tensorflow-gpu backend

See original GitHub issue

$ pip list
onnx            1.4.1
onnxmltools     1.3.2
tensorflow      1.13.1
tensorflow-gpu  1.13.1

$ conda list
keras           2.2.4

import onnxmltools
from keras.layers import Input, Dense, Add
from keras.models import Model

# N: batch size, C: sub-model input dimension, D: sub-model output dimension (C == D here)
N, C, D = 2, 3, 3

# Define a sub-model, it will become a part of our final model
sub_input1 = Input(shape=(C,))
sub_mapped1 = Dense(D)(sub_input1)
sub_model1 = Model(inputs=sub_input1, outputs=sub_mapped1)

# Define another sub-model, it will become a part of our final model
sub_input2 = Input(shape=(C,))
sub_mapped2 = Dense(D)(sub_input2)
sub_model2 = Model(inputs=sub_input2, outputs=sub_mapped2)

# Define a model built upon the previous two sub-models
input1 = Input(shape=(D,))
input2 = Input(shape=(D,))
mapped1_2 = sub_model1(input1)
mapped2_2 = sub_model2(input2)
sub_sum = Add()([mapped1_2, mapped2_2])
keras_model = Model(inputs=[input1, input2], outputs=sub_sum)

# Convert it! The target_opset parameter is optional.
onnx_model = onnxmltools.convert_keras(keras_model, target_opset=7) 
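For reference, the model being converted just sums two affine projections (one Dense layer per sub-model). A numpy sketch of the forward pass, with hypothetical random weights standing in for the layers' learned parameters:

```python
import numpy as np

# Same dimensions as the Keras snippet above.
N, C, D = 2, 3, 3

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((C, D)), rng.standard_normal(D)  # sub_model1's Dense
W2, b2 = rng.standard_normal((C, D)), rng.standard_normal(D)  # sub_model2's Dense

x1 = rng.standard_normal((N, D))  # input1
x2 = rng.standard_normal((N, D))  # input2

# Equivalent of Add()([sub_model1(input1), sub_model2(input2)])
y = (x1 @ W1 + b1) + (x2 @ W2 + b2)
print(y.shape)  # (2, 3)
```

This is only a shape-level sketch of the computation, not the actual Keras graph; the conversion failure below happens regardless of what the model computes.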

Error:

C:\Users\intel\Anaconda3\envs\denoiser\lib\site-packages\ipykernel_launcher.py:24: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=[<tf.Tenso..., outputs=Tensor("ad...)`
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-13c32f7fd452> in <module>
     25 
     26 # Convert it! The target_opset parameter is optional.
---> 27 onnx_model = onnxmltools.convert_keras(keras_model, target_opset=7)

~\AppData\Roaming\Python\Python36\site-packages\onnxmltools\convert\main.py in convert_keras(model, name, initial_types, doc_string, target_opset, targeted_onnx, channel_first_inputs, custom_conversion_functions, custom_shape_calculators, default_batch_size)
     30 
     31     from keras2onnx import convert_keras as convert
---> 32     return convert(model, name, doc_string, target_opset, channel_first_inputs)
     33 
     34 

~\AppData\Roaming\Python\Python36\site-packages\keras2onnx\main.py in convert_keras(model, name, doc_string, target_opset, channel_first_inputs, debug_mode, custom_op_conversions)
     92     tf_graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names=out_node)
     93     return _convert_tf(name, tf_graph_def, op_dict, output_names, target_opset, doc_string, channel_first_inputs,
---> 94                        debug_mode, custom_op_conversions)
     95 
     96 

~\AppData\Roaming\Python\Python36\site-packages\keras2onnx\main.py in _convert_tf(name, tf_graph_def, keras_op_table, output_names, target_opset, doc_string, channel_first_inputs, debug_mode, custom_op_conversions)
     64         topology.compile()
     65 
---> 66         return convert_topology(topology, name, doc_string, target_opset, channel_first_inputs)
     67 
     68 

~\AppData\Roaming\Python\Python36\site-packages\keras2onnx\topology.py in convert_topology(topology, model_name, doc_string, target_opset, channel_first_inputs)
    232         scope = next(scope for scope in topology.scopes if scope.name == operator.scope)
    233         keras2onnx_logger().debug("Converting the operator (%s): %s" % (operator.full_name, operator.type))
--> 234         get_converter(operator.type)(scope, operator, container)
    235 
    236     # When calling ModelComponentContainer's add_initializer(...), nothing is added into the input list. However, in

~\AppData\Roaming\Python\Python36\site-packages\keras2onnx\wrapper.py in tfnode_convert(varset, operator, container)
     51     # create input_tensor_values, initializers
     52     # if initilizer is not used as input by any node, then it will be ignored
---> 53     initializers = [i for i in list(g.initializers.values()) if i.name in all_inputs]
     54     for init_tensor_ in initializers:
     55         init_tensor_.name = varset.get_local_variable_or_declare_one(init_tensor_.name).full_name.encode('utf-8')

AttributeError: 'Graph' object has no attribute 'initializers'
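The failure is a version mismatch between keras2onnx and tensorflow-onnx (see the maintainer's comment below). As a convenience when diagnosing this kind of mismatch, here is a small helper (not from the thread) that reports which of the relevant packages are installed; it uses `importlib.metadata`, which requires Python 3.8+ — on the Python 3.6 environment in the thread, `pip show keras2onnx` gives the same information:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for name in ("onnx", "onnxmltools", "keras2onnx", "tf2onnx"):
    print(name, installed_version(name) or "not installed")
```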

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

2 reactions
jiafatom commented on Mar 18, 2019

@arielbernal @cyroxx @cwx123147 We just updated keras2onnx to the latest version; please reinstall. Thanks. Below is my earlier suggested workaround, which you can now ignore.

Please install keras2onnx from its GitHub repository: https://github.com/onnx/keras-onnx.git. Then it should work.

The reason is that keras2onnx uses an API from tensorflow-onnx, and that API made a breaking change, which causes this AttributeError. We have fixed it in keras2onnx and will publish a new PyPI release soon. Until then, please install keras2onnx from GitHub as mentioned above.
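Installing straight from the repository would typically use pip's VCS support; the exact command below is an assumption based on the repository URL given in the comment, not a command from the thread:

```shell
# Install keras2onnx directly from GitHub, bypassing the (then-broken) PyPI release.
pip install --upgrade git+https://github.com/onnx/keras-onnx.git
```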

0 reactions
arielbernal commented on Mar 19, 2019

@jiafatom @cyroxx Awesome, confirmed working!


Top Results From Across the Web

How ensure that Keras is using GPU with tensorflow backend?
Another thing you can try is to force a GPU device with: with tf.device('/gpu:0'): before declaring your model. – Daniel Möller. Apr 21,...

Keras load model throwing no custom loss function while ...
I installed tf-nightly but it is the same. I just want to load the model to see the model input and output names....

Keras getting an error when using TensorFlow-gpu
I'm trying to run Keras with TensorFlow as a backend, but I want to run it on my GPU. I installed TensorFlow-gpu, CUDA...

tf.keras.backend.clear_session | TensorFlow v2.11.0
Used in the notebooks. Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer...

Keras: the Python deep learning API
Take advantage of the full deployment capabilities of the TensorFlow platform. You can export Keras models to JavaScript to run directly in the...
