Converting model to frozen pb causes original model to go into an "Invalid State"
Issue Type
Support
Source
binary
Tensorflow Version
2.4 / 2.9.2 (Occurs on both)
Custom Code
No
OS Platform and Distribution
Linux Ubuntu 18.04
Python version
3.8
Current Behaviour?
I am currently trying to convert a TensorFlow 2 Keras model into a TensorFlow 1 frozen pb. My code accomplishes this and freezes the model correctly: I create the model, save it as an h5 file, load that h5 as a separate model, and freeze the loaded copy.
However, if I load and freeze the model and then continue using the original (untouched) model, the original is put into an "Invalid State". I have checked whether the Keras backend session is being confused, and whether the two models share a reference, but they are entirely separate objects.
It is as if the original model and the loaded model were the same one. I am not sure whether this is a bug or, more likely, user error.
Standalone code to reproduce the issue
# create, compile, train original model | or load original model
original_model.save('original_model.h5', save_format='h5')
convert_to_pb('original_model.h5')
original_model.predict(inp) # Error occurs here
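If the tf.compat.v1 session-based conversion must be kept as-is, one way to protect the original model (a sketch of a workaround, not something from this thread) is to run the conversion in a separate Python process, so whatever global Keras/session state it mutates dies with that process. `run_isolated` below is a hypothetical helper name:

```python
import subprocess
import sys

def run_isolated(code):
    # Execute `code` in a fresh Python interpreter. Any tf.compat.v1
    # session/graph state the conversion creates is confined to the child
    # process, so Keras models in the current process cannot be invalidated.
    return subprocess.run([sys.executable, "-c", code], check=False).returncode
```

For example, if the convert script were saved as `convert_script.py` (a hypothetical filename), calling `run_isolated("from convert_script import convert_to_pb; convert_to_pb('original_model.h5')")` would freeze the model without touching the parent process, and `original_model.predict(inp)` should keep working afterwards.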
-----------------------------------------
# Convert Script
import tensorflow as tf

def convert_to_pb(h5_file):
    with tf.compat.v1.keras.backend.get_session() as sess:
        model = tf.keras.models.load_model(h5_file)
        graph = sess.graph
        output_names = [out.op.name for out in model.outputs]
        input_graph_def = graph.as_graph_def()
        # Clear device assignments so the frozen graph is portable
        for node in input_graph_def.node:
            node.device = ""
        frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, input_graph_def, output_names)
        frozen_graph = tf.compat.v1.graph_util.remove_training_nodes(frozen_graph)
        tf.io.write_graph(frozen_graph, '.', 'frozen_model.pb', as_text=False)
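For reference, a TF2-native way to freeze the graph avoids the v1 session entirely by tracing the model into a ConcreteFunction and inlining its variables with `convert_variables_to_constants_v2` (an internal but widely used API). A minimal sketch, assuming a single-input model; `convert_to_pb_v2` is a name chosen here, not from the original script:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

def convert_to_pb_v2(h5_file, out_name="frozen_model.pb"):
    # Load the model in eager mode: no tf.compat.v1 session is opened,
    # so other live Keras models in this process are left alone.
    model = tf.keras.models.load_model(h5_file)
    # Trace the forward pass into a ConcreteFunction using the model's
    # own input signature.
    specs = [tf.TensorSpec(inp.shape, inp.dtype) for inp in model.inputs]
    concrete = tf.function(lambda x: model(x)).get_concrete_function(*specs)
    # Inline all variables as constants and write out the frozen GraphDef.
    frozen = convert_variables_to_constants_v2(concrete)
    tf.io.write_graph(frozen.graph.as_graph_def(), ".", out_name, as_text=False)
    return out_name
```

Because this never enters a `tf.compat.v1` session context, the original in-memory model should remain usable after the conversion.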
A gist can be found here with the code.
Relevant log output
ValueError: Your Layer or Model is in an invalid state. This can happen for the following cases:
1. You might be interleaving estimator/non-estimator models or interleaving models/layers made in tf.compat.v1.Graph.as_default() with model/layers created outside of it. Converting a model to an estimator (via model_to_estimator) invalidates all models/layers made before the conversion (even if they were not the model converted to an estimator). Similarly, making a layer or a model inside a tf.compat.v1.Graph invalidates all layers/models you previously made outside of the graph.
2. You might be using a custom keras layer implementation with custom __init__ which didn't call super().__init__. Please check the implementation of <class 'tensorflow.python.keras.layers.convolutional.Conv2D'> and its bases.
Issue Analytics
- Created a year ago
- Comments: 10
Top GitHub Comments
@SuryanarayanaY I was able to replicate the issue on colab, please find the gist here. Thank you!
@SuryanarayanaY Hi there. I finally got the config portion working on my custom layers, and this issue still persists. Do you have another solution? Thanks!