Issue with loading quantization aware trained model
Describe the bug: Unable to load the saved model after applying quantization aware training.
System information
- TensorFlow version (installed from source or binary): 2.2
- TensorFlow Model Optimization version (installed from source or binary): 0.3.0
Code to reproduce the issue: please find the gist of the code here: https://gist.github.com/peri044/00a477b73d01bd08ef3410c15679a91c#file-sample-py-L47
The error occurs at the tf.keras.models.load_model() call. If I replace it with tf.saved_model.load(), I see the same error. Any suggestions are appreciated. Thank you!
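For anyone skimming, here is a minimal sketch of the failing sequence. The toy Conv2D model is a hypothetical stand-in; the actual code is in the gist above.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model standing in for the real network (see the gist for the actual code).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Wrap the model for quantization aware training.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# ... qat_model.fit(...) ...

# Save in SavedModel format, then reload -- the reload raises the
# KeyError shown below.
qat_model.save('saved_model')
reloaded = tf.keras.models.load_model('saved_model')  # fails
```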
Error:

```
model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal
    export_dir)
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 134, in _load_all
    self._load_nodes()
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 264, in _load_nodes
    node, setter = self._recreate(proto, node_id)
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 398, in _recreate_function
    proto, self._concrete_functions), setattr
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/function_deserialization.py", line 265, in recreate_function
    concrete_function_objects.append(concrete_functions[concrete_function_name])
KeyError: '__inference_conv2d_layer_call_and_return_conditional_losses_5068'
```
@peri044 Can you please try with the changes below?
@Janus-Shiau @nutsiepully I've got the same issue on TF 2.7.0, but the solution proposed by @joyalbin works just fine for me. See the snippet below.
```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

with tfmot.quantization.keras.quantize_scope():
    model = tf.keras.models.load_model('saved_model')
```
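For background (my understanding from the TFMOT docs, not verified against the source): quantize_scope() is the documented way to deserialize quantized Keras models. It registers the quantization wrappers and configs as Keras custom objects, so load_model() can reconstruct the quantized layers; without it, deserialization cannot resolve those layers' traced functions, which is what the KeyError above is complaining about.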