
Issue with loading quantization aware trained model

See original GitHub issue

Describe the bug: Unable to load a saved model after applying quantization-aware training.

System information

  • TensorFlow version (installed from source or binary): 2.2
  • TensorFlow Model Optimization version (installed from source or binary): 0.3.0

Code to reproduce the issue Please find the gist of the code here https://gist.github.com/peri044/00a477b73d01bd08ef3410c15679a91c#file-sample-py-L47

The error occurs in the tf.keras.models.load_model() call. Replacing it with tf.saved_model.load() produces the same error. Any suggestions are appreciated. Thank you!

Error:

model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal
    export_dir)
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 134, in _load_all
    self._load_nodes()
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 264, in _load_nodes
    node, setter = self._recreate(proto, node_id)
  packages/tensorflow/python/saved_model/load.py", line 398, in _recreate_function
    proto, self._concrete_functions), setattr
  File "/home/dperi/Downloads/py3/lib/python3.6/site-packages/tensorflow/python/saved_model/function_deserialization.py", line 265, in recreate_function
    concrete_function_objects.append(concrete_functions[concrete_function_name])
KeyError: '__inference_conv2d_layer_call_and_return_conditional_losses_5068'

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 13 (3 by maintainers)

Top GitHub Comments

2 reactions
joyalbin commented, Jun 18, 2020

@peri044 Can you please try the changes below?

with tfmot.quantization.keras.quantize_scope():
    model = tf.keras.models.load_model('saved_model')

0 reactions
pinaxe1 commented, Feb 1, 2022

@Janus-Shiau @nutsiepully I hit the same issue on TF 2.7.0, but the solution proposed by @joyalbin works fine for me. See the two lines below.

with tfmot.quantization.keras.quantize_scope():
    model = tf.keras.models.load_model('saved_model')

Read more comments on GitHub >

Top Results From Across the Web

Quantization aware training comprehensive guide - TensorFlow
  Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between ...

Optimizing Models with Quantization-Aware Training in Keras
  We conclude that both Post Training Quantization and Quantization-Aware Training helped us increase the model accuracy slightly. However, with ...

PyTorch Quantization Aware Training - Lei Mao's Log Book
  # Train model. ... # Save model. ... # Load a pretrained model. ... # Move the model to CPU since static quantization...

How to upload a quantized model? - Hugging Face Forums
  The problem is quantized weights is not enough for PyTorch INT8 inference. It's a defect in PyTorch quantization implementation, which only allows on-the-fly ...

Quantization - jacinto-ai/pytorch-jacinto-ai-devkit
  Step 2: Starting from the floating point model as pretrained weights, do Quantization Aware Training. In order to do this wrap your...
