[Tensorflow QAT] AttributeError: 'NoneType' object has no attribute 'graph_def'
Environment: Google Colab
LPOT Version: 1.6
Tensorflow Version: Official 2.6.0 (with environment variables set as below)
TF_ENABLE_ONEDNN_OPTS=1
TF_ENABLE_MKL_NATIVE_FORMAT=0
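These oneDNN-related variables are read when TensorFlow initializes, so they have to be set before the first TensorFlow import. A minimal sketch of how that can be done from Python inside the notebook (the version print is only illustrative):

import os

# Set the oneDNN flags before TensorFlow is imported so the runtime picks them up.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"
os.environ["TF_ENABLE_MKL_NATIVE_FORMAT"] = "0"

import tensorflow as tf
print(tf.__version__)  # expected: 2.6.0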
I basically followed the QAT example provided here.
I took a pretrained model, annotated it so that only the Conv2D layers are quantized, trained the annotated model with model.fit() for several epochs, and saved it.
After that, I used LPOT ModelConversion to convert the model, and the following error occurred:
2021-09-10 03:07:43 [INFO] Pass Quantization elapsed time: 7581.68 ms
2021-09-10 03:07:44 [INFO] Pass FreezeFakeQuantOpOptimizer elapsed time: 283.8 ms
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py", line 534, in quantize
self._fuse_requantize_with_fused_quantized_node()
File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py", line 698, in _fuse_requantize_with_fused_quantized_node
self.device).do_transformation()
File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_rewriter/int8/fuse_conv_requantize.py", line 47, in __init__
self.graph_info = self.graph_analyzer.parse_graph()
File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_rewriter/graph_util.py", line 611, in parse_graph
each_input)].outputs.append(node_name)
KeyError: 'model_3/quant_31/StatefulPartitionedCall/StatefulPartitionedCall/MovingAvgQuantize/FakeQuantWithMinMaxVars'
2021-09-10 03:07:44 [ERROR] Fail to quantize graph due to 'model_3/quant_31/StatefulPartitionedCall/StatefulPartitionedCall/MovingAvgQuantize/FakeQuantWithMinMaxVars'.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-515087c4513a> in <module>()
4 conversion.destination = 'default'
5 conversion.model = common.Model('./q_aware_model')
----> 6 q_model = conversion()
7 q_model.save('quantized_model')
2 frames
/usr/local/lib/python3.7/dist-packages/lpot/experimental/model_conversion.py in __call__(self)
94
95 self.adaptor = FRAMEWORKS[self.framework](framework_specific_info)
---> 96 q_model = self.adaptor.convert(self._model, self._source, self._destination)
97
98 # when eval_func is None but metric or _eval_dataloader is set by yaml or code,
/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tensorflow.py in convert(self, model, source, destination)
814 fake_quant=True)
815
--> 816 return converter.convert()
817
818 @dump_elapsed_time("Pass recover model")
/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py in convert(self)
247 if len(self.bf16_ops) > 0:
248 model = self.bf16_convert()
--> 249 post_cse_graph_def = PostCseOptimizer(model.graph_def).do_transformation()
250 post_cse_graph_def.library.CopyFrom(self.model.graph_def.library)
251 model.graph_def = post_cse_graph_def
AttributeError: 'NoneType' object has no attribute 'graph_def'
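The KeyError points at a FakeQuantWithMinMaxVars node that is nested inside StatefulPartitionedCall function bodies. As a diagnostic, the FakeQuant nodes that actually end up in the saved graph can be listed with a sketch like the one below, assuming the SavedModel at './q_aware_model' still loads as a Keras model inside quantize_scope():

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# quantize_scope() registers the QuantizeWrapper objects needed to
# deserialize a QAT SavedModel back into a Keras model.
with tfmot.quantization.keras.quantize_scope():
    model = tf.keras.models.load_model('./q_aware_model')

# Trace the model into a single concrete function and dump its GraphDef.
specs = [tf.TensorSpec(t.shape, t.dtype) for t in model.inputs]
concrete = tf.function(lambda *args: model(list(args))).get_concrete_function(*specs)
graph_def = concrete.graph.as_graph_def()

# The FakeQuant ops usually sit inside the function library
# (the StatefulPartitionedCall bodies), not only in the top-level graph.
fake_quant = [n.name for n in graph_def.node if n.op.startswith('FakeQuant')]
for fn in graph_def.library.function:
    fake_quant += [f'{fn.signature.name}/{n.name}'
                   for n in fn.node_def if n.op.startswith('FakeQuant')]
print(f'Found {len(fake_quant)} FakeQuant nodes')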
My original code (simplified):
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.models.load_model('model')

# Annotate only the Conv2D layers for quantization.
def apply_quantization_to_Conv2D(layer):
    if isinstance(layer, tf.keras.layers.Conv2D):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated_model = tf.keras.models.clone_model(model, clone_function=apply_quantization_to_Conv2D)
q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
q_aware_model.summary()

# Fine-tune the quantization-aware model for several epochs, then save it.
q_aware_model.compile(optimizer='adam', loss='mse')
q_aware_model.fit(x=[X_q, X_norm_q], y=y_q,
                  batch_size=64,
                  epochs=45)
q_aware_model.save('./q_aware_model')
from lpot.experimental import ModelConversion, common
conversion = ModelConversion()
conversion.source = 'QAT'
conversion.destination = 'default'
conversion.model = common.Model('./q_aware_model')
q_model = conversion()
q_model.save('quantized_model')
Please find the model here. Thanks!
@peiwenhuang27 I used LPOT v1.6 (pip install lpot) with intel-tensorflow 2.6.0 to run your script just now, and I got the same result and model as Guoming pasted above. Our logs are:
The model we used is here. I guess maybe you used a WRONG model.
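To compare environments, the installed package versions can be double-checked with a small sketch like this (package names as mentioned above; the expected values are only what this thread reports):

import pkg_resources

# Confirm the LPOT and TensorFlow builds match the ones used for reproduction.
print(pkg_resources.get_distribution('lpot').version)              # expected: 1.6
print(pkg_resources.get_distribution('intel-tensorflow').version)  # expected: 2.6.0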