
[Tensorflow QAT] AttributeError: 'NoneType' object has no attribute 'graph_def'

See original GitHub issue

Environment: Google Colab
LPOT version: 1.6
TensorFlow version: official 2.6.0, with the following environment variables set:
TF_ENABLE_ONEDNN_OPTS=1
TF_ENABLE_MKL_NATIVE_FORMAT=0
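(For reference, one way to set these variables in a Colab cell; this is just a sketch, assuming the flags have to be in place before TensorFlow is first imported in the session.)

import os

# Assumption: these flags must be exported before TensorFlow is imported
# for the oneDNN / MKL native-format behavior to pick them up.
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'
os.environ['TF_ENABLE_MKL_NATIVE_FORMAT'] = '0'

import tensorflow as tf  # imported only after the flags are set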

I basically followed the QAT example provided here. I loaded a pretrained model, annotated it so that only the Conv2D layers are quantized, ran model.fit() on the annotated model for several epochs, and saved it. I then used LPOT's ModelConversion to convert the model, and the following error occurred:

2021-09-10 03:07:43 [INFO] Pass Quantization elapsed time: 7581.68 ms
2021-09-10 03:07:44 [INFO] Pass FreezeFakeQuantOpOptimizer elapsed time: 283.8 ms
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py", line 534, in quantize
    self._fuse_requantize_with_fused_quantized_node()
  File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py", line 698, in _fuse_requantize_with_fused_quantized_node
    self.device).do_transformation()
  File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_rewriter/int8/fuse_conv_requantize.py", line 47, in __init__
    self.graph_info = self.graph_analyzer.parse_graph()
  File "/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_rewriter/graph_util.py", line 611, in parse_graph
    each_input)].outputs.append(node_name)
KeyError: 'model_3/quant_31/StatefulPartitionedCall/StatefulPartitionedCall/MovingAvgQuantize/FakeQuantWithMinMaxVars'
2021-09-10 03:07:44 [ERROR] Fail to quantize graph due to 'model_3/quant_31/StatefulPartitionedCall/StatefulPartitionedCall/MovingAvgQuantize/FakeQuantWithMinMaxVars'.
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-515087c4513a> in <module>()
      4 conversion.destination = 'default'
      5 conversion.model = common.Model('./q_aware_model')
----> 6 q_model = conversion()
      7 q_model.save('quantized_model')

2 frames
/usr/local/lib/python3.7/dist-packages/lpot/experimental/model_conversion.py in __call__(self)
     94 
     95         self.adaptor = FRAMEWORKS[self.framework](framework_specific_info)
---> 96         q_model = self.adaptor.convert(self._model, self._source, self._destination)
     97 
     98         # when eval_func is None but metric or _eval_dataloader is set by yaml or code,

/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tensorflow.py in convert(self, model, source, destination)
    814                                    fake_quant=True)
    815 
--> 816         return converter.convert()
    817 
    818     @dump_elapsed_time("Pass recover model")

/usr/local/lib/python3.7/dist-packages/lpot/adaptor/tf_utils/graph_converter.py in convert(self)
    247         if len(self.bf16_ops) > 0:
    248             model = self.bf16_convert()
--> 249         post_cse_graph_def = PostCseOptimizer(model.graph_def).do_transformation()
    250         post_cse_graph_def.library.CopyFrom(self.model.graph_def.library)
    251         model.graph_def = post_cse_graph_def

AttributeError: 'NoneType' object has no attribute 'graph_def'

My original code (simplified):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.models.load_model('model')

# Annotate only the Conv2D layers for quantization.
def apply_quantization_to_Conv2D(layer):
  if isinstance(layer, tf.keras.layers.Conv2D):
    return tfmot.quantization.keras.quantize_annotate_layer(layer)
  return layer

annotated_model = tf.keras.models.clone_model(model, clone_function=apply_quantization_to_Conv2D)

q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
q_aware_model.summary()

# X_q, X_norm_q and y_q are the training data, defined elsewhere.
q_aware_model.compile(optimizer='adam', loss='mse')
q_aware_model.fit(x=[X_q, X_norm_q], y=y_q,
                  batch_size=64,
                  epochs=45)
q_aware_model.save('./q_aware_model')

from lpot.experimental import ModelConversion, common
conversion = ModelConversion()
conversion.source = 'QAT'
conversion.destination = 'default'
conversion.model = common.Model('./q_aware_model')
q_model = conversion()
q_model.save('quantized_model')
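Not part of the original script, but a quick sanity check worth running before the conversion step: confirm that the exported SavedModel actually contains FakeQuant nodes. This is a sketch; the 'serving_default' signature key is the usual Keras default and is assumed here, and the function library is scanned because the FakeQuant ops typically sit inside the functions invoked via StatefulPartitionedCall.

import tensorflow as tf

# Load the QAT SavedModel saved above and pull out its GraphDef.
loaded = tf.saved_model.load('./q_aware_model')
graph_def = loaded.signatures['serving_default'].graph.as_graph_def()

# Collect top-level nodes plus nodes inside library functions
# (StatefulPartitionedCall bodies).
all_nodes = list(graph_def.node)
for func in graph_def.library.function:
    all_nodes.extend(func.node_def)

fake_quant_nodes = [n.name for n in all_nodes if 'FakeQuant' in n.op]
print(f'Found {len(fake_quant_nodes)} FakeQuant nodes')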

Please find model here. Thanks!

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 25 (10 by maintainers)

Top GitHub Comments

1 reaction
Zhiwei35 commented on Sep 16, 2021

@peiwenhuang27 I just ran your script with LPOT v1.6 (pip install lpot) and intel-tensorflow 2.6.0, and I got the same result and model that Guoming pasted above. Our logs are:

2021-09-16 15:20:01 [WARNING] From /home2/zhiweihu/anaconda3/envs/inteltf26/lib/python3.6/site-packages/lpot/adaptor/tf_utils/util.py:318: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph
2021-09-16 15:20:01 [INFO] Pass StripUnusedNodesOptimizer elapsed time: 215.19 ms
2021-09-16 15:20:02 [INFO] Pass GraphCseOptimizer elapsed time: 55.22 ms
2021-09-16 15:20:02 [INFO] Pass FoldBatchNormNodesOptimizer elapsed time: 53.03 ms
2021-09-16 15:20:02 [INFO] Pass UpdateEnterOptimizer elapsed time: 51.38 ms
2021-09-16 15:20:02 [INFO] Pass ConvertLeakyReluOptimizer elapsed time: 52.53 ms
2021-09-16 15:20:02 [INFO] Pass InjectDummyBiasAddOptimizer elapsed time: 55.38 ms
2021-09-16 15:20:02 [INFO] Pass ConvertAddToBiasAddOptimizer elapsed time: 52.79 ms
2021-09-16 15:20:02 [INFO] Pass FuseTransposeReshapeOptimizer elapsed time: 52.62 ms
2021-09-16 15:20:02 [INFO] Pass FuseConvWithMathOptimizer elapsed time: 52.61 ms
2021-09-16 15:20:02 [WARNING] Node name unused_control_flow_input_47 specified in yaml doesn't exist in the model.
2021-09-16 15:20:02 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-09-16 15:20:02 [INFO] Pass Pre Optimization elapsed time: 5291.59 ms
2021-09-16 15:20:02 [WARNING] Node name unused_control_flow_input_47 specified in yaml doesn't exist in the model.
2021-09-16 15:20:02 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-09-16 15:20:10 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-09-16 15:20:17 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-09-16 15:20:21.749364: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2021-09-16 15:20:21.749487: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-09-16 15:20:21.777931: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 631 nodes (370), 802 edges (528), time = 7.764ms.
  function_optimizer: function_optimizer did nothing. time = 0.219ms.

2021-09-16 15:20:22.407978: I tensorflow/core/grappler/devices.cc:75] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA or ROCm support)
2021-09-16 15:20:22.408114: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-09-16 15:20:22.575757: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1137] Optimization results for grappler item: tf_graph
  constant_folding: Graph size after: 422 nodes (-88), 506 edges (-96), time = 61.435ms.
  constant_folding: Graph size after: 422 nodes (0), 506 edges (0), time = 14.949ms.

2021-09-16 15:20:22.968985: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-09-16 15:20:22 [WARNING] From /home2/zhiweihu/anaconda3/envs/inteltf26/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:206: quantize_v2 (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25. Instructions for updating: tf.quantize_v2 is deprecated, please use tf.quantization.quantize instead.
2021-09-16 15:20:23 [INFO] Not find expected types on inputs Const, Const.
2021-09-16 15:20:24 [INFO] Pass Quantization elapsed time: 1126.1 ms
2021-09-16 15:20:24 [INFO] Pass FreezeFakeQuantOpOptimizer elapsed time: 32.31 ms
2021-09-16 15:20:24 [INFO] Pass StripUnusedNodesOptimizer elapsed time: 96.96 ms
2021-09-16 15:20:24 [INFO] Pass RemoveTrainingNodesOptimizer elapsed time: 32.91 ms
2021-09-16 15:20:24 [INFO] Pass FoldBatchNormNodesOptimizer elapsed time: 32.82 ms
2021-09-16 15:20:24 [INFO] Pass MetaOpOptimizer elapsed time: 30.93 ms
2021-09-16 15:20:24 [WARNING] Node name unused_control_flow_input_29 specified in yaml doesn't exist in the model.
2021-09-16 15:20:24 [WARNING] Found possible input node names: ['input_noisy', 'input_noisy_norm'], output node names: ['outputMask'].
2021-09-16 15:20:25 [INFO] Pass PostCseOptimizer elapsed time: 1267.08 ms
2021-09-16 15:20:25 [WARNING] From /home2/zhiweihu/anaconda3/envs/inteltf26/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
2021-09-16 15:20:25 [INFO] No assets to save.
2021-09-16 15:20:25 [INFO] No assets to write.
2021-09-16 15:20:25 [INFO] SavedModel written to: /home2/zhiweihu/work/quantized_model_lpot16/saved_model.pb
2021-09-16 15:20:25 [INFO] Save quantized model to /home2/zhiweihu/work/quantized_model_lpot16.

The model we used is here. I suspect you may have used the wrong model.

1 reaction
Zhiwei35 commented on Sep 14, 2021

@guomingz After looking at the source code, in graph_converter.py, def _fuse_requantize_with_fused_quantized_node(self), around line 691, I found that before

if self.fake_quant:
    self._tmp_graph_def = FreezeFakeQuantOpOptimizer(
        self._tmp_graph_def).do_transformation()

the node still exists in the graph, but after the transformation it disappears while still being referenced as an input by other nodes. Thus, in FuseConvRequantizeTransformer's __init__(), when the graph is parsed with parse_graph, the following loop

for node_name, node_details in self.node_name_details.items():
    # update the upper node's output information.
    for each_input in node_details.node.input:
        self.node_name_details[GraphRewriterHelper.node_name_from_input(
            each_input)].outputs.append(node_name)

will trigger a KeyError. A minimal way to surface this kind of dangling reference is sketched below.

@peiwenhuang27 OK, I get your point. I will reproduce the issue first and then give you a response.
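(Sketch, not LPOT code: a generic check over a plain GraphDef for inputs whose producer node is missing. The helper name strip_port is hypothetical and only mimics what GraphRewriterHelper.node_name_from_input does, i.e. drops the "^" control-input prefix and the ":0"-style output-port suffix.)

def strip_port(input_name):
    # "^name" marks a control input, "name:1" selects an output port.
    return input_name.lstrip('^').split(':')[0]

def find_dangling_inputs(graph_def):
    # Return (consumer, missing_input) pairs for inputs whose producer
    # node is not present in the graph.
    existing = {node.name for node in graph_def.node}
    dangling = []
    for node in graph_def.node:
        for each_input in node.input:
            if strip_port(each_input) not in existing:
                dangling.append((node.name, each_input))
    return dangling

Run on the graph right after FreezeFakeQuantOpOptimizer, a check like this should point at the missing FakeQuantWithMinMaxVars producer that causes the KeyError above.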

Read more comments on GitHub >

Top Results From Across the Web

module 'tensorflow' has no attribute 'GraphDef' - Stack Overflow
od_graph_def = tf.GraphDef() raises AttributeError: module 'tensorflow' has no attribute 'GraphDef' ...
Read more >
AttributeError: 'NoneType' object has no attribute 'take'
I want to train an object detector based on the Train a salad detector with TensorFlow Lite Model Maker notebook, but I'm using...
Read more >
tf.saved_model.save | TensorFlow v2.11.0
This is a reserved attribute: tf.saved_model.save on an object with a custom .signatures attribute will raise an exception.
Read more >
tf.keras.callbacks.TensorBoard | TensorFlow v2.11.0
TensorBoard is a visualization tool provided with TensorFlow. This callback logs events for TensorBoard, including: Metrics summary plots; Training graph ...
Read more >
tf.train.Checkpoint | TensorFlow v2.11.0
Unlike assert_consumed , this assertion will pass if values in the checkpoint have no corresponding Python objects. For example a tf.keras.Layer ...
Read more >
