
Quantizer cannot quantize HBONet

See original GitHub issue

Hi. When we use INC to convert HBONet, we get an exception. We hope you can help us fix it.

Version info:

  • neural-compressor: 1.13.1
  • torch: 1.12.1

We use INC like this: https://github.com/intel-analytics/BigDL/blob/2fdd7254f80810c2dab5a2e7e840872bfd59de76/python/nano/src/bigdl/nano/deps/neural_compressor/core/quantization.py#L104

The exception is as follows:

Traceback (most recent call last):
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/experimental/quantization.py", line 148, in execute
    self.strategy.traverse()
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/strategy/strategy.py", line 402, in traverse
    tune_cfg, self.model, self.calib_dataloader, self.q_func)
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/utils/utility.py", line 262, in fi
    res = func(*args, **kwargs)
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/adaptor/onnxrt.py", line 168, in quantize
    quantizer.quantize_model()
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/adaptor/ox_utils/quantizer.py", line 133, in quantize_model
    self.convert_qdq_to_operator_oriented()
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/adaptor/ox_utils/quantizer.py", line 240, in convert_qdq_to_operator_oriented
    op_converter.convert()
  File "/disk3/xingyuan/miniconda3/envs/nano-lxy/lib/python3.7/site-packages/neural_compressor/adaptor/ox_utils/operators/conv.py", line 46, in convert
    inputs.append(parents[0].output[2])
IndexError: list index (2) out of range
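Our current guess (an assumption on our side, not something confirmed by the INC maintainers): in ONNX, DynamicQuantizeLinear produces three outputs (quantized tensor, scale, zero point), while QuantizeLinear produces only one, so code that unconditionally reads `output[2]` of the parent node fails whenever the parent is not a DynamicQuantizeLinear. A toy Python sketch of that failure mode:

```python
# Toy illustration (not actual INC code): why parents[0].output[2]
# can be out of range. DynamicQuantizeLinear has 3 outputs in ONNX;
# QuantizeLinear has only 1.
class Node:
    def __init__(self, op_type, output):
        self.op_type = op_type
        self.output = output  # list of output tensor names

dynamic_parent = Node("DynamicQuantizeLinear", ["x_q", "x_scale", "x_zero_point"])
static_parent = Node("QuantizeLinear", ["x_q"])

def zero_point_input(parent):
    # Mirrors the failing access in conv.py line 46.
    return parent.output[2]

print(zero_point_input(dynamic_parent))  # x_zero_point

try:
    zero_point_input(static_parent)
except IndexError as e:
    print("IndexError:", e)  # same class of error as in the traceback
```

This matches the maintainer reply below the traceback: the converter assumes the Conv node's parent is a DynamicQuantizeLinear, which only holds for the dynamic quantization path.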

Issue Analytics

  • State: open
  • Created: 10 months ago
  • Reactions: 1
  • Comments: 5

Top GitHub Comments

1 reaction
mengniwang95 commented, Dec 13, 2022

@hoshibara Hi, do you quantize this model with the static or the dynamic quantization approach? If the program reaches line 46, the optype of parents[0] should be DynamicQuantizeLinear. Could you check the optype of parents[0]? Or could you provide the ONNX model to us so we can try to reproduce the error?
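The check suggested above can be run offline on the exported model. Below is a minimal stand-in sketch: plain Python dicts instead of real ONNX protobuf nodes (with the `onnx` package you would load the model via `onnx.load` and iterate `model.graph.node` the same way). It maps each tensor name to the node that produces it, then reports the optypes of every Conv node's parents:

```python
# Minimal stand-in for an ONNX graph: each node has an op_type,
# input tensor names, and output tensor names.
nodes = [
    {"op_type": "QuantizeLinear", "input": ["x", "s", "zp"], "output": ["x_q"]},
    {"op_type": "Conv", "input": ["x_q", "w_q"], "output": ["y"]},
]

def parent_optypes(nodes, child_op="Conv"):
    # Map every tensor name to the node that produces it.
    producer = {out: n for n in nodes for out in n["output"]}
    result = {}
    for n in nodes:
        if n["op_type"] == child_op:
            # Graph inputs/initializers (e.g. weights) have no producer node.
            parents = [producer[i]["op_type"] for i in n["input"] if i in producer]
            result[n["output"][0]] = parents
    return result

print(parent_optypes(nodes))  # {'y': ['QuantizeLinear']}
```

In this toy graph the Conv's parent is a QuantizeLinear, not a DynamicQuantizeLinear, which would trigger exactly the IndexError from the traceback.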

1 reaction
hoshibara commented, Dec 7, 2022

I quantize the model in ONNX format: I convert a torch model to an ONNX model, and then use INC to quantize it.

Read more comments on GitHub >

