Question about linear quantization
I figured out the procedure of linear quantization and reproduced the experiments:
- Search for the quantization strategy on the ImageNet100 dataset.
- Fine-tune the model on the full ImageNet dataset with the strategy obtained in step 1.
It seems that the final accuracy of the quantized model depends more on the fine-tuning than on the searched strategy.
Another question: why does the bit-reduction process start from the last layer, as the _final_action_wall function shows?
@alan303138
QConv2d inherits from the QModule base class:
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L363
The constructor of QConv2d defaults to w_bit=-1, which QModule then stores as self._w_bit = w_bit, i.e., self._w_bit = -1:
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L366
When the forward function of QConv2d runs, it first calls self._quantize_activation(inputs=inputs) and then self._quantize_weight(weight=weight):
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L395
Take self._quantize_weight(weight=weight) as an example: because self._w_bit is still -1, execution jumps to line 315 and the weights are returned without quantization:
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L287
https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/lib/utils/quantize_utils.py#L315
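To make the mechanism concrete, here is a minimal, self-contained sketch of the pattern (not the actual HAQ implementation; the class name SimpleQConv2d and the quantizer details are made up for illustration). The key point is that a negative bit width means "leave the tensor in full precision":

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleQConv2d(nn.Conv2d):
    """Illustrative stand-in for HAQ's QConv2d: a bit width of -1 disables quantization."""

    def __init__(self, in_channels, out_channels, kernel_size, w_bit=-1, a_bit=-1, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        self._w_bit = w_bit   # mirrors QModule storing self._w_bit = w_bit
        self._a_bit = a_bit

    def _linear_quantize(self, x, bits):
        # simple symmetric linear (uniform) quantization, for illustration only
        scale = x.abs().max().clamp_min(1e-8) / (2 ** (bits - 1) - 1)
        q = torch.round(x / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return q * scale

    def _quantize_weight(self, weight):
        if self._w_bit < 0:          # same early-return idea as line 315: no quantization
            return weight
        return self._linear_quantize(weight, self._w_bit)

    def _quantize_activation(self, inputs):
        if self._a_bit < 0:
            return inputs
        return self._linear_quantize(inputs, self._a_bit)

    def forward(self, inputs):
        inputs = self._quantize_activation(inputs)
        weight = self._quantize_weight(self.weight)
        return F.conv2d(inputs, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```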
Putting it all together: if we do not use half precision (fp16, see the --half flag) and do not specify w_bit and a_bit for each QConv2d and QLinear layer, qmobilenetv2 will not be quantized.
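As a quick sanity check (assuming the _w_bit/_a_bit attribute names from the walkthrough above; adjust if the modules store them differently), you can list every layer that would run in full precision:

```python
# model is assumed to be your instantiated qmobilenetv2.
for name, module in model.named_modules():
    w_bit = getattr(module, '_w_bit', None)
    a_bit = getattr(module, '_a_bit', None)
    if w_bit is not None and (w_bit < 0 or (a_bit is not None and a_bit < 0)):
        print(f'{name}: w_bit={w_bit}, a_bit={a_bit} -> runs in full precision')
```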
According to run_pretrain.sh and pretrain.py, the pre-trained checkpoint mobiletv2-150.pth.tar appears to have been trained with fp16, so it might be unsuitable for linear quantization. You can load mobiletv2-150.pth.tar and insert some print statements before https://github.com/mit-han-lab/haq/blob/8228d126c800446fdc1cc263555d1e5aed7d9dfd/models/mobilenetv2.py#L192 to check.
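One rough way to check, without touching the model code, is to inspect the dtypes stored in the checkpoint itself (the 'state_dict' key is an assumption; adjust to whatever pretrain.py actually saves):

```python
import torch

# Load the checkpoint on CPU and inspect the stored parameter dtypes.
ckpt = torch.load('mobiletv2-150.pth.tar', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # the key name is an assumption

dtypes = {}
for name, tensor in state_dict.items():
    if torch.is_tensor(tensor):
        dtypes.setdefault(tensor.dtype, []).append(name)

for dtype, names in dtypes.items():
    print(f'{dtype}: {len(names)} tensors (e.g. {names[0]})')
# torch.float16 entries here would confirm the checkpoint was saved in half precision.
```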
@87Candy I have not encountered this error.
The self._build_state_embedding() function builds the ten-dimensional feature vector described in Section 3.1 of the paper; you can check it there.
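For reference, a rough sketch of what such a per-layer observation could look like, following the description in Section 3.1 of the paper (layer index, channels, kernel size, stride, feature-map size, parameter count, depthwise indicator, weight/activation indicator, previous action). The function and field names below are illustrative, not the repository's code:

```python
import numpy as np

def build_layer_observation(layer_idx, c_in, c_out, kernel_size, stride,
                            feat_size, n_params, is_depthwise, is_weight, prev_action):
    """Illustrative 10-dimensional per-layer state vector in the spirit of
    HAQ Section 3.1; the exact fields and order are an assumption, not the repo code."""
    obs = np.array([
        layer_idx,            # which layer the agent is deciding for
        c_in,                 # input channels
        c_out,                # output channels
        kernel_size,          # kernel size
        stride,               # stride
        feat_size,            # input feature-map size
        n_params,             # number of parameters in the layer
        float(is_depthwise),  # depthwise-conv indicator
        float(is_weight),     # quantizing weights (1) or activations (0)
        prev_action,          # bit width chosen at the previous step
    ], dtype=np.float32)
    # HAQ additionally normalizes each dimension to [0, 1]; omitted here.
    return obs
```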