Per channel weight observer is not supported yet for ConvTranspose{n}d
I’ve trained a custom Mask R-CNN model and I’m trying to export it to TorchScript. I have a model_final.pth file.
This is the code I’m trying (I’m not even sure it is correct for custom training):
import copy
import logging

from d2go.model_zoo import model_zoo
from d2go.export.api import convert_and_export_predictor
from d2go.export.d2_meta_arch import patch_d2_meta_arch
from d2go.utils.testing.data_loader_helper import create_fake_detection_data_loader

# temporarily disable INFO-level logging
previous_level = logging.root.manager.disable
logging.disable(logging.INFO)

patch_d2_meta_arch()

cfg_name = 'mask_rcnn_fbnetv3a_C4.yaml'
pytorch_model = model_zoo.get(cfg_name, trained=True)
pytorch_model.cpu()

with create_fake_detection_data_loader(224, 320, is_train=False) as data_loader:
    predictor_path = convert_and_export_predictor(
        model_zoo.get_config(cfg_name),
        copy.deepcopy(pytorch_model),
        "torchscript_int8@tracing",
        './',
        data_loader,
    )

# recover the logging level
logging.disable(previous_level)
The error I’m getting:
/usr/local/lib/python3.7/dist-packages/torch/quantization/qconfig.py in assert_valid_qconfig(qconfig, mod)
126 )
127 assert not is_per_channel, \
--> 128 'Per channel weight observer is not supported yet for ConvTranspose{n}d.'
AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d.
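The assertion comes from eager-mode quantization's qconfig check, not from the model architecture itself. A minimal sketch of what is going on, using a toy module rather than the d2go model (names and shapes here are illustrative): the 'fbgemm' backend defaults to a per-channel weight observer, which this PyTorch version rejects for ConvTranspose{n}d, while 'qnnpack' defaults to a per-tensor weight observer, which passes.

```python
import torch.nn as nn
from torch.quantization import get_default_qconfig, prepare

# Toy module standing in for the ConvTranspose layer in the mask head.
m = nn.Sequential(nn.ConvTranspose2d(8, 8, 3))
m.qconfig = get_default_qconfig('fbgemm')  # per-channel weight observer
try:
    prepare(m, inplace=True)
    print('fbgemm prepare succeeded (newer PyTorch may relax the check)')
except AssertionError as err:
    print('fbgemm failed:', err)

# The same layer with a per-tensor weight qconfig prepares cleanly.
m2 = nn.Sequential(nn.ConvTranspose2d(8, 8, 3))
m2.qconfig = get_default_qconfig('qnnpack')  # per-tensor weight observer
prepare(m2, inplace=True)
print('qnnpack prepare succeeded')
```

This is why the maintainers' fix below points at quantization with qnnpack.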
Issue Analytics: Created 2 years ago · Comments: 8 (3 by maintainers)
Top GitHub Comments
@SuijkerbuijkP I see. I think it’s because the ROI mask head is not built with the FBNet builder, and quantization is currently incompatible with that head. We’re working on a fix.
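While a fix lands, the usual eager-mode workaround for one incompatible submodule is to set its qconfig to None so prepare() leaves it in fp32. A hedged sketch on a toy model (TinyRCNN and its attribute names are illustrative only, not d2go’s actual module structure):

```python
import torch.nn as nn
from torch.quantization import get_default_qconfig, prepare

class TinyRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3)            # quantize this part
        self.mask_head = nn.ConvTranspose2d(8, 8, 3)  # keep this in fp32

    def forward(self, x):
        return self.mask_head(self.backbone(x))

model = TinyRCNN()
model.qconfig = get_default_qconfig('qnnpack')
model.mask_head.qconfig = None  # prepare() skips this submodule
prepare(model, inplace=True)
```

After prepare(), observers are attached under the backbone but not under the excluded head, so only the compatible parts are quantized at convert time.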
@smahesh2694 Hi, we’ve updated mask_rcnn_fbnetv3a_C4.yaml (https://github.com/facebookresearch/d2go/commit/477ab964e2165cb586b5c00425f6e463d7edeadd), and now it should work with quantization using qnnpack. There’s also a test for it: https://github.com/facebookresearch/d2go/blob/2366ab940d6d87cc2b03f8a6c97d5fc9aed56c62/tests/modeling/test_meta_arch_rcnn.py#L39-L58