
Per channel weight observer is not supported yet for ConvTranspose{n}d


I’ve trained a custom Mask R-CNN model and I’m trying to export it to TorchScript. I have a 'model_final.pth' file. This is the code I’m trying (I’m not even sure it applies to a custom-trained model):

import copy
import logging

from detectron2.data import build_detection_test_loader
from d2go.export.api import convert_and_export_predictor
from d2go.export.d2_meta_arch import patch_d2_meta_arch
from d2go.model_zoo import model_zoo  # needed for model_zoo.get() / get_config() below
from d2go.utils.testing.data_loader_helper import create_fake_detection_data_loader

# silence INFO-level logging while exporting
previous_level = logging.root.manager.disable
logging.disable(logging.INFO)

patch_d2_meta_arch()

# NOTE: this loads the pre-trained model-zoo checkpoint, not 'model_final.pth'
cfg_name = 'mask_rcnn_fbnetv3a_C4.yaml'
pytorch_model = model_zoo.get(cfg_name, trained=True)
pytorch_model.cpu()

with create_fake_detection_data_loader(224, 320, is_train=False) as data_loader:
    predictor_path = convert_and_export_predictor(
        model_zoo.get_config(cfg_name),
        copy.deepcopy(pytorch_model),
        "torchscript_int8@tracing",
        './',
        data_loader,
    )

# restore the previous logging level
logging.disable(previous_level)

The error I’m getting:

/usr/local/lib/python3.7/dist-packages/torch/quantization/qconfig.py in assert_valid_qconfig(qconfig, mod)
    126         )
    127         assert not is_per_channel, \
--> 128             'Per channel weight observer is not supported yet for ConvTranspose{n}d.'

AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d.
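
The assertion is raised by PyTorch’s eager-mode quantization: a per-channel weight observer (the default for the 'fbgemm' backend) is rejected for ConvTranspose layers, and a Mask R-CNN mask head typically contains a ConvTranspose2d deconv layer. The d2go export call drives quantization internally, so the snippet below is only a minimal eager-mode sketch of the usual workaround, assuming you can set the qconfig yourself: give ConvTranspose modules a per-tensor weight qconfig before prepare/convert. The use of pytorch_model from the snippet above and the 'fbgemm' backend choice are assumptions, not the d2go defaults.

import copy
import torch
from torch import quantization

# Minimal eager-mode sketch, not the d2go export path: override the qconfig on
# ConvTranspose modules so they use a per-tensor weight observer, which the
# assert in torch/quantization/qconfig.py accepts.
model = copy.deepcopy(pytorch_model).eval()

# 'fbgemm' uses a per-channel weight observer by default.
model.qconfig = quantization.get_default_qconfig('fbgemm')

for name, module in model.named_modules():
    if isinstance(module, (torch.nn.ConvTranspose1d,
                           torch.nn.ConvTranspose2d,
                           torch.nn.ConvTranspose3d)):
        print('using per-tensor weight qconfig for', name)
        # default_qconfig observes weights per-tensor, which is allowed here.
        module.qconfig = quantization.default_qconfig

quantization.prepare(model, inplace=True)
# ... run a few batches of calibration data through `model` here ...
quantization.convert(model, inplace=True)
# Depending on the PyTorch version and backend, a quantized ConvTranspose
# kernel may not be available, so convert() can still fail for other reasons.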

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

wat3rBro commented, Apr 13, 2021 (5 reactions)

@SuijkerbuijkP I see, I think it’s because the roi mask head is not built with the FBNet builder, and quantization is currently incompatible with that head; we’re working on a fix.
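
If the goal is just to get the rest of the model quantized, eager-mode quantization also lets you exclude a subtree by clearing its qconfig, leaving the mask head (where the ConvTranspose deconv lives) in float. A rough sketch follows, with the caveat that the module path roi_heads.mask_head is assumed from detectron2’s usual naming and that d2go’s convert_and_export_predictor may not expose this hook:

import copy
from torch import quantization

model = copy.deepcopy(pytorch_model).eval()
model.qconfig = quantization.get_default_qconfig('qnnpack')

# Setting qconfig to None tells eager-mode prepare()/convert() to leave this
# subtree un-quantized. The attribute path below is a guess; check the names
# reported by model.named_modules() for the actual mask-head module.
model.roi_heads.mask_head.qconfig = None

quantization.prepare(model, inplace=True)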

wat3rBro commented, May 6, 2021 (2 reactions)

Top Results From Across the Web

Model quantization fails, but the network architecture looks OK
I receive the following error: AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d. This error occurs because ...
Release · Greenplum / Pytorch - GitCode
a.new(a, device='cpu') RuntimeError: Legacy tensor new of the form tensor.new(tensor, device=device) is not supported. Use torch.as_tensor(...) instead.
