
Error exporting YoloX model to ONNX

See original GitHub issue

I was trying to export a YOLOX model to ONNX, but I got an error.

The script that I used is:

python tools/deployment/pytorch2onnx.py \
    configs/yolox/yolox_s_8x8_300e_coco.py \
    work_dirs/yolox_s_lite/latest.pth \
    --output-file output.onnx \
    --input-img demo/demo.jpg \
    --dynamic-export \
    --show \
    --verify \
    --simplify

But then I get this error:

  File "/data/ssd/files/a0393608/work/code/github/openmmlab/mmdetection/tools/deployment/pytorch2onnx.py", line 330, in <module>
    normalize_cfg = parse_normalize_cfg(cfg.test_pipeline)
  File "/data/ssd/files/a0393608/work/code/github/openmmlab/mmdetection/tools/deployment/pytorch2onnx.py", line 218, in parse_normalize_cfg
    assert len(norm_config_li) == 1, 'norm_config should only have one'
AssertionError: norm_config should only have one

This is because YOLOX doesn't have an input Normalize step in its test pipeline. I worked around that by providing a dummy normalization (a no-op entry, sketched below), but then I got another error.
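For reference, a hypothetical sketch of what such a dummy entry might look like (the exact pipeline layout depends on your config; mean=0 and std=1 make the step a no-op):

# Hypothetical no-op Normalize entry to satisfy parse_normalize_cfg(),
# which expects to find a Normalize transform in the test pipeline.
dummy_normalize = dict(
    type='Normalize',
    mean=[0.0, 0.0, 0.0],  # subtract 0: identity
    std=[1.0, 1.0, 1.0],   # divide by 1: identity
    to_rgb=False)
# Add this dict to the `transforms` list of the test pipeline's wrapper step
# (e.g. inside MultiScaleFlipAug) in configs/yolox/yolox_s_8x8_300e_coco.py,
# before the final formatting transforms.

With that in place, the export progresses past the normalization check but fails with the error below.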

  File "/data/ssd/files/a0393608/work/code/github/openmmlab/mmdetection/mmdet/models/detectors/base.py", line 169, in forward
    return self.onnx_export(img[0], img_metas[0])
  File "/data/ssd/files/a0393608/work/code/github/openmmlab/mmdetection/mmdet/models/detectors/single_stage.py", line 169, in onnx_export
    *outs, img_metas, with_nms=with_nms)
  File "/user/a0393608/work/apps/miniconda3/envs/edgeai-mmdetection/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
  File "/data/ssd/files/a0393608/work/code/github/openmmlab/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 492, in onnx_export
    bboxes = self.bbox_coder.decode(
  File "/user/a0393608/work/apps/miniconda3/envs/edgeai-mmdetection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__
    type(self).__name__, name))
AttributeError: 'YOLOXHead' object has no attribute 'bbox_coder'
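The reason is that the generic ONNX export path in base_dense_head.py calls self.bbox_coder.decode(), but YOLOXHead does not define a bbox_coder; it decodes boxes directly from per-location predictions and strides. A minimal, illustrative sketch of that style of decoding (not the mmdetection implementation; tensor names and shapes are assumptions):

# Illustrative sketch of YOLOX-style box decoding; not the mmdetection code.
import torch

def decode_yolox_boxes(priors, bbox_preds):
    # priors: (N, 4) tensor of (cx, cy, stride_w, stride_h) per prediction location.
    # bbox_preds: (N, 4) raw head outputs (dx, dy, log_w, log_h).
    xys = bbox_preds[..., :2] * priors[..., 2:] + priors[..., :2]  # centers, scaled by stride
    whs = bbox_preds[..., 2:].exp() * priors[..., 2:]              # sizes via exp, scaled by stride
    return torch.cat([xys - whs / 2, xys + whs / 2], dim=-1)      # (x1, y1, x2, y2)

So exporting YOLOX needs an onnx_export path that decodes boxes this way rather than going through a bbox_coder.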

Environment:

sys.platform: linux
Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
CUDA available: True
GPU 0,1,2,3: RTX A4000
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.1.TC455_06.29190527_0
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.10.0+cu111
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel® Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel® 64 architecture applications
  • Intel® MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX512
  • CUDA Runtime 11.1
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  • CuDNN 8.0.5
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.11.0a0+972e9af
OpenCV: 4.5.4
MMCV: 1.4.2
MMCV Compiler: GCC 7.5
MMCV CUDA Compiler: 11.1
MMDetection: 2.20.0+ff9bc39

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 24

Top GitHub Comments

1 reaction
VJoer commented, Feb 11, 2022

Thank you very much for sharing your code; I used it successfully to export the ONNX model today. I wish you a happy Chinese New Year in advance.

May I ask which version of mmdetection you are using? On the latest version I modified yolox.py and pytorch2onnx.py following the code above, but I still get AttributeError: 'YOLOXHead' object has no attribute 'bbox_coder'. Could you please check whether anything else needs to be modified? Thank you.

1 reaction
mathmanu commented, Jan 19, 2022

YOLOX does not use image normalization, so another minor change is needed in tools/deployment/pytorch2onnx.py in function parse_normalize_cfg() as shown below.

def parse_normalize_cfg(test_pipeline):
    # Locate the `transforms` list nested in the test pipeline (e.g. inside MultiScaleFlipAug).
    transforms = None
    for pipeline in test_pipeline:
        if 'transforms' in pipeline:
            transforms = pipeline['transforms']
            break
    assert transforms is not None, 'Failed to find `transforms`'
    norm_config_li = [_ for _ in transforms if _['type'] == 'Normalize']
    # YOLOX has no Normalize step, so allow zero or one instead of exactly one.
    assert len(norm_config_li) <= 1, '`norm_config` should have at most one Normalize'
    # Fall back to an identity normalization when the pipeline has no Normalize step.
    norm_config = norm_config_li[0] if len(norm_config_li) > 0 else dict(mean=0.0, std=1.0)
    return norm_config

With these two changes, ONNX export of YOLOX should work.
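If you also want a quick standalone sanity check of the exported file, beyond the --verify flag, a minimal sketch with onnxruntime (the file name output.onnx and the 1x3x640x640 input shape are assumptions taken from the export command above):

# Load the exported model, check its structure, and run one dummy forward pass.
import numpy as np
import onnx
import onnxruntime as ort

onnx_model = onnx.load('output.onnx')
onnx.checker.check_model(onnx_model)  # structural validity check

sess = ort.InferenceSession('output.onnx')
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed input shape
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])  # e.g. detections and labels for a detector export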

Read more comments on GitHub >

Top Results From Across the Web

TDA4VM: Yolox : Error when running the "tools/export_onnx.py"
We are working on the TDA4VM. I cloned the code from the url "github.com/.../edgeai-yolox". But when running the cmd : python3 tools/export_onnx ...

YOLOX-ONNXRuntime in Python
This doc introduces how to convert your pytorch model into onnx, and how to run an onnxruntime demo to verify your convertion. Download...

How to Convert a PyTorch Model to ONNX in 5 Minutes - Deci AI
The next step is to use the `torch.onnx.export` function to convert the model to ONNX. This function requires the following data:.

ONNX exporting error - PyTorch Forums
I try exporting to onnx model from pytorch. Here is my code: import torch from darknet import Darknet det_model = Darknet(".

End-to-End Object Detection for Unity With IceVision and ...
Train a YOLOX model using IceVision and export it to OpenVINO. ... We can use the onnx-simplifier package to tidy up the exported...
