
Exporting a model to ONNX; onnx.optimizer


Exporting a model to ONNX fails with an error because onnx.optimizer is missing: the optimizer module was removed in onnx >= 1.9 (it still exists in onnx <= 1.8.1) and moved to the separate onnxoptimizer package.

Downgrading to onnx==1.8.1 does not solve the issue either: it throws IndexError: Input is undefined. I have managed to solve it using the latest onnx and onnxoptimizer.
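
For reference, a minimal compatibility import (a sketch of mine, not detectron2 code) that picks whichever optimizer module is available:

# Hedged sketch: resolve the optimizer regardless of onnx version.
# onnx <= 1.8.1 ships onnx.optimizer; onnx >= 1.9 moved it to the
# standalone onnxoptimizer package (pip install onnxoptimizer).
try:
    import onnxoptimizer as optimizer  # onnx >= 1.9
except ImportError:
    from onnx import optimizer         # onnx <= 1.8.1

# Both expose the same entry point, e.g.:
# optimized_model = optimizer.optimize(model, passes)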

Instructions To Reproduce the Issue:

Using detectron2 0.5 and torch 1.9.0.

Example ONNX model export:

#wget http://images.cocodataset.org/val2017/000000439715.jpg -q -O input.jpg

import onnx
import cv2
import torch

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.export import export_onnx_model
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
import detectron2.data.transforms as T

im = cv2.imread("./input.jpg")

cfg = get_cfg()
# Add project-specific config
cfg.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
# Use checkpoint weights from detectron2's model zoo
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
cfg.MODEL.DEVICE = 'cpu'

# Build model and prepare input
model = build_model(cfg)
model.eval()
checkpointer = DetectionCheckpointer(model)
checkpointer.load(cfg.MODEL.WEIGHTS)
aug = T.ResizeShortestEdge([cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST],
                           cfg.INPUT.MAX_SIZE_TEST)
height, width = im.shape[:2]
image = aug.get_transform(im).apply_image(im)
image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
inputs = {"image": image, "height": height, "width": width}

# Export to ONNX model
onnxModel = export_onnx_model(cfg, model, [inputs])
onnx.save(onnxModel, "test.onnx")
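
As a quick sanity check, the exported graph can also be validated before saving (a small addition of mine, assuming the export succeeds):

# Optional: verify the exported graph is structurally valid.
onnx.checker.check_model(onnxModel)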

For me, this fails with an IndexError inside the onnx.optimizer package, even though it appears it should work correctly.

By installing the latest onnx (1.10.1) and onnxoptimizer (0.2.6) packages, I solved it as follows:

  • First, detectron2/export/caffe2_export.py fails with No module named 'onnx.optimizer', so I edit it to import onnxoptimizer instead, and similarly replace the onnx.optimizer usages on lines 68 and 71 of that file.

  • Next, caffe2/python/onnx/backend.py is also missing onnx.optimizer, so replace the import and its usages there as well (I know this file is part of the pytorch installation). A monkeypatching alternative that avoids editing installed files is sketched after this list.
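
Instead of editing the installed files, one can alias the new package under the legacy module name before the failing imports run. This is my own sketch, assuming onnxoptimizer exposes the same optimize(model, passes) entry point that onnx.optimizer did:

# Register onnxoptimizer under the old module path so that
# `import onnx.optimizer` and `onnx.optimizer.optimize(...)` in
# detectron2/caffe2 keep working without source edits.
# Must run before detectron2.export / caffe2 modules are imported.
import sys
import onnx
import onnxoptimizer

sys.modules["onnx.optimizer"] = onnxoptimizer
onnx.optimizer = onnxoptimizer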

Finally, the above code now runs successfully and exports the model as expected.

Environment:

"collect_env.py" [1]
----------------------  ---------------------------------------------------------------------------------------
sys.platform            linux
Python                  3.9.6 (default, Aug 18 2021, 19:38:01) [GCC 7.5.0]
numpy                   1.20.3
detectron2              0.5 
Compiler                GCC 9.3
CUDA compiler           CUDA 11.2
detectron2 arch flags   8.6
DETECTRON2_ENV_MODULE   <not set>
PyTorch                 1.9.0 
PyTorch debug build     False
GPU available           Yes
GPU 0                   NVIDIA GeForce RTX 3090 (arch=8.6)
Driver version          465.19.01
CUDA_HOME               /usr/local/cuda
Pillow                  8.3.1
torchvision             0.10.0 
torchvision arch flags  3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore                  0.1.5.post20210825
iopath                  0.1.9
cv2                     4.5.3
----------------------  ---------------------------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.3-Product Build 20210617 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.0.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

If it’s not possible to merge these changes in a pull request, since they touch pytorch as well, then I hope this post helps someone with a similar issue.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 4
  • Comments: 6

Top GitHub Comments

4 reactions
muhammadAgfian96 commented, Feb 18, 2022

Can you describe in detail how you did it?

Thank you very much for your reply

You need to install:

pip install onnx
pip install onnxoptimizer

You can check the install path via detectron2.__path__, then edit <fullpath>/detectron2/export/caffe2_export.py: import onnxoptimizer and change onnx.optimizer to onnxoptimizer.

Similarly, check caffe2.__path__, then edit <fullpath>/caffe2/python/onnx/backend.py: import onnxoptimizer and change onnx.optimizer to onnxoptimizer.
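
If you're unsure where those files live, a quick way to print the install locations (a minimal sketch; the paths differ per environment):

# Print where the installed packages (and thus the files to edit) live.
import detectron2
import caffe2

print(detectron2.__path__)  # e.g. .../site-packages/detectron2
print(caffe2.__path__)      # e.g. .../site-packages/caffe2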

0 reactions
ff137 commented, Nov 23, 2022

Hello, I have one question. By following this approach to generate the ONNX model, are you able to run inference on it in onnxruntime without a dependency on detectron2?

It appears that the missing import bug has since been resolved, so you shouldn’t need to follow these steps anymore. Unfortunately I don’t know for certain whether you can run it in onnxruntime without a dependency on detectron2, but it’s probably possible. Good luck!
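
If you want to try, here is a minimal onnxruntime loading sketch; the input name, shape, and preprocessing below are assumptions and depend on the exported graph:

# Hedged sketch: load the exported model in onnxruntime with no
# detectron2 import. Input layout/preprocessing are assumptions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("test.onnx")
print([(i.name, i.shape) for i in sess.get_inputs()])  # inspect expected inputs

# Assuming a single CHW float32 image input:
dummy = np.random.rand(3, 800, 800).astype("float32")
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})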


Top Results From Across the Web

(optional) Exporting a Model from PyTorch to ONNX and ...
To export a model, we call the torch.onnx.export() function. This will execute the model, recording a trace of what operators are used to...

Export to ONNX - Transformers - Hugging Face
In this guide, we'll show you how to export Transformers models to ONNX (Open Neural Network eXchange). Once exported, a model can be...

ONNX export Optimizer — mmdeploy 0.11.0 documentation
ONNX export Optimizer. This is a tool to optimize ONNX model when exporting from PyTorch. Installation. Build MMDeploy with torchscript support:.

Exporting your model to ONNX format - Unity - Manual
To use your trained neural network in Unity, you need to export it to the ONNX format. ONNX (Open Neural Network Exchange) is...

Convert your PyTorch training model to ONNX - Microsoft Learn
To export a model, you will use the torch.onnx.export() function. This function executes the model, and records a trace of what operators are...
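
Several of these results describe the same core call; as a point of reference, a minimal torch.onnx.export sketch (the model and input here are placeholders, not from the issue above):

# Minimal sketch of the generic torch.onnx.export flow the links describe.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # placeholder input for tracing

torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
)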
