
convert yolact to ONNX

See original GitHub issue

Hello again. I’m trying to convert YOLACT to ONNX with the following code:

import torch
import torch.onnx
import yolact

weights_path = '/home/ws/DL/yolact/weights/yolact_im700_54_800000.pth'

model = yolact.Yolact()

# Loading the raw state dict directly:
# state_dict = torch.load(weights_path)
# model.load_state_dict(state_dict)
# Using the repo's own loader instead:
model.load_weights(weights_path)

# NB: the traceback below was produced with a 1x3x700x700 input.
dummy_input = torch.randn(1, 3, 640, 480)

torch.onnx.export(model, dummy_input, "onnx_model_name.onnx")

Error message:

/home/ws/DL/yolact/yolact.py:256: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for j, i in product(range(conv_h), range(conv_w)):
/home/ws/DL/yolact/yolact.py:279: TracerWarning: torch.Tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  self.priors = torch.Tensor(prior_data).view(-1, 4)
/home/ws/DL/yolact/yolact.py:279: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  self.priors = torch.Tensor(prior_data).view(-1, 4)
/home/ws/DL/yolact/layers/functions/detection.py:74: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for batch_idx in range(batch_size):
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-a796dc0eef97> in <module>
     13 dummy_input = torch.randn(1, 3, 700, 700)
     14 
---> 15 torch.onnx.export(model, dummy_input, "onnx_model_name.onnx")

~/.local/lib/python3.6/site-packages/torch/onnx/__init__.py in export(*args, **kwargs)
     23 def export(*args, **kwargs):
     24     from torch.onnx import utils
---> 25     return utils.export(*args, **kwargs)
     26 
     27 

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
    129             operator_export_type=operator_export_type, opset_version=opset_version,
    130             _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
--> 131             strip_doc_string=strip_doc_string)
    132 
    133 

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
    361                                                         output_names, operator_export_type,
    362                                                         example_outputs, propagate,
--> 363                                                         _retain_param_name, do_constant_folding)
    364 
    365         # TODO: Don't allocate a in-memory string for the protobuf

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _model_to_graph(model, args, verbose, training, input_names, output_names, operator_export_type, example_outputs, propagate, _retain_param_name, do_constant_folding, _disable_torch_constant_prop)
    264             model.graph, tuple(args), example_outputs, False, propagate)
    265     else:
--> 266         graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
    267         state_dict = _unique_state_dict(model)
    268         params = list(state_dict.values())

~/.local/lib/python3.6/site-packages/torch/onnx/utils.py in _trace_and_get_graph_from_model(model, args, training)
    223     # training mode was.)
    224     with set_training(model, training):
--> 225         trace, torch_out = torch.jit.get_trace_graph(model, args, _force_outplace=True)
    226 
    227     if orig_state_dict_keys != _unique_state_dict(model).keys():

~/.local/lib/python3.6/site-packages/torch/jit/__init__.py in get_trace_graph(f, args, kwargs, _force_outplace, return_inputs)
    229     if not isinstance(args, tuple):
    230         args = (args,)
--> 231     return LegacyTracedModule(f, _force_outplace, return_inputs)(*args, **kwargs)
    232 
    233 

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

~/.local/lib/python3.6/site-packages/torch/jit/__init__.py in forward(self, *args)
    292         try:
    293             trace_inputs = _unflatten(all_trace_inputs[:len(in_vars)], in_desc)
--> 294             out = self.inner(*trace_inputs)
    295             out_vars, _ = _flatten(out)
    296             torch._C._tracer_exit(tuple(out_vars))

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             hook(self, input)
    490         if torch._C._get_tracing_state():
--> 491             result = self._slow_forward(*input, **kwargs)
    492         else:
    493             result = self.forward(*input, **kwargs)

~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
    479         tracing_state._traced_module_stack.append(self)
    480         try:
--> 481             result = self.forward(*input, **kwargs)
    482         finally:
    483             tracing_state.pop_scope()

~/DL/yolact/yolact.py in forward(self, x)
    615                 pred_outs['conf'] = F.softmax(pred_outs['conf'], -1)
    616 
--> 617             return self.detect(pred_outs)
    618 
    619 

~/DL/yolact/layers/functions/detection.py in __call__(self, predictions)
     73 
     74             for batch_idx in range(batch_size):
---> 75                 decoded_boxes = decode(loc_data[batch_idx], prior_data)
     76                 result = self.detect(batch_idx, conf_preds, decoded_boxes, mask_data, inst_data)
     77 

RuntimeError: isTensor() ASSERT FAILED at /pytorch/aten/src/ATen/core/ivalue.h:209, please report a bug to PyTorch. (toTensor at /pytorch/aten/src/ATen/core/ivalue.h:209)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f721e0ac441 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f721e0abd7a in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x979ad2 (0x7f721d130ad2 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #3: torch::jit::tracer::getNestedValueTrace(c10::IValue const&) + 0x41 (0x7f721d3939a1 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #4: <unknown function> + 0xa7651b (0x7f721d22d51b in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #5: <unknown function> + 0xa766db (0x7f721d22d6db in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #6: <unknown function> + 0x457942 (0x7f725d6d2942 in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x130cfc (0x7f725d3abcfc in /home/ws/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #8: _PyCFunction_FastCallDict + 0x35c (0x56204c in /usr/bin/python3)
frame #9: /usr/bin/python3() [0x5a1501]
frame #10: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #11: /usr/bin/python3() [0x5136c6]
frame #12: _PyObject_FastCallKeywords + 0x19c (0x57ec0c in /usr/bin/python3)
frame #13: /usr/bin/python3() [0x4f88ba]
frame #14: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #15: _PyFunction_FastCallDict + 0xf5 (0x4f4065 in /usr/bin/python3)
frame #16: /usr/bin/python3() [0x5a1481]
frame #17: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #18: /usr/bin/python3() [0x513601]
frame #19: _PyObject_FastCallKeywords + 0x19c (0x57ec0c in /usr/bin/python3)
frame #20: /usr/bin/python3() [0x4f88ba]
frame #21: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #22: /usr/bin/python3() [0x4f6128]
frame #23: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #24: /usr/bin/python3() [0x5a1481]
frame #25: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #26: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #27: /usr/bin/python3() [0x4f6128]
frame #28: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #29: /usr/bin/python3() [0x5a1481]
frame #30: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #31: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #32: /usr/bin/python3() [0x4f6128]
frame #33: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #34: /usr/bin/python3() [0x5a1481]
frame #35: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #36: /usr/bin/python3() [0x513601]
frame #37: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #38: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #39: /usr/bin/python3() [0x4f6128]
frame #40: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #41: /usr/bin/python3() [0x5a1481]
frame #42: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #43: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #44: /usr/bin/python3() [0x4f6128]
frame #45: _PyFunction_FastCallDict + 0x2fe (0x4f426e in /usr/bin/python3)
frame #46: /usr/bin/python3() [0x5a1481]
frame #47: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #48: /usr/bin/python3() [0x513601]
frame #49: PyObject_Call + 0x3e (0x57c2fe in /usr/bin/python3)
frame #50: _PyEval_EvalFrameDefault + 0x1851 (0x4facb1 in /usr/bin/python3)
frame #51: /usr/bin/python3() [0x4f6128]
frame #52: /usr/bin/python3() [0x4f7d60]
frame #53: /usr/bin/python3() [0x4f876d]
frame #54: _PyEval_EvalFrameDefault + 0x1260 (0x4fa6c0 in /usr/bin/python3)
frame #55: /usr/bin/python3() [0x4f7a28]
frame #56: /usr/bin/python3() [0x4f876d]
frame #57: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #58: /usr/bin/python3() [0x4f6128]
frame #59: /usr/bin/python3() [0x4f7d60]
frame #60: /usr/bin/python3() [0x4f876d]
frame #61: _PyEval_EvalFrameDefault + 0x467 (0x4f98c7 in /usr/bin/python3)
frame #62: /usr/bin/python3() [0x4f6128]
frame #63: /usr/bin/python3() [0x4f7d60]
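
For context on the failure: the trace runs all the way into Detect (layers/functions/detection.py), whose post-processing manipulates plain Python values (batch indices, per-image results) rather than tensors, and the tracer's isTensor() assertion fires there. The workaround the community forks generally take is to export the network only up to its raw tensor outputs and run Detect as ordinary post-processing outside the graph. Below is a minimal sketch of that idea, not the repo's own API: the YolactRaw wrapper is hypothetical, it assumes yolact.py has been edited so that forward() returns the pred_outs dict instead of calling self.detect(pred_outs), and the dict key names are assumptions based on the YOLACT code.

import torch
import torch.onnx
import yolact

# Hypothetical wrapper. Assumes Yolact.forward() has been patched to
# skip self.detect(...) and return pred_outs, a dict of raw tensors.
class YolactRaw(torch.nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        outs = self.net(x)
        # Return a tuple of plain tensors: tracing cannot handle
        # Detect's Python output, and old exporters reject dicts.
        return (outs['loc'], outs['conf'], outs['mask'],
                outs['priors'], outs['proto'])

model = yolact.Yolact()
model.load_weights('weights/yolact_im700_54_800000.pth')
model.eval()

dummy_input = torch.randn(1, 3, 700, 700)  # the im700 config expects 700x700
torch.onnx.export(YolactRaw(model), dummy_input, 'yolact_raw.onnx',
                  input_names=['image'],
                  output_names=['loc', 'conf', 'mask', 'priors', 'proto'])

Box decoding, NMS, and mask assembly then run as ordinary PyTorch/NumPy post-processing on those five outputs, doing outside the graph what Detect would have done inside it.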

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 64 (2 by maintainers)

Top GitHub Comments

30 reactions
Ma-Dan commented, Jul 22, 2019

The environment I used:

  • onnx 1.4.1
  • onnxruntime 0.4.0
  • torch 1.0.1
  • torchvision 0.2.1

Run python eval.py --trained_model=weights/yolact_darknet53_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=dog.jpg to generate the ONNX file, then run python onnxeval.py --trained_model=weights/yolact_resnet50_54_800000.pth --score_threshold=0.3 --top_k=100 --cuda=False --image=dog.jpg to evaluate with ONNX.
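
Note that onnxeval.py appears to come from Ma-Dan's fork rather than the upstream yolact repo. If you only want to sanity-check an exported file, a minimal onnxruntime session is enough; this sketch assumes the hypothetical yolact_raw.onnx produced in the export sketch above.

import numpy as np
import onnxruntime as ort

# Hypothetical file name from the export sketch above.
sess = ort.InferenceSession('yolact_raw.onnx')

# Feed a random image-shaped tensor just to confirm the graph runs.
image = np.random.randn(1, 3, 700, 700).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: image})

# Print each output's name and shape as reported by the session.
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)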

2 reactions
HoangTienDuc commented, Feb 28, 2020

@JING switch to the onnx branch. =)
