
Convert TSM pytorch model to onnx

See original GitHub issue

I have succeeded in converting the TSM PyTorch model to ONNX, but the ONNX output differs from the PyTorch output. During conversion it shows:

TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:37: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift

**I did not use the in-place shift.** My shift code is:

        out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
        out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift

I think this is caused by the shift operation, but I don't think it is an in-place op. It seems that ONNX tracing does not support assignment into a tensor slice.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6

Top GitHub Comments

5 reactions
Usernamezhx commented, Jul 23, 2020

@ChiShao you can change https://github.com/mit-han-lab/temporal-shift-module/blob/f09f42db80f1dbeaf9c7448fbd491cd59043e711/ops/temporal_shift.py#L27 to this:

    def shift(x, n_segment, fold_div=3):
        nt, c, h, w = x.size()
        n_batch = nt // n_segment
        x = x.view(n_batch, n_segment, c, h, w)

        fold = c // fold_div

        # Use new_zeros so the padding matches x's dtype/device, and give it
        # the full batch dimension so torch.cat works when n_batch > 1.
        zeros = x.new_zeros(n_batch, 1, fold, h, w)
        left_side = torch.cat((x[:, 1:, :fold], zeros), dim=1)  # shift left
        middle_side = torch.cat((zeros, x[:, :n_segment - 1, fold: 2 * fold]), dim=1)  # shift right
        out = torch.cat((left_side, middle_side, x[:, :, 2 * fold:]), dim=2)  # unshifted channels

        return out.view(nt, c, h, w)
0 reactions
bujianyiwang commented, Nov 16, 2022

@Usernamezhx can you provide the code for converting to ONNX?

Read more comments on GitHub >

Top Results From Across the Web

Convert TSM pytorch model to onnx - jit
I want to convert tsm model https://github.com/mit-han-lab/temporal-shift-module to onnx. when convert it show me that TracerWarning: ...

Convert your PyTorch training model to ONNX - Microsoft Learn
To export a model, you will use the torch.onnx.export() function. This function executes the model, and records a trace of what operators are ...

Tutorial 6: Exporting a model to ONNX
First, install onnx. We provide a python script to export the pytorch model trained by MMAction2 to ONNX. Optional arguments: --shape : The...

Transform a PyTorch model to onnx - Medium
Photo by Andy Brunner on Unsplash. In this tutorial, I want to show how easily you can transform a PyTorch model to the...

data: kMIN dimensions in profile 0 are [24,3224224] but input ...
Description Hi all. I have converted a PyTorch model to trt directly in python, without using ONNX or trtexec (because it has some...
