
TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!

See original GitHub issue

I'm converting a PyTorch model to ONNX.

import torch

# `net` is the PyTorch model defined earlier (not shown here)
example = torch.rand(10, 3, 224, 224)

torch.onnx.export(net,               # model being run
                  example,                         # model input (or a tuple for multiple inputs)
                  "./infer/tsm_resnet50.onnx",   # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names = ['input'],   # the model's input names
                  output_names = ['output'], # the model's output names
                #   operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes
                                'output' : {0 : 'batch_size'}})

Then the export shows me the warning. This is my log file, log.txt; the problem snippet is:

            out = torch.zeros_like(x)
            out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
            out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
            out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift

How can I replace it? Thanks. My versions: Python 3.6, torch 1.2.0, torchvision 0.4.0.
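One common way to avoid the warning on that snippet (a minimal sketch, not from this thread; it assumes x has shape (batch, segments, channels, h, w) as in TSM's temporal shift and that fold is a plain Python int) is to build the output with torch.cat instead of in-place slice assignment, which tends to trace to ONNX more cleanly:

import torch

def temporal_shift(x, fold):
    # x: (batch, segments, channels, h, w); fold: plain Python int
    left = torch.cat((x[:, 1:, :fold],
                      torch.zeros_like(x[:, :1, :fold])), dim=1)    # shift left
    right = torch.cat((torch.zeros_like(x[:, :1, fold:2 * fold]),
                       x[:, :-1, fold:2 * fold]), dim=1)            # shift right
    keep = x[:, :, 2 * fold:]                                        # not shifted
    return torch.cat((left, right, keep), dim=2)

If fold is currently computed from a tensor, converting it to a plain Python int (or hard-coding it) before tracing avoids the tensor-to-index conversion the warning refers to.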

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Reactions: 20
  • Comments: 18 (3 by maintainers)

Top GitHub Comments

13 reactions
yesid-acm commented, Jun 16, 2021

Can anyone explain it, please? Is it an issue for prediction? How do I solve it?

7 reactions
miketrimmel commented, Sep 23, 2020

I'm facing the same warning with Python 3.7.9, pytorch 1.6.0, onnxruntime 1.4.0, onnxruntime-tools 1.4.2 when converting a BERT model.


import torch

def export_onnx_bert_model(model, onnx_model_path, max_seq_len):
    with torch.no_grad():
        inputs = {"input_ids":      torch.ones(1, max_seq_len, dtype=torch.int64),
                  "attention_mask": torch.ones(1, max_seq_len, dtype=torch.int64),
                  "token_type_ids": torch.ones(1, max_seq_len, dtype=torch.int64)}
        
        outputs = model(**inputs)
        symbolic_names = {0: "batch_size", 1: "max_seq_len"}
        torch.onnx.export(model,                                            
                          (inputs["input_ids"],                             
                           inputs["attention_mask"],
                           inputs["token_type_ids"]),                      
                          onnx_model_path,                                  
                          opset_version=11,                                 
                          do_constant_folding=True,                        
                          input_names=['input_ids',                         
                                       'input_mask',
                                       'segment_ids'],
                          output_names=['output'],                         
                          dynamic_axes={'input_ids': symbolic_names,
                                        'input_mask' : symbolic_names,
                                        'segment_ids' : symbolic_names})
        print("ONNX Model exported to {0}".format(onnx_model_path))

export_onnx_bert_model(bert_model, "bert.onnx", MAX_SEQ_LEN)

TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    position_ids = self.position_ids[:, :seq_length]

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    input_tensor.shape == tensor_shape for input_tensor in input_tensors
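These warnings are often benign when the flagged values really do come from the input shapes and the corresponding axes are declared dynamic. A quick sanity check (my own sketch, not from the thread; it assumes the export above produced bert.onnx and that onnxruntime is installed) is to run the exported model with a sequence length different from the one used for tracing:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("bert.onnx")
seq_len = 64  # deliberately different from the max_seq_len used during export
feed = {
    "input_ids":   np.ones((1, seq_len), dtype=np.int64),
    "input_mask":  np.ones((1, seq_len), dtype=np.int64),
    "segment_ids": np.ones((1, seq_len), dtype=np.int64),
}
outputs = session.run(None, feed)
print(outputs[0].shape)

If the run fails or the output shape does not follow the new sequence length, the traced value really was baked in as a constant and the offending code needs a rewrite like the one sketched above.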

Read more comments on GitHub >

Top Results From Across the Web

Converting a tensor to a Python boolean might cause the trace ...
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of...
Read more >
Converting a tensor to a Python index might cause the trace to ...
TracerWarning : Converting a tensor to a Python index might cause the trace to be incorrect...This means that the trace might not generalize...
Read more >
CompVis/stable-diffusion-v1-4 · Help with an error
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means...
Read more >
TracerWarning: Converting a tensor to a Python index might ...
TracerWarning : Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of...
Read more >
8. Basic MERA Manipulations & Optimization - quimb
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means...
Read more >
