
Trace warnings when trying to jit.trace a model


First of all, hats off for building and maintaining this. Keep up the good work.

My issue: when I try to jit.trace a model that uses this layer, I get a warning similar to this one:

dsntnn.py:47: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    return torch.linspace(first, last, length, device=device)
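The warning fires because tracing records concrete tensor operations, while any Python scalar produced mid-forward (e.g. via int(tensor) or .item()) is frozen into the graph at its trace-time value. A toy pure-Python sketch of that behaviour (an illustration only, not PyTorch internals):

```python
# Toy illustration of why a tracer bakes Python scalars in as constants.
# A "trace" records operations on tracked values; anything that leaves the
# tracked world (like calling int() on a value) becomes a fixed constant.

class Traced:
    """A value whose operations are recorded into a shared trace list."""
    def __init__(self, value, trace):
        self.value = value
        self.trace = trace

    def add(self, other):
        self.trace.append(("add", other))
        return Traced(self.value + other, self.trace)

    def to_int(self):
        # Leaving the traced world: the tracer can only record the *current*
        # value as a constant, so re-running the trace with a different
        # input will not update it.
        self.trace.append(("constant", int(self.value)))
        return int(self.value)

def model(x, trace):
    n = Traced(x, trace).add(1).to_int()  # n is now a plain Python int
    return [0] * n                        # output length baked at trace time

trace1 = []
out1 = model(3, trace1)  # traced with x=3 -> output of length 4

# Replaying the trace "with a different input" still uses the recorded
# constant, which is exactly what the TracerWarning is about:
baked = [op for op in trace1 if op[0] == "constant"][0][1]
out2 = [0] * baked
```

This mirrors what happens at dsntnn.py:47: the length argument to torch.linspace is a Python int, so the traced graph fixes it at whatever value it had for the example input.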

This also happens when exporting to ONNX: a dsntnn-based model exported with a command like the one below produces the same trace warning, which makes it impossible to load the exported model.

torch.onnx.export(model, x, "deployment/ckpts/{0}.onnx".format(model_name), export_params=False, operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)

How to reproduce:

import torch

# CoordRegressionNetwork is the dsntnn-based example network. The Variable
# wrapper is deprecated; a plain tensor with requires_grad is enough.
model = CoordRegressionNetwork(n_locations=2)
x = torch.randn(5, 3, 200, 200, requires_grad=True)
traced_script_module = torch.jit.trace(model, x)

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 11 (3 by maintainers)

Top GitHub Comments

ThejanW commented on Feb 7, 2019 (1 reaction)

No worries. Fixed this by loading the weights into a network definition that returns only the unnormalized heatmaps from its forward function, so no dsntnn functions are involved in the tracing process.

Closing the issue. Thanks!
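That workaround can be sketched roughly as follows. The backbone module here is an illustrative stand-in (a single conv), not the actual CoordRegressionNetwork; the point is that the traced forward returns only raw heatmaps, and the dsntnn post-processing stays outside the trace:

```python
import torch
import torch.nn as nn

class HeatmapBackbone(nn.Module):
    """Stand-in network whose forward returns only unnormalized heatmaps,
    so no dsntnn calls (and no tensor-to-int conversions) are traced."""
    def __init__(self, n_locations=2):
        super().__init__()
        self.conv = nn.Conv2d(3, n_locations, kernel_size=1)

    def forward(self, x):
        return self.conv(x)  # raw heatmaps only

model = HeatmapBackbone(n_locations=2).eval()
x = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, x)  # traces without the TracerWarning

heatmaps = traced(x)
# dsntnn.flat_softmax / dsntnn.dsnt would then run in eager mode on
# `heatmaps`, outside the traced graph.
```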

anibali commented on May 18, 2020 (0 reactions)

As of https://github.com/anibali/dsntnn/commit/93acc46e224f9170f2bd719f7baf8531dca177c4 tracing seems to work correctly, and is tested.


Top Results From Across the Web

  • torch.jit.trace (PyTorch 1.13 documentation): If you trace such models, you may silently get incorrect results on subsequent invocations of the model. The tracer will try to emit...
  • Converting a tensor to a Python boolean might cause the trace...: This warning occurs when one tries to torch.jit.trace models which have data-dependent control flow. This simple example should make it ...
  • Mastering TorchScript: Tracing vs Scripting, Device Pinning...: Due to how tracing can simplify model behavior, each warning should be fully understood and only then ignored (or fixed). Also, be sure...
  • TorchScript: Tracing vs. Scripting (Yuxin's Blog): Tries to convince you that torch.jit.trace should be preferred over torch.jit.script for deployment of non-trivial models.
  • RuntimeError: Caught an unknown exception! (mixing torch.jit...): I tried quantizing the following EfficientDet model by running the ... I'm wondering if vai_q_pytorch supports mixing torch.jit tracing and ...
