Torch `rearrange` throws warning about incorrect division when running `torch.jit.trace`
Describe the bug
When running torch.jit.trace on an nn.Module that contains a rearrange operation, the following warning is raised:
/home/shogg/.cache/pypoetry/virtualenvs/mldi-96vt4Weu-py3.8/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
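For reference, the difference the warning is describing only shows up for negative operands. The snippet below just restates the warning text with the replacement API it suggests; it is not taken from einops or torch internals:

import torch

a = torch.tensor(-7)
b = torch.tensor(2)

# On torch 1.9, Tensor floor division still truncates toward zero (deprecated behaviour).
print(a // b)                                   # tensor(-3) on torch 1.9
# The explicit, non-deprecated spellings suggested by the warning:
print(torch.div(a, b, rounding_mode='trunc'))   # tensor(-3)
print(torch.div(a, b, rounding_mode='floor'))   # tensor(-4)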
Two questions arise from this:
- Why is there a division step happening at all? (See the sketch after this list.)
- What is the potential impact?
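A plausible explanation, offered here as an assumption about einops internals rather than a verified reading of its source: rearrange works out axis lengths from the input's shape, and any length it has to infer is obtained by integer-dividing a known total by the product of the other lengths. Under torch.jit.trace those shape values can be recorded as 0-dim tensors, so the // lands in Tensor.__floordiv__ and emits the deprecation warning. Roughly:

import torch

# Hypothetical helper, named only for illustration: infer one axis length
# from a composed total and the product of the lengths already known.
def infer_axis_length(total, known_product):
    # With plain Python ints this is exact integer division; when `total` is a
    # traced 0-dim tensor, // dispatches to torch.floor_divide and warns on 1.9.
    return total // known_product

print(infer_axis_length(376 * 16, 16))  # 376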
Reproduction steps
Code to reproduce this is as follows:
import torch
import torch.nn as nn
import einops

class TestModule(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A pure axis permutation; no axis lengths are supplied by the caller.
        return einops.rearrange(x, '1 i f x y -> 1 i x y f')

test_tensor = torch.rand([1, 2, 376, 16, 16])
test_module = TestModule()
output_module = torch.jit.trace(test_module, test_tensor)
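As for the potential impact: shape arithmetic only involves non-negative integers, for which trunc and floor rounding agree, so the change in rounding mode cannot alter results here. A quick sanity check building on the reproduction above (this check is an addition, not part of the original report):

# The warning is emitted during tracing, but the traced module should still
# compute the same permutation as eager execution.
eager_out = test_module(test_tensor)
traced_out = output_module(test_tensor)
print(torch.equal(eager_out, traced_out))   # expected: True
print(traced_out.shape)                     # torch.Size([1, 2, 16, 16, 376])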
Expected behavior
rearrange shouldn’t be throwing this warning.
Your platform
python==3.8.0
torch==1.9.0
einops==0.3.0
Top GitHub Comments
Ok, the pytorch issue is open - hope this helps
The torch team decided to follow the normal warn-then-change behavior policy.
Einops wasn’t affected and won’t be affected by this torch transition. Warnings are, however, still popping up during tracing (and will be for a while).
Leaving this issue open for those who may encounter this in the future.
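For anyone who wants a quieter trace in the meantime, one option (a sketch using the standard-library warnings module, not something prescribed in this issue) is to silence that specific UserWarning around the trace call:

import warnings
import torch

# Silence only the floor_divide deprecation message while tracing;
# other warnings still surface normally.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="floor_divide is deprecated")
    output_module = torch.jit.trace(test_module, test_tensor)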