Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Torch `rearrange` throws warning about incorrect division when running `torch.jit.trace`

See original GitHub issue

Describe the bug

When running `torch.jit.trace` on an `nn.Module` that contains a `rearrange` operation, the following warning is raised:

/home/shogg/.cache/pypoetry/virtualenvs/mldi-96vt4Weu-py3.8/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)

Two questions arise from this (see the sketch after the list):

  1. Why is there a division step happening?
  2. What is the potential impact?
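What seems to be happening (a sketch based on the traceback above, not on einops internals): under `torch.jit.trace` the values being divided are tensor-valued, and an integer `//` on a tensor goes through `Tensor.__floordiv__`, which in torch 1.9 still calls the deprecated `torch.floor_divide` and emits this warning. The replacement named in the message itself is `torch.div` with an explicit `rounding_mode`; since axis sizes are non-negative, trunc and floor rounding agree, so the traced numbers should not be affected (consistent with the maintainer's comment further down). A minimal, einops-free reproduction of just the warning:

import torch  # torch==1.9.0, as in the report

a = torch.tensor(376)                         # stand-in for a tensor-valued size seen during tracing
b = a // 2                                    # Tensor.__floordiv__ -> torch.floor_divide -> UserWarning
c = torch.div(a, 2, rounding_mode='floor')    # warning-free replacement quoted in the message
assert int(b) == int(c) == 188                # for non-negative sizes, trunc and floor give the same result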

Reproduction steps

Code to reproduce this is as follows:

import einops
import torch
import torch.nn as nn

class TestModule(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # a pure axis permutation; no axis is split or merged
        return einops.rearrange(x, '1 i f x y -> 1 i x y f')

test_tensor = torch.rand([1, 2, 376, 16, 16])
test_module = TestModule()
output_module = torch.jit.trace(test_module, test_tensor)
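As a quick sanity check on the second question (impact), one can compare the traced module against eager execution; this check is a sketch, not part of the original report, and reuses the objects defined above:

# hypothetical check: the traced module should reproduce the eager output exactly
assert torch.equal(output_module(test_tensor), test_module(test_tensor))

Note that `torch.jit.trace` already runs a similar comparison by default (its `check_trace` argument), so a real mismatch would normally be reported at trace time.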

Expected behavior

`rearrange` shouldn't be throwing this warning.

Your platform

python==3.8.0
torch==1.9.0
einops==0.3.0

Issue Analytics

  • State: open
  • Created 2 years ago
  • Comments: 9 (5 by maintainers)

Top GitHub Comments

1 reaction
StephenHogg commented, Aug 4, 2021

Ok, the pytorch issue is open - hope this helps

0 reactions
arogozhnikov commented, Nov 9, 2022

The torch team decided to follow the normal warn-then-change deprecation policy.

Einops wasn't affected and won't be affected by this torch transition. Warnings are, however, still popping up during tracing (and will be for a while).

Leaving this issue open for those who may encounter this in the future.
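For anyone who hits this in the meantime and wants a quiet trace log, one option is to filter just this deprecation message around the trace call using Python's standard warnings module. This is a sketch, not an einops or torch feature, and it reuses the module and tensor from the reproduction above:

import warnings
import torch

with warnings.catch_warnings():
    # suppress only the floor_divide deprecation message; other warnings still surface
    warnings.filterwarnings("ignore", message="floor_divide is deprecated")
    traced = torch.jit.trace(test_module, test_tensor)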

Read more comments on GitHub >

Top Results From Across the Web

torch.jit.trace — PyTorch 1.13 documentation
Tracing only records operations done when the given function is run on the given tensors. Therefore, the returned ScriptModule will always run the... (illustrated in the sketch after these results)

torch.jit - Enchanter documentation
"To disable trace checking, pass check_trace=False to torch.jit.trace()" ...

Python API: test/test_jit.py Source File - Caffe2
from torch.jit.annotations import BroadcastingList2, BroadcastingList3 ... # Running JIT passes requires that we own the graph (with a shared_ptr).

PyTorch 1.7.0 Now Available | Exxact Blog
Such case is not properly handled by the autograd and can lead to internal errors or wrong gradients. So, as a side effect...

torch.Tensor — PyTorch master documentation
dtype, consider using the to() method on the tensor. Warning. Current implementation of torch.Tensor introduces memory overhead, thus it might lead to unexpectedly...
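To illustrate the point from the torch.jit.trace documentation above (tracing records only the operations executed for the example inputs), here is a small sketch with a data-dependent branch; the function and inputs are made up for illustration:

import torch

def f(x: torch.Tensor) -> torch.Tensor:
    # data-dependent branch: tracing records only the path taken for the example input
    return x * 2 if bool(x.sum() > 0) else x - 1

traced = torch.jit.trace(f, torch.ones(3))   # the "* 2" branch is baked into the trace (a TracerWarning flags this)
print(traced(-torch.ones(3)))                # still multiplies by 2; the other branch was never recorded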
