torch._dynamo.exc.Unsupported: dynamic shapes: arange
See original GitHub issue

🐛 Describe the bug
While giving PyTorch 2 a try on OpenNMT-py, using these two lines:
rawmodel = build_model(model_opt, opt, vocabs, checkpoint)
model = torch.compile(rawmodel, fullgraph=True, backend='nvprims_aten')
Error logs
Getting this:
from user code:
File "/home/vincent/nlp/OpenNMT-py/onmt/encoders/transformer.py", line 126, in forward
mask = ~sequence_mask(src_len).unsqueeze(1)
File "/home/vincent/nlp/OpenNMT-py/onmt/utils/misc.py", line 58, in sequence_mask
return (torch.arange(0, max_len, device=lengths.device)
Set torch._dynamo.config.verbose=True for more information
You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True
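For context, the sequence_mask helper named in the traceback builds its mask from a data-dependent length, roughly like this (a lightly abridged sketch of the OpenNMT-py code; the key point is that max_len defaults to lengths.max(), so the arange length depends on the batch contents and Dynamo cannot treat it as static):

import torch

def sequence_mask(lengths, max_len=None):
    # When max_len is not given, it comes from the data,
    # so the arange below has a dynamic (data-dependent) length.
    batch_size = lengths.numel()
    max_len = max_len or lengths.max()
    return (torch.arange(0, max_len, device=lengths.device)
            .type_as(lengths)
            .repeat(batch_size, 1)
            .lt(lengths.unsqueeze(1)))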
Minified repro
No response
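Although no minified repro was posted, the failing pattern can be sketched in a few standalone lines (a hypothetical example assuming PyTorch 2.0-era Dynamo semantics, not taken from the issue):

import torch

def f(lengths):
    # The end of the arange comes from the data, not from a constant.
    return torch.arange(0, lengths.max(), device=lengths.device)

compiled = torch.compile(f, fullgraph=True)
compiled(torch.tensor([3, 5, 2]))  # expected to hit "dynamic shapes: arange" here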
Issue Analytics
- Created: 9 months ago
- Comments: 14 (8 by maintainers)
Top GitHub Comments
because of this: https://github.com/pytorch/pytorch/issues/90170
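One direction the linked issue points at is Dynamo's experimental dynamic-shape support; a hedged sketch (the config flag was experimental in nightlies of that period, and may only move the graph break rather than remove it):

import torch._dynamo

torch._dynamo.config.dynamic_shapes = True  # experimental at the time
model = torch.compile(rawmodel, fullgraph=True, backend='nvprims_aten')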
Not so easy to run a minifier, but I will try.
Okay, I managed to tweak Triton to recognize ptxas 7.8 / CUDA 11.8, and inductor mode does not trigger any error. However, nothing seems to be happening: I get a loop with both the TypedStorage warning and:

[2022-12-09 13:33:41,820] torch._inductor.lowering: [WARNING] using triton random, expect difference from eager

No log from my training loop. nvidia-smi shows some activity (both RAM and utilization), but without any other message it is difficult to go further.
EDIT: I’ll try module by module to see where the problem could be.
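If the Triton random-number warning above matters for parity with eager, inductor has a fallback flag (hedged: fallback_random is an inductor config option in these builds, and it trades speed for eager-style RNG):

import torch._inductor.config

# Hedged sketch: set before compiling; uses eager-style RNG instead of
# Triton's, which is slower but should match eager results more closely.
torch._inductor.config.fallback_random = True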