Internal assert running model on Intel Arc GPU (1.10+gpu)
import torch
import time
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device with PyTorch

# Load the TorchScript face model on CPU first
x = torch.jit.load('/usr/local/share/Imagus/face_model_137.dat', map_location='cpu')
inp = torch.randn(64, 3, 112, 112)

# Move both the model and the input to the Intel GPU
x = x.to('xpu')
inp = inp.to('xpu')

with torch.no_grad():
    x(inp)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: isTuple() INTERNAL ASSERT FAILED at "/home/guizili/zhongruijie/frameworks.ai.pytorch.private-gpu/aten/src/ATen/core/ivalue_inl.h":1400, please report a bug to PyTorch. Expected Tuple but got String
This is the same model mentioned in the 'maxpool_2d' issue.
The assert fires as soon as either the input or the model is converted to XPU, and it happens regardless of whether ipex.optimize is run.
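For reference, the ipex.optimize variant looks roughly like this (a minimal sketch of where the optimize call sits; the exact arguments aren't shown above, and the assert is unchanged either way):

x = torch.jit.load('/usr/local/share/Imagus/face_model_137.dat', map_location='cpu')
x = x.to('xpu')
x = ipex.optimize(x)  # makes no difference to the failure
inp = torch.randn(64, 3, 112, 112).to('xpu')
with torch.no_grad():
    x(inp)  # RuntimeError: isTuple() INTERNAL ASSERT FAILED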
With other models that run fine, I've noticed the same error appears if only the input or only the model is moved to XPU, and it goes away once both are converted. That makes me suspect this model did not get converted properly.
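To illustrate the device-placement behavior I mean, here is a minimal sketch using a stand-in torchvision model rather than one of my actual models (an assumption for illustration only):

import torch
import intel_extension_for_pytorch as ipex  # registers the 'xpu' device
import torchvision

model = torchvision.models.resnet18().eval()
inp = torch.randn(1, 3, 224, 224)

# Moving only the model (or only the input) to XPU raises a device-mismatch
# error, as expected:
#   model = model.to('xpu'); model(inp)  -> error

# Moving both works for models unaffected by this bug:
model = model.to('xpu')
inp = inp.to('xpu')
with torch.no_grad():
    out = model(inp)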
The model uses a feature that was broken in MKLDNN before 1.12, so I can't trace it on 1.10. I'm not sure whether that also causes an issue on XPU, but I was waiting for the 1.13 release to see if it just fixes this.
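If it helps with reproduction, regenerating the TorchScript file on a newer PyTorch would look roughly like this (a sketch; FaceModel below is a hypothetical stand-in, since the real architecture and weights aren't shown in this thread):

import torch
import torch.nn as nn

# Hypothetical stand-in for the real face model
class FaceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 137)

    def forward(self, x):
        x = self.pool(self.conv(x)).flatten(1)
        return self.fc(x)

model = FaceModel().eval()

# Re-trace on PyTorch >= 1.12, where the MKLDNN-dependent feature is fixed
example = torch.randn(1, 3, 112, 112)
traced = torch.jit.trace(model, example)
traced.save('face_model_137.dat')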
Oh I see, the links you provided in #261 for your model are no longer available. Would you also be able to share some information on how you generated the problematic model, since you mention that some of your other models don't run into this issue?