Expected Tuple but got GenericDict
Description
Running a PyTorch script model in Triton throws the following error. The same script model runs without any problem in PyTorch outside of Triton. I think the problem is related to the fact that the model returns a dict. Is there a way to work around this?
I0920 07:13:48.044669 418 libtorch_backend.cc:776] isTuple() INTERNAL ASSERT FAILED at "/opt/tritonserver/include/torch/ATen/core/ivalue_inl.h":842, please report a bug to PyTorch. Expected Tuple but got GenericDict
Exception raised from toTuple at /opt/tritonserver/include/torch/ATen/core/ivalue_inl.h:842 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f213a82094b in /opt/tritonserver/lib/pytorch/libc10.so)
frame #1: <unknown function> + 0x2802c5 (0x7f21d9abe2c5 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #2: <unknown function> + 0x286e4d (0x7f21d9ac4e4d in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #3: <unknown function> + 0x98000 (0x7f21d98d6000 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #4: <unknown function> + 0xafaf7 (0x7f21d98edaf7 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #5: <unknown function> + 0xbd6df (0x7f21d87da6df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #6: <unknown function> + 0x76db (0x7f21d96266db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x3f (0x7f21d7e97a3f in /lib/x86_64-linux-gnu/libc.so.6)
Triton Information
What version of Triton are you using? nvcr.io/nvidia/tritonserver:20.08-py3
Are you using the Triton container or did you build it yourself? Container
To Reproduce
Steps to reproduce the behavior.
Do not know yet how to narrow down the issue.
Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).
The framework is PyTorch 1.6.
The config.pbtxt file is:
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: -1
    dims: 3
    dims: -1
    dims: -1
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: -1
    dims: 100
    dims: -1
  },
  {
    name: "output__1"
    data_type: TYPE_FP32
    dims: -1
    dims: 100
    dims: 4
  }
]
Expected behavior
Triton should load and run the model without throwing an exception.
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 2
- Comments: 7 (5 by maintainers)
Top GitHub Comments
The LibTorch (PyTorch) backend operates under the assumption that the value returned by the model is a tuple, not a GenericDict. We don't have a plan (at the moment) to support non-tuple return values. I'd recommend building a wrapper around your model and tracing that, to produce a version of your model whose returned value is a tuple instead of a dictionary.
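The wrapper approach above can be sketched as follows. This is a minimal illustration, not code from the issue: `DictModel` is a stand-in for the user's dict-returning model, and the output keys (`"scores"`, `"boxes"`) are assumptions chosen to match the two-output config above.

```python
import torch

class DictModel(torch.nn.Module):
    """Stand-in for a model whose forward() returns a dict (hypothetical)."""
    def forward(self, x):
        return {"scores": x.sum(dim=-1), "boxes": x * 2.0}

class TupleWrapper(torch.nn.Module):
    """Unpacks the inner model's dict into a fixed-order tuple,
    which is what the LibTorch backend expects."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        # The key order here fixes the output order: output__0, output__1.
        return out["scores"], out["boxes"]

model = TupleWrapper(DictModel()).eval()
example = torch.rand(1, 3, 8, 8)
traced = torch.jit.trace(model, example)
traced.save("model.pt")  # place this file in the Triton model repository
```

The key point is that the wrapper, not the original model, is what gets traced and saved, so the serialized TorchScript graph ends with a tuple construction rather than a dict.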
Hi @CoderHam, I have a related question: does the TensorFlow backend (SavedModel) support generic dicts? Thanks.