
Expected Tuple but got GenericDict

See original GitHub issue

Description Running a PyTorch script model in Triton throws the following error. The script model runs outside Triton in PyTorch without any problem. I think the problem is related to the fact that the model is returning a dict. Is there a way to work around this?

I0920 07:13:48.044669 418 libtorch_backend.cc:776] isTuple() INTERNAL ASSERT FAILED at "/opt/tritonserver/include/torch/ATen/core/ivalue_inl.h":842, please report a bug to PyTorch. Expected Tuple but got GenericDict
Exception raised from toTuple at /opt/tritonserver/include/torch/ATen/core/ivalue_inl.h:842 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f213a82094b in /opt/tritonserver/lib/pytorch/libc10.so)
frame #1: <unknown function> + 0x2802c5 (0x7f21d9abe2c5 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #2: <unknown function> + 0x286e4d (0x7f21d9ac4e4d in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #3: <unknown function> + 0x98000 (0x7f21d98d6000 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #4: <unknown function> + 0xafaf7 (0x7f21d98edaf7 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #5: <unknown function> + 0xbd6df (0x7f21d87da6df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #6: <unknown function> + 0x76db (0x7f21d96266db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #7: clone + 0x3f (0x7f21d7e97a3f in /lib/x86_64-linux-gnu/libc.so.6)

Triton Information What version of Triton are you using? nvcr.io/nvidia/tritonserver:20.08-py3

Are you using the Triton container or did you build it yourself? Container

To Reproduce Steps to reproduce the behavior:
I do not know yet how to narrow down the issue.

Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).

Framework is PyTorch 1.6, and the config.pbtxt file is:

platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: -1
    dims: 3
    dims: -1
    dims: -1
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: -1
    dims: 100
    dims: -1
  },
  {
    name: "output__1"
    data_type: TYPE_FP32
    dims: -1
    dims: 100
    dims: 4
  }
]

Expected behavior The server should not throw an exception.

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Reactions: 2
  • Comments: 7 (5 by maintainers)

Top GitHub Comments

1 reaction
CoderHam commented, Sep 20, 2020

The libtorch (PyTorch) backend operates under the assumption that the value returned from the model is a tuple, not a GenericDict. We don’t have a plan (at the moment) to support non-tuple return values. I’d recommend building a wrapper around your model and tracing it to produce a version of your model whose returned response is a tuple instead of a dictionary.
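The suggested workaround can be sketched roughly as follows. `DictModel` is a hypothetical stand-in for the reporter's model (its output names `"boxes"` and `"scores"` and all shapes are assumptions, not taken from the issue); the wrapper simply unpacks the dict into a fixed-order tuple before tracing:

```python
# Sketch of the wrap-and-trace workaround, assuming a model whose
# forward() returns a dict of tensors. All names/shapes are illustrative.
import torch
import torch.nn as nn

class DictModel(nn.Module):
    """Hypothetical stand-in for a model that returns a dict."""
    def forward(self, x):
        return {"boxes": x * 2.0, "scores": x.sum(dim=-1)}

class TupleWrapper(nn.Module):
    """Unpacks the dict into a fixed-order tuple, which the
    Triton libtorch backend expects as the model output."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        # Order here must match output__0, output__1, ... in config.pbtxt.
        return out["boxes"], out["scores"]

model = TupleWrapper(DictModel()).eval()
example = torch.rand(1, 3, 8, 8)
traced = torch.jit.trace(model, example)
traced.save("model.pt")  # place under <model_repository>/<model_name>/1/
```

Because the dict only exists inside the wrapped forward pass, tracing records a graph whose final output is a plain tuple, and the positional order of the tuple elements is what maps onto the `output__N` entries in config.pbtxt.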

0 reactions
scamianbas commented, Jun 29, 2021

The libtorch (PyTorch) backend operates under the assumption that the value returned from the model is a tuple, not a GenericDict. We don’t have a plan (at the moment) to support non-tuple return values. I’d recommend building a wrapper around your model and tracing it to produce a version of your model whose returned response is a tuple instead of a dictionary.

Hi @CoderHam, I have a related question: does the TensorFlow backend (savedmodel) support generic dicts? Thanks.

