Unclear torch model failure message
Description
The above message was observed in the output log. Wondering what is causing it and how to fix it.
Triton Information
What version of Triton are you using? nvcr.io/nvidia/tritonserver:20.08-py3
Are you using the Triton container or did you build it yourself? The container.
To Reproduce
Not really sure how to reproduce.
Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).
Framework : pytorch_libtorch
config.pbtxt:
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 3, -1, -1 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1, 100 ]
  },
  {
    name: "output__1"
    data_type: TYPE_FP32
    dims: [ 1, 100 ]
  },
  {
    name: "output__2"
    data_type: TYPE_FP32
    dims: [ 1, 100, 4 ]
  }
]
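For context on the config above, here is a minimal sketch of how these dims are interpreted (the shape_matches helper is hypothetical, not a Triton API): with max_batch_size: 0 there is no implicit batch dimension, so the declared dims are matched against the full tensor shape, and -1 marks a variable-sized axis.

```python
def shape_matches(config_dims, tensor_shape):
    """Return True if a concrete tensor shape satisfies the config dims.

    A dim of -1 in the config accepts any size on that axis; with
    max_batch_size: 0, no extra batch dimension is prepended.
    """
    if len(config_dims) != len(tensor_shape):
        return False
    return all(d == -1 or d == s for d, s in zip(config_dims, tensor_shape))

# input__0 is declared as [3, -1, -1]: channels fixed at 3, H and W variable.
print(shape_matches([3, -1, -1], [3, 224, 224]))     # True
print(shape_matches([3, -1, -1], [1, 3, 224, 224]))  # False: no batch dim
```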
Expected behavior
There should be no reference to the CPU in the log.
Issue Analytics
- State:
- Created 3 years ago
- Comments: 15 (8 by maintainers)
Top GitHub Comments
Did you use the --gpus=1 flag when running the container?
Fixed by https://github.com/triton-inference-server/server/pull/2173
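As a sketch of what that comment suggests (the repository path and port mappings below are assumptions, not taken from the issue), the container would typically be launched so that Docker exposes the GPU to Triton; without --gpus, the server may fall back to CPU, which would explain the CPU reference in the log.

```shell
# Launch Triton with GPU access enabled via Docker's --gpus flag.
# /path/to/model_repository is a placeholder for your local model repository.
docker run --gpus=1 --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:20.08-py3 \
  tritonserver --model-repository=/models
```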