
Warning: Explicit batch network detected and batch size specified, use enqueue without batch size instead.


Description
I created a TensorRT plan using NetworkDefinitionCreationFlags::kEXPLICIT_BATCH, because the current master branch of the TensorRT ONNX parser does not allow an implicit batch dimension. None of the input dimensions contain wildcards.

TRTIS 19.10 can load the model, but on every inference the TensorRT warning Explicit batch network detected and batch size specified, use enqueue without batch size instead. is logged. I suppose this is because IExecutionContext::enqueue is used instead of IExecutionContext::enqueueV2. Is this expected behavior, or am I doing something wrong?

Side question: Disregarding the warning, does this carry a performance penalty? Should I try to make my model use an implicit batch dimension when not using dynamic input shapes?
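For context, the distinction the warning points at can be sketched with a minimal mock. This is plain Python; MockContext and its methods are hypothetical stand-ins, and only the method names mirror the real IExecutionContext::enqueue / enqueueV2 split (implicit-batch calls take a batch size, explicit-batch calls do not):

```python
# Illustrative mock of the two TensorRT execution paths (NOT the real API).
# With an implicit-batch engine, enqueue() takes a caller-supplied batch size;
# with an explicit-batch engine, the batch is part of the binding shapes, so
# the V2 call takes no batch size and passing one triggers the warning.
import warnings


class MockContext:
    """Hypothetical stand-in for IExecutionContext."""

    def __init__(self, explicit_batch):
        self.explicit_batch = explicit_batch

    def enqueue(self, batch_size, bindings):
        # Legacy, implicit-batch path: batch size passed by the caller.
        if self.explicit_batch:
            warnings.warn(
                "Explicit batch network detected and batch size specified, "
                "use enqueue without batch size instead."
            )
        return True

    def enqueue_v2(self, bindings):
        # Explicit-batch path: batch size is baked into the tensor shapes.
        return True


ctx = MockContext(explicit_batch=True)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    ctx.enqueue(8, bindings=[])    # reproduces the warning
    ctx.enqueue_v2(bindings=[])    # silent: the preferred call
print(len(caught))  # 1
```

The sketch only illustrates why a server that always calls the batch-size-taking entry point would log this warning once per inference on an explicit-batch engine.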

TRTIS Information
What version of TRTIS are you using? 19.10

Are you using the TRTIS container or did you build it yourself? NGC container 19.10-py3

To Reproduce
Steps to reproduce the behavior: create a model with NetworkDefinitionCreationFlags::kEXPLICIT_BATCH.

Expected behavior
No warning is emitted.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

3 reactions
mrjackbo commented, Nov 17, 2019

Is there a reason you are using a fixed batch dimension in your model?

Yes, I misused the current version of NvOnnxParser. I have now managed to convert my model to an explicit batch dimension of -1, and everything works as expected.

TRTIS (and TensorRT) could document the relation between max_batch_size and kEXPLICIT_BATCH more clearly. What does it mean to have max_batch_size = n but an explicit batch dimension of 1? In this situation I was able to send batches of size n, but only the first element of the batch was evaluated correctly; the other n-1 elements came back as 0.
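The mismatch described above can be illustrated with a small simulation. This is pure Python with hypothetical names (run_fixed_batch_engine is not a real TRTIS or TensorRT function); it mimics an engine whose explicit batch dimension is baked in at 1 while the server still accepts batches of size n:

```python
# Hypothetical simulation of the reported mismatch: the engine's explicit
# batch dimension is fixed at 1, so a batch of n inputs only has its first
# element evaluated; the remaining outputs stay zero.

ENGINE_BATCH_DIM = 1  # explicit batch dimension baked into the plan


def run_fixed_batch_engine(batch):
    """Evaluate a toy model (x -> 2 * x) honoring the fixed batch dim."""
    outputs = [0] * len(batch)                  # output buffer, zero-initialized
    for i in range(min(ENGINE_BATCH_DIM, len(batch))):
        outputs[i] = 2 * batch[i]               # only the first element runs
    return outputs


print(run_fixed_batch_engine([3, 5, 7]))  # [6, 0, 0]
```

With a dynamic batch dimension (-1) instead, the engine's shape would cover the whole incoming batch and every element would be evaluated, which matches the resolution in this comment.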

In the end, I want to use explicit, but dynamic batch dimension, so I am ok with closing this issue. Thanks for the help.

1 reaction
cong commented, Jul 19, 2021

please use executeV2
