
Allow batch size 0 in ensembles

See the original GitHub issue (#968).

This is related to #443 and #509.

The use case is as follows: I want to take the output of an object detector A, feed it to a custom backend B, which creates crops from the original image, and feed those crops to a further model C.

This works wonderfully using TRTIS’ ensemble scheduling, with one exception: the output of the custom backend has shape [N,3,h,w], where N is the number of detected bounding boxes. If there is no detection, I would like to write an “empty” tensor as output, e.g. a tensor of shape [0,3,h,w], skip the evaluation of model C, and set the outputs of C to batch size 0 as well. In the custom backend I request such a buffer (of size 0) from TRTIS, but at inference time TRTIS complains that the input to model C is too small.
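
For concreteness, here is roughly what such a pipeline looks like as a Triton ensemble configuration (config.pbtxt). This is only a sketch: the model names, tensor names, and mappings are hypothetical placeholders, not taken from the issue.

    ensemble_scheduling {
      step [
        {
          model_name: "detector_A"   # object detector
          model_version: -1
          input_map  { key: "IMAGE" value: "IMAGE" }
          output_map { key: "BOXES" value: "boxes" }
        },
        {
          model_name: "cropper_B"    # custom backend, emits [N,3,h,w]
          model_version: -1
          input_map  { key: "IMAGE" value: "IMAGE" }
          input_map  { key: "BOXES" value: "boxes" }
          output_map { key: "CROPS" value: "crops" }
        },
        {
          model_name: "model_C"      # TensorRT plan consuming the crops
          model_version: -1
          input_map  { key: "INPUT" value: "crops" }
          output_map { key: "OUTPUT" value: "OUTPUT" }
        }
      ]
    }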

My workaround is to always write at least one (dummy) crop and simply ignore the output of C if no bounding boxes were found. But this is clearly not optimal.
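
Below is a minimal sketch of that workaround in Python with numpy, for a hypothetical cropping backend; the function and tensor layout are illustrative only, and the real custom backend would go through the TRTIS C API. When no boxes are found, it emits a single zero-filled dummy crop together with a detection count, so that model C always receives a batch of at least 1 and downstream consumers know to discard C’s output.

    import numpy as np

    def crop_and_resize(image, box, crop_h, crop_w):
        # Naive nearest-neighbour crop-and-resize; image is [3, H, W],
        # box = (x0, y0, x1, y1) in pixels, assumed non-degenerate.
        x0, y0, x1, y1 = box
        patch = image[:, y0:y1, x0:x1]
        ys = np.linspace(0, patch.shape[1] - 1, crop_h).astype(int)
        xs = np.linspace(0, patch.shape[2] - 1, crop_w).astype(int)
        return patch[:, ys][:, :, xs]

    def make_crops(image, boxes, crop_h, crop_w):
        # Workaround: never return a zero-sized batch. If there are no
        # detections, return one dummy crop plus count == 0 so that the
        # consumer can ignore whatever model C computes from it.
        if len(boxes) == 0:
            dummy = np.zeros((1, 3, crop_h, crop_w), dtype=np.float32)
            return dummy, np.int32(0)
        crops = np.stack([crop_and_resize(image, b, crop_h, crop_w) for b in boxes])
        return crops.astype(np.float32), np.int32(len(boxes))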

Is a fix for this possible/in the pipeline?

Edit: I should have said that model C is a TensorRT plan with an optimization profile whose minimum input shape is [1,3,h,w]. TensorRT understandably does not allow a minimum input shape of [0,3,h,w].
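
For reference, this is roughly how such a profile is declared with the TensorRT Python API; the input tensor name and the 224×224 spatial sizes are placeholders. The minimum of the dynamic batch dimension must be at least 1, which is why a zero-sized batch can never satisfy the profile.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    config = builder.create_builder_config()

    # The first (batch) dimension is dynamic, but its minimum must be >= 1,
    # so an input of shape [0, 3, h, w] can never match this profile.
    profile = builder.create_optimization_profile()
    profile.set_shape("input",
                      min=(1, 3, 224, 224),
                      opt=(8, 3, 224, 224),
                      max=(32, 3, 224, 224))
    config.add_optimization_profile(profile)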

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 20 (5 by maintainers)

Top GitHub Comments

1 reaction
NotAnyMike commented, Jun 1, 2022

Has something changed lately on this topic? I have faced this issue several times, and it is actually stopping us from fully moving to Triton.

1 reaction
deadeyegoodwin commented, Dec 19, 2019

That would be very specific behavior for TRTIS to implement… and then we would have to construct some appropriate outputs even though the model itself didn’t run. It is unlikely that this is something we would put in the ensembler. I think your current workaround is reasonable given the TensorRT limitation of not allowing zero-sized dimensions.

