Questions about ragged batching of higher dimensional tensors
After reading the ragged batching documentation, a couple of things still remain unclear to me.
The example provided is quite helpful, but I would like to know how to adapt it to my situation, in which I have higher dimensional tensors. Say that, like in the example, I send 3 requests, but instead of shapes [1, 3], [1, 4], and [1, 5], I have shapes [1, 3, 32, 32, 1], [1, 4, 32, 32, 1], and [1, 5, 32, 32, 1].
- Is ragged batching of higher dimensional tensors like this supported?
- How will the examples then be batched? Will they be concatenated along the first (batch) dimension into [12, 32, 32, 1]? Or will all values be concatenated and flattened into one big 1-dimensional tensor of shape [12*32*32*1], in which case should I modify my model to reshape the ragged input tensor back to [12, 32, 32, 1]? (A configuration sketch illustrating how such an input could be declared follows below.)
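For concreteness, here is a minimal sketch of how such a ragged input might be declared in a Triton model configuration. It assumes the fixed 32x32x1 trailing dimensions are folded into a single variable dimension; the tensor name INPUT0 and target name INDEX are illustrative placeholders, and BATCH_ACCUMULATED_ELEMENT_COUNT is one of the batch input kinds described in the ragged batching documentation:

```
max_batch_size: 16
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ -1 ]              # ragged: per-request element count varies
    allow_ragged_batch: true
  }
]
batch_input [
  {
    # running element count after each request, so the model can
    # recover the per-request boundaries from the flattened buffer
    kind: BATCH_ACCUMULATED_ELEMENT_COUNT
    target_name: "INDEX"
    data_type: TYPE_FP32
    source_input: "INPUT0"
  }
]
```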
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I agree that your suggestion can solve this particular higher dimensional tensor scenario with the existing ragged batch expressiveness, but, as you mentioned, it is not a generic solution, since it cannot properly describe tensors with multiple variable dimensions. And since there is already a plan to add the new kind of batch input, I am inclined to keep the existing behavior and let the new batch input address multi-dimensional ragged batching.
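As a minimal, hypothetical sketch of the workaround discussed above (handling this particular case with the existing ragged batching support): assuming the model receives the ragged batch as one flattened buffer together with a BATCH_ACCUMULATED_ELEMENT_COUNT batch input, the per-request tensors could be recovered as follows. The helper name unbatch_ragged and the 32x32x1 item shape are illustrative assumptions, not part of Triton's API:

```python
import numpy as np

def unbatch_ragged(flat_input, accumulated_counts, item_shape=(32, 32, 1)):
    """Split a flattened ragged batch back into per-request tensors.

    flat_input         : 1-D buffer assembled from all batched requests
    accumulated_counts : running element count after each request, e.g.
                         [3*1024, 7*1024, 12*1024] for the 3/4/5 example,
                         since each item has 32*32*1 = 1024 elements
    item_shape         : fixed trailing dimensions (assumed here)
    """
    requests = []
    start = 0
    for end in np.asarray(accumulated_counts, dtype=np.int64):
        # each request contributed (end - start) elements; restore its
        # variable leading dimension with a reshape
        requests.append(flat_input[start:end].reshape(-1, *item_shape))
        start = end
    return requests

# If only the concatenated view [12, 32, 32, 1] is needed, a single reshape
# of the whole buffer suffices, because the trailing dims are fixed:
# batched = flat_input.reshape(-1, 32, 32, 1)
```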
OK, thanks for sharing your insights and plans for future support of ragged batching of higher dimensional tensors.
I will close this issue now.