Problem with function: view()
When converting, the following debug output and TensorRT error appear:

```
2: torch.Size([4, 256]) (256,)
t-->trt: (256,)
[TensorRT] ERROR: (Unnamed Layer* 177) [Shuffle]: uninferred dimensions are not an exact divisor of input dimensions, so inferred dimension cannot be calculated
torch.Size([1, 4, 256]) (0)
t-->trt: ()
```
When I run view() to reshape the tensor from [4, 256] to [1, 4, 256], something goes wrong: t._trt.shape collapses to zero dimensions!
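For context, TensorRT's Shuffle (reshape) layer infers at most one unknown output dimension as the input volume divided by the product of the known output dimensions, and it reports exactly this error when that division is not exact. Note that the `_trt` tensor here has shape `(256,)` rather than `(4, 256)`: the batch dimension is handled implicitly by torch2trt, which is why the reshape arithmetic no longer works out. A self-contained sketch of the inference rule (`infer_reshape` is a hypothetical helper for illustration, not part of torch2trt):

```python
from functools import reduce
from operator import mul

def infer_reshape(input_shape, output_shape):
    """Mimic TensorRT's inference of a single unknown (-1) reshape dimension.

    Returns the fully resolved output shape, or raises ValueError when the
    unknown dimension is not an exact divisor of the input volume -- the
    same condition the Shuffle layer error above complains about.
    """
    volume = reduce(mul, input_shape, 1)
    known = reduce(mul, (d for d in output_shape if d != -1), 1)
    if -1 not in output_shape:
        if known != volume:
            raise ValueError("output volume does not match input volume")
        return tuple(output_shape)
    if volume % known != 0:
        raise ValueError(
            "uninferred dimension is not an exact divisor of input dimensions")
    inferred = volume // known
    return tuple(inferred if d == -1 else d for d in output_shape)

print(infer_reshape((4, 256), (-1, 4, 256)))  # (1, 4, 256): 1024 elements split exactly

try:
    # the _trt tensor has only 256 elements once the batch dim becomes implicit
    infer_reshape((256,), (-1, 4, 256))
except ValueError as e:
    print("failed:", e)
```

With the full torch shape `(4, 256)` the unknown dimension resolves to 1, but against the batch-stripped `(256,)` the 1024 requested elements cannot be carved out of 256, matching the error in the log.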
The relevant code, in torch2trt.py:

```python
def add_missing_trt_tensors(network, tensors):
    """Creates missing TensorRT tensors as constants and attaches them to the Torch Tensors"""
    trt_tensors = [None] * len(tensors)

    dtype = check_torch_dtype(*tensors)

    for i, t in enumerate(tensors):
        trt_tensor = None

        # GET TRT TENSOR (OR CREATE TRT CONSTANT)

        # get tensor w/ _trt
        # or... add constant for scalar primitive
        if isinstance(t, float) or isinstance(t, int):
            shape = (1,)
            scalar = t * torch.ones(shape, dtype=dtype).cpu().numpy()
            trt_tensor = network.add_constant(shape, scalar).get_output(0)
        elif hasattr(t, "_trt"):
            print('2:', t.shape, t._trt.shape)
            trt_tensor = t._trt
            print('t-->trt:', trt_tensor.shape)
        # or... add constant for leaf tensor w/o _trt
        else:
            # remove all preceding ones, these can be re-inserted later when broadcasting
            num_preceding_ones = 0
            for j in range(t.ndim):
                if int(t.shape[j]) == 1:
                    num_preceding_ones += 1
                else:
                    break
            shape = tuple(t.shape[num_preceding_ones:])

            weight = t.detach().cpu().numpy()
            t._trt = network.add_constant(shape, weight).get_output(0)
            trt_tensor = t._trt

        assert trt_tensor is not None
        trt_tensors[i] = trt_tensor

    return trt_tensors
```
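The leading-ones stripping in the final branch can be restated as a standalone helper to see what it does in isolation (a sketch for illustration; `strip_preceding_ones` is not a real torch2trt function, and a plain tuple stands in for `t.shape`):

```python
def strip_preceding_ones(shape):
    """Drop leading size-1 dimensions, as add_missing_trt_tensors does before
    registering a constant; they can be re-inserted later when broadcasting."""
    num_preceding_ones = 0
    for d in shape:
        if int(d) == 1:
            num_preceding_ones += 1
        else:
            break
    return tuple(shape[num_preceding_ones:])

print(strip_preceding_ones((1, 1, 64, 1)))  # (64, 1): only *leading* ones are removed
print(strip_preceding_ones((4, 256)))       # (4, 256): unchanged
```

This is also why constant tensors registered through this path can end up with fewer dimensions than their Torch counterparts.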
Issue Analytics
- Created 3 years ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
Hmm. I see, so in your model you're actually running on a batch of 4 images, and then permuting the batch dimension.
Unfortunately, permuting the batch dimension is not currently supported by torch2trt.
That said, if resnet50 is your backbone on a batch of 4 images, I would expect this final layer to be a small fraction of the computation. You could convert the backbone model with torch2trt and run the linear layer in PyTorch; the converted backbone's state_dict can be saved and reloaded later.
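A minimal sketch of this split-and-convert workflow, assuming the standard torch2trt API (`torch2trt()` for conversion, `TRTModule` for reloading) and a hypothetical checkpoint path `resnet50_backbone_trt.pth`. It requires a CUDA device with TensorRT installed, so treat it as illustrative rather than a tested implementation:

```python
import torch
import torchvision
from torch2trt import torch2trt, TRTModule

# assumption: split resnet50 into a convolutional backbone and the final linear layer
model = torchvision.models.resnet50(pretrained=True).cuda().eval()
backbone = torch.nn.Sequential(*list(model.children())[:-1])  # everything up to fc
linear = model.fc

# convert only the backbone with torch2trt
x = torch.ones((4, 3, 224, 224)).cuda()
backbone_trt = torch2trt(backbone, [x], max_batch_size=4)

# save the converted backbone for later
torch.save(backbone_trt.state_dict(), 'resnet50_backbone_trt.pth')

# ...and reload it elsewhere with TRTModule
backbone_trt = TRTModule()
backbone_trt.load_state_dict(torch.load('resnet50_backbone_trt.pth'))

# run the backbone in TensorRT, then the view + linear layer in PyTorch,
# so the batch dimension is never reshaped inside the TensorRT engine
features = backbone_trt(x).view(4, -1)
out = linear(features)
```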
You could also convert the linear layer with torch2trt, but you may not see much benefit.
Please let me know if this helps or you run into any issues.
Best, John
@jaybdub It works as you suggested, thanks!!!