ValueError: __len__() should return >= 0
See original GitHub issue.
When using torch2trt to convert torch.eq, an error occurs.
mm = torch.eq(mm, 0.)
mm is a tensor and mm.shape = [3136, 1, 3, 3]
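In plain PyTorch this op works fine; the failure only appears during torch2trt conversion. A minimal sketch of the op in isolation (the module name and input are illustrative, not from the reporter's inpainting model):

```python
import torch

class EqMask(torch.nn.Module):
    """Toy module containing only the op that fails to convert."""
    def forward(self, mm):
        # torch.eq returns a bool tensor of the same shape as mm
        return torch.eq(mm, 0.)

model = EqMask().eval()
mm = torch.zeros(3136, 1, 3, 3)
out = model(mm)
print(out.shape, out.dtype)  # torch.Size([3136, 1, 3, 3]) torch.bool
```

Eager execution succeeds; it is the TRT-side shape bookkeeping during conversion that raises, as the traceback below shows.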
File "/media/cfs/torch2trt-master/examples/inpainting/model.py", line 329, in forward
    mm = torch.eq(mm, nn)
File "./torch2trt/torch2trt.py", line 285, in wrapper
    converter['converter'](ctx)
File "./torch2trt/converters/compare.py", line 26, in convert_gt
    return convert_elementwise(ctx, trt.ElementWiseOperation.EQUAL)
File "./torch2trt/converters/compare.py", line 9, in convert_elementwise
    input_a_trt, input_b_trt = broadcast_trt_tensors(ctx.network, [input_a_trt, input_b_trt], len(output.shape) - 1)
File "./torch2trt/torch2trt.py", line 170, in broadcast_trt_tensors
    if len(t.shape) < broadcast_ndim:
ValueError: __len__() should return >= 0
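The mechanics of the error itself are pure CPython: `len()` raises exactly this ValueError when an object's `__len__` returns a negative number. A plausible sketch (assumption: TensorRT reports -1 dims for a tensor whose shape it could not resolve, and the Python binding's `__len__` passes that through), using stand-in classes rather than real TRT objects:

```python
class FakeDims:
    """Stand-in for a TRT Dims object whose shape was never resolved."""
    def __len__(self):
        # TRT conventionally uses -1 to mean "unknown number of dimensions"
        return -1

class FakeTensor:
    shape = FakeDims()

t = FakeTensor()
try:
    # Mirrors torch2trt's check: if len(t.shape) < broadcast_ndim
    if len(t.shape) < 4:
        pass
except ValueError as e:
    print(e)  # __len__() should return >= 0
```

So the ValueError is a symptom: some upstream converter produced a TRT tensor with an invalid shape, and the broadcast helper trips over it when it calls `len()`.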
Issue Analytics
- State:
- Created 3 years ago
- Comments: 13 (2 by maintainers)
Top GitHub Comments
@jaybdub @whcjb
I found out the problem is that TRT is not able to process the Python slice() operator in the same fashion torch does.
The network I was trying to port crashed on a torch.add() operation between two tensors, while converting a minimal torch.add op worked like a charm.
My model was cutting spatial dimensions using Python slice operations instead of torch.narrow, which is recommended for tensors.
To check that this was the culprit, I wrote and tested two versions of a network that narrow the dims and add them together:
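The two variants might look like the following sketch (assumption: names and shapes are illustrative, not taken from the reporter's gist). Both produce identical results in eager PyTorch; the slice-based one is what reportedly broke conversion:

```python
import torch

class SliceAdd(torch.nn.Module):
    """Cuts spatial dims with Python slicing."""
    def forward(self, x):
        a = x[:, :, 0:2, 0:2]
        b = x[:, :, 1:3, 1:3]
        return a + b

class NarrowAdd(torch.nn.Module):
    """Same cuts expressed with torch.narrow(dim, start, length)."""
    def forward(self, x):
        a = x.narrow(2, 0, 2).narrow(3, 0, 2)
        b = x.narrow(2, 1, 2).narrow(3, 1, 2)
        return a + b

x = torch.arange(2 * 3 * 4 * 4, dtype=torch.float32).reshape(2, 3, 4, 4)
# Identical outputs in plain PyTorch; only conversion behavior differs
assert torch.equal(SliceAdd()(x), NarrowAdd()(x))
```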
I think the screenshot is self-explanatory; here's a gist to reproduce this.
I'm not sure where to go from here; there should be some type check for slice within the lib. Hope it helps.
Best Regards
EDIT:
I looked at the last screenshot and see that the tensors do not match between the TRT and normal model, which is weird? I was sure that they matched while writing this…
I met the same problem too, wishing for a solution @jaybdub 0.0