[TensorRT] ERROR: Internal error: could not find any implementation for node (Unnamed Layer* 202) [Deconvolution], try increasing the workspace size with IBuilder::setMaxWorkspaceSize()
[TensorRT] ERROR: ../builder/tacticOptimizer.cpp (1230) - OutOfMemory Error in computeCosts: 0
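The workspace the error refers to is scratch GPU memory that TensorRT's builder may use while choosing an implementation (tactic) for each layer; if the budget is too small, every candidate for a layer such as this Deconvolution can fail, which produces exactly this message. When building an engine with the TensorRT Python API directly, the limit can be raised roughly as in the minimal sketch below (this assumes the pre-TensorRT-8 Python API that matches the error's IBuilder::setMaxWorkspaceSize hint; network parsing and engine building are omitted):

import tensorrt as trt

# Minimal sketch, not the asker's code: raise the scratch-memory budget the
# builder may use when searching for a tactic for each layer.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
builder.max_workspace_size = 1 << 28  # 256 MiB (example value); network/engine steps omitted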
I have a network (based on ERFNet) and some trained weights for this model. I’m trying to convert it from PyTorch to TensorRT and I do the following:
import torch
from torch2trt import torch2trt
import pdb

from NETWORK_DESIGN import Net

NUM_CLASSES = X

# Build the model and load the trained weights.
model = Net(NUM_CLASSES)
state_dict = torch.load("/path/to/model/model_best.pth")
model.load_state_dict(state_dict, strict=False)
model.cuda().half().eval()

# Dummy input with the shape the network expects, in FP16 on the GPU.
x = torch.ones((1, 3, WIDTH, HEIGHT)).cuda().half()

# Convert the PyTorch model to a TensorRT engine.
model_trt = torch2trt(model, [x], fp16_mode=True)
pdb.set_trace()
But I get the error above. Is there an unsupported layer somewhere in here? I don't believe this architecture uses any exotic layers. I'm happy to share both the network definition and the weights file. I tested this on both my laptop (MX150) and a Jetson Xavier (JetPack 4.2).
The network was trained with PyTorch 1.1.
Top GitHub Comments
Hi all,
I believe the solution was to increase the maximum workspace size. You can do this by setting the max_workspace_size parameter, for example:
model_trt = torch2trt(model, [data], max_workspace_size=1 << 25)
That should work.
Best, John
This answer really helped me a lot. Thanks!
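For completeness, a minimal sketch of the conversion from the question with the suggested larger workspace is below. NUM_CLASSES, WIDTH, HEIGHT and the checkpoint path are illustrative placeholders rather than the asker's actual values, and 1 << 26 (64 MiB) is just an example budget to raise further if the error persists:

import torch
from torch2trt import torch2trt
from NETWORK_DESIGN import Net

# Placeholder values for illustration only -- substitute your own.
NUM_CLASSES = 20
WIDTH, HEIGHT = 640, 480

model = Net(NUM_CLASSES)
model.load_state_dict(torch.load("/path/to/model/model_best.pth"), strict=False)
model.cuda().half().eval()

# Dummy input in FP16 on the GPU, matching the shape used in the question.
x = torch.ones((1, 3, WIDTH, HEIGHT)).cuda().half()

# A larger workspace gives the builder more room to find a Deconvolution tactic.
model_trt = torch2trt(model, [x], fp16_mode=True, max_workspace_size=1 << 26)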