
Could not find any implementation for node MaxPool on Jetson NX

See original GitHub issue

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug

Jetson Xavier NX
Jetpack: 4.6.1
CUDA: 10.2
TensorRT: 8.2.1.8

Converting the mmdet YOLOX model raises the exception: “Could not find any implementation for node MaxPool_102.”

Reproduction

python ./tools/deploy.py configs/mmdet/detection/base_tensorrt_static-640x640.py yolox_s_8x8_300e_coco.py yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth test.jpg --work-dir ./work-dir/ --device cuda:0 --dump-info

The mmdeploy config is as follows:

_base_ = ['../_base_/base_static.py', '../../_base_/backends/tensorrt.py']

onnx_config = dict(input_shape=(640, 640))

backend_config = dict(
    common_config=dict(max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 640, 640],
                    opt_shape=[1, 3, 640, 640],
                    max_shape=[1, 3, 640, 640])))
    ])

The mmdet model is the official YOLOX model.
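
Note: `common_config` caps the TensorRT builder workspace at `1 << 30` (1 GiB). The “Could not find any implementation” error reported below is often a symptom of the builder not having enough workspace to try any viable kernel for a node, and the trtexec workaround in the comments passes `--workspace=6000` (MB). A sketch of the same config with a larger workspace, as a first thing to try; the 2 GiB value is an illustrative assumption, not a verified fix:

```python
# Sketch: identical to the config above except for a larger builder
# workspace. 1 << 31 (2 GiB) is an assumed value for illustration only.
_base_ = ['../_base_/base_static.py', '../../_base_/backends/tensorrt.py']

onnx_config = dict(input_shape=(640, 640))

backend_config = dict(
    common_config=dict(max_workspace_size=1 << 31),  # was 1 << 30 (1 GiB)
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 640, 640],
                    opt_shape=[1, 3, 640, 640],
                    max_shape=[1, 3, 640, 640])))
    ])
```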

Environment

2022-09-16 07:00:29,768 - mmdeploy - INFO -

2022-09-16 07:00:29,768 - mmdeploy - INFO - **********Environmental information**********
2022-09-16 07:00:30,823 - mmdeploy - INFO - sys.platform: linux
2022-09-16 07:00:30,824 - mmdeploy - INFO - Python: 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) [GCC 9.4.0]
2022-09-16 07:00:30,825 - mmdeploy - INFO - CUDA available: True
2022-09-16 07:00:30,825 - mmdeploy - INFO - GPU 0: Xavier
2022-09-16 07:00:30,825 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda-10.2
2022-09-16 07:00:30,826 - mmdeploy - INFO - NVCC: Cuda compilation tools, release 10.2, V10.2.300
2022-09-16 07:00:30,826 - mmdeploy - INFO - GCC: gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
2022-09-16 07:00:30,826 - mmdeploy - INFO - PyTorch: 1.10.0
2022-09-16 07:00:30,827 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 7.5
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_53,code=sm_53;-gencode;arch=compute_62,code=sm_62;-gencode;arch=compute_72,code=sm_72
  - CuDNN 8.2.1
    - Built with CuDNN 8.0
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=8.0.0, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -DMISSING_ARM_VST1 -DMISSING_ARM_VLD1 -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=open, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=ON, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON,

2022-09-16 07:00:30,827 - mmdeploy - INFO - TorchVision: 0.11.1
2022-09-16 07:00:30,828 - mmdeploy - INFO - OpenCV: 4.6.0
2022-09-16 07:00:30,828 - mmdeploy - INFO - MMCV: 1.6.1
2022-09-16 07:00:30,828 - mmdeploy - INFO - MMCV Compiler: GCC 7.5
2022-09-16 07:00:30,829 - mmdeploy - INFO - MMCV CUDA Compiler: 10.2
2022-09-16 07:00:30,829 - mmdeploy - INFO - MMDeploy: 0.8.0+a1a19f0
2022-09-16 07:00:30,829 - mmdeploy - INFO -

2022-09-16 07:00:30,829 - mmdeploy - INFO - **********Backend information**********
2022-09-16 07:00:33,716 - mmdeploy - INFO - onnxruntime: 1.10.0 ops_is_avaliable : False
2022-09-16 07:00:33,883 - mmdeploy - INFO - tensorrt: 8.2.1.8   ops_is_avaliable : True
2022-09-16 07:00:33,986 - mmdeploy - INFO - ncnn: None  ops_is_avaliable : False
2022-09-16 07:00:33,994 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-09-16 07:00:34,002 - mmdeploy - INFO - openvino_is_avaliable: False
2022-09-16 07:00:34,119 - mmdeploy - INFO - snpe_is_available: False
2022-09-16 07:00:34,131 - mmdeploy - INFO - ascend_is_available: False
2022-09-16 07:00:34,138 - mmdeploy - INFO - coreml_is_available: False
2022-09-16 07:00:34,139 - mmdeploy - INFO -

2022-09-16 07:00:34,139 - mmdeploy - INFO - **********Codebase information**********
2022-09-16 07:00:34,149 - mmdeploy - INFO - mmdet:      2.25.1
2022-09-16 07:00:34,149 - mmdeploy - INFO - mmseg:      None
2022-09-16 07:00:34,150 - mmdeploy - INFO - mmcls:      None
2022-09-16 07:00:34,150 - mmdeploy - INFO - mmocr:      None
2022-09-16 07:00:34,150 - mmdeploy - INFO - mmedit:     None
2022-09-16 07:00:34,151 - mmdeploy - INFO - mmdet3d:    None
2022-09-16 07:00:34,151 - mmdeploy - INFO - mmpose:     0.28.1
2022-09-16 07:00:34,151 - mmdeploy - INFO - mmrotate:   None


Error traceback

```Shell
2022-09-16 05:40:46,447 - mmdeploy - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
load checkpoint from local path: ../action-api/actionloop/engines/mmcfgs/yolox_s_8x8_300e_coco_20211121_095711-4592a793.pth
2022-09-16 05:40:58,194 - mmdeploy - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
2022-09-16 05:40:58,195 - mmdeploy - INFO - Export PyTorch model to ONNX: ./work-dir/obj-dynamic4/end2end.onnx.
2022-09-16 05:40:58,393 - mmdeploy - WARNING - Can not find torch._C._jit_pass_onnx_deduplicate_initializers, function rewrite will not be applied
/home/nvidia/mmdeploy/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  ys_shape = tuple(int(s) for s in ys.shape)
/home/nvidia/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/base.py:24: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  img_shape = [int(val) for val in img_shape]
/home/nvidia/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/base.py:24: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  img_shape = [int(val) for val in img_shape]
/home/nvidia/archiconda3/envs/mmdeploy/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/home/nvidia/mmdeploy/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:260: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/home/nvidia/mmdeploy/mmdeploy/mmcv/ops/nms.py:178: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out_boxes = min(num_boxes, after_topk)
/home/nvidia/mmdeploy/mmdeploy/mmcv/ops/nms.py:181: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  (batch_size, out_boxes)).to(scores.device))
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
2022-09-16 05:41:30,497 - mmdeploy - INFO - Execute onnx optimize passes.
2022-09-16 05:41:32,095 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
2022-09-16 05:41:42,141 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-09-16 05:41:42,652 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/nvidia/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[09/16/2022-05:41:44] [TRT] [I] [MemUsageChange] Init CUDA: CPU +355, GPU +0, now: CPU 441, GPU 5334 (MiB)
[09/16/2022-05:41:45] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 441 MiB, GPU 5364 MiB
[09/16/2022-05:41:45] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 546 MiB, GPU 5471 MiB
[09/16/2022-05:41:46] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/16/2022-05:41:46] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[09/16/2022-05:41:46] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[09/16/2022-05:41:46] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[09/16/2022-05:41:46] [TRT] [I] Successfully created plugin: TRTBatchedNMS
[09/16/2022-05:41:46] [TRT] [I] ---------- Layers Running on DLA ----------
[09/16/2022-05:41:46] [TRT] [I] ---------- Layers Running on GPU ----------
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Reshape_0 + Transpose_1
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Reshape_2
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_3
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_4), Mul_5)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_6
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_7), Mul_8)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_12 || Conv_9
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_13), Mul_14)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_15
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_16), Mul_17)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_18
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_10), Mul_11)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_19), Mul_20), Add_21)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_23
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_24), Mul_25)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_26
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_27), Mul_28)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_32 || Conv_29
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_33), Mul_34)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_35
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_36), Mul_37)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_38
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_39), Mul_40), Add_41)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_42
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_43), Mul_44)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_45
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_46), Mul_47), Add_48)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_49
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_50), Mul_51)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_52
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_30), Mul_31)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_53), Mul_54), Add_55)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_57
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_58), Mul_59)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_60
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_61), Mul_62)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_66 || Conv_63
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_67), Mul_68)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_69
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_70), Mul_71)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_72
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_73), Mul_74), Add_75)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_76
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_77), Mul_78)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_79
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_80), Mul_81), Add_82)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_83
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_84), Mul_85)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_86
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_64), Mul_65)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(PWN(Sigmoid_87), Mul_88), Add_89)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_91
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_92), Mul_93)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_94
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_95), Mul_96)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_97
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_98), Mul_99)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] MaxPool_102
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] MaxPool_101
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] MaxPool_100
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 622 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 623 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 624 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 625 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_104
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_105), Mul_106)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_110 || Conv_107
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_111), Mul_112)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_113
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_114), Mul_115)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_116
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_108), Mul_109)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_117), Mul_118)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_120
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_121), Mul_122)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_123
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_124), Mul_125)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Resize_127
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 660 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_132 || Conv_129
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_133), Mul_134)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_135
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_136), Mul_137)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_138
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_130), Mul_131)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_139), Mul_140)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_142
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_143), Mul_144)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_145
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_146), Mul_147)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Resize_148
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 691 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_153 || Conv_150
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_154), Mul_155)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_156
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_157), Mul_158)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_159
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_151), Mul_152)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_160), Mul_161)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_163
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_164), Mul_165)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_206
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_166
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_207), Mul_208)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_167), Mul_168)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 686 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_173 || Conv_170
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_221 || Conv_215
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_174), Mul_175)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_222), Mul_223)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_224
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_176
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_225), Mul_226)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_177), Mul_178)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_179
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_228
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_171), Mul_172)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_180), Mul_181)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_183
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_184), Mul_185)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_209
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_186
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_210), Mul_211)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_187), Mul_188)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 655 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_193 || Conv_190
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_236 || Conv_230
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_194), Mul_195)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_237), Mul_238)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_239
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_196
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_240), Mul_241)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_197), Mul_198)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_199
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_243
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_191), Mul_192)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_200), Mul_201)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_203
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_204), Mul_205)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_212
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_213), Mul_214)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_251 || Conv_245
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_252), Mul_253)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_254
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_255), Mul_256)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_258
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] {ForeignNode[Transpose_291 + Reshape_292...Unsqueeze_340]}
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_216), Mul_217)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_218
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_219), Mul_220)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_229
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_227
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_231), Mul_232)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_233
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_234), Mul_235)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_244
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_242
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_246), Mul_247)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_248
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_249), Mul_250)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_259
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Conv_257
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_285 + Reshape_286
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_287 + Reshape_288
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_289 + Reshape_290
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 930 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 938 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] 946 copy
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_297 + Reshape_298
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_299 + Reshape_300
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Transpose_301 + Reshape_302
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(Sigmoid_306)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] Unsqueeze_338
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] PWN(PWN(Sigmoid_304), Mul_339)
[09/16/2022-05:41:46] [TRT] [I] [GpuLayer] TRTBatchedNMS_341
[09/16/2022-05:41:48] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +226, now: CPU 849, GPU 5776 (MiB)
[09/16/2022-05:41:48] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[09/16/2022-05:44:00] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node MaxPool_102.)
Process Process-3:
Traceback (most recent call last):
  File "/home/nvidia/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/nvidia/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/nvidia/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/nvidia/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 88, in onnx2tensorrt
    device_id=device_id)
  File "/home/nvidia/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 215, in from_onnx
    assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine
2022-09-16 05:44:01,456 - mmdeploy - ERROR - `mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt` with Call id: 1 failed. exit.
```
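
Error Code 10 (“Could not find any implementation for node MaxPool_102”) means the builder failed to find a usable tactic for that layer; insufficient builder workspace is a common cause, which would also fit the `--workspace=6000` trtexec workaround below. To isolate the failure from mmdeploy, the engine can be rebuilt directly with the TensorRT 8.2 Python API. A minimal sketch, assuming the ONNX and plugin paths from the logs above:

```python
# Hedged sketch: rebuild end2end.onnx with the TensorRT 8.2 Python API.
# The mmdeploy plugin library must be loaded first so that the custom
# TRTBatchedNMS op resolves. Paths are assumptions based on the logs.
import ctypes

import tensorrt as trt

PLUGIN_LIB = '/home/nvidia/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so'
ONNX_PATH = './work-dir/end2end.onnx'

ctypes.CDLL(PLUGIN_LIB)                   # load the custom-op library
logger = trt.Logger(trt.Logger.VERBOSE)   # verbose log shows tactic selection
trt.init_libnvinfer_plugins(logger, '')   # register plugins with TensorRT

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open(ONNX_PATH, 'rb') as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.max_workspace_size = 2 << 30       # 2 GiB; TRT 8.2 (pre-8.4) API

serialized = builder.build_serialized_network(network, config)
assert serialized is not None, 'build failed; inspect the verbose log'
with open('end2end.engine', 'wb') as f:
    f.write(serialized)
```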

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 16 (4 by maintainers)

Top GitHub Comments

1 reaction
tx19990922 commented, Sep 20, 2022

> > Can you elaborate on that, because I haven’t solved this problem yet, thanks.
>
> Generate the engine file using a command like this:
>
> /usr/src/tensorrt/bin/trtexec --onnx=./end2end.onnx --plugins=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so --workspace=6000 --fp16 --saveEngine=end2end.engine
>
> Then you can use the mmdeploy scripts, such as test.py, to load this engine file.

Thanks a lot, it works. This question really bothered me for a long time.
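
If the trtexec build succeeds but loading fails later, note that the same plugin library must be loaded before deserializing the engine in Python (the mmdeploy scripts handle this internally). A minimal standalone sketch, assuming the paths from the command above:

```python
# Hedged sketch: deserialize the trtexec-built engine in Python. The
# mmdeploy plugin library must be loaded before deserialization, or
# TRTBatchedNMS will be an unknown layer type. Paths are assumptions.
import ctypes

import tensorrt as trt

ctypes.CDLL('../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so')
logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, '')

with open('end2end.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
assert engine is not None, 'engine deserialization failed'
context = engine.create_execution_context()  # ready for inference
```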

0 reactions
lijoe123 commented, Sep 22, 2022

I had the same problem, but when I ran the trtexec command above, it failed.

