
SATRN to TensorRT stuck

See original GitHub issue

Thanks for your bug report. We appreciate it a lot.

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. I have read the FAQ documentation but cannot get the expected help.
  • 3. The bug has not been fixed in the latest version.

Describe the bug

A clear and concise description of what the bug is.

The conversion gets stuck and does not move forward after printing the logs below. I also tried it with a DBNet model and it works fine.

load checkpoint from local path: ..\models\pth\satrn\satrn_small_20211009-2cf13355.pth
2022-08-17 14:30:16,155 - mmdeploy - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
2022-08-17 14:30:16,156 - mmdeploy - INFO - Export PyTorch model to ONNX: work_dir\satrn\end2end.onnx.
e:\mmdeploy\mmdeploy\codebase\mmocr\models\text_recognition\base.py:51: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  img_shape = [int(val) for val in img_shape]
e:\mmdeploy\mmdeploy\codebase\mmocr\models\text_recognition\base.py:51: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  img_shape = [int(val) for val in img_shape]
e:\mmocr\mmocr\models\textrecog\encoders\satrn_encoder.py:76: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  valid_width = min(w, math.ceil(w * valid_ratio))
e:\mmocr\mmocr\models\textrecog\encoders\satrn_encoder.py:76: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  valid_width = min(w, math.ceil(w * valid_ratio))
e:\mmocr\mmocr\models\textrecog\decoders\nrtr_decoder.py:126: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  valid_width = min(T, math.ceil(T * valid_ratio))
e:\mmocr\mmocr\models\textrecog\decoders\nrtr_decoder.py:126: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  valid_width = min(T, math.ceil(T * valid_ratio))
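
For context (not from the original report): these TracerWarnings mean that calls like int(), math.ceil() and min() pull tensor values out as plain Python numbers during ONNX tracing, so the exported graph freezes them as constants. A minimal, illustrative sketch of the same effect, unrelated to the actual SATRN code:

    import torch

    class Toy(torch.nn.Module):
        def forward(self, x, ratio):
            # int() pulls a Python number out of the tensor, so the traced graph
            # stores it as a fixed constant instead of a data-dependent op.
            width = int(x.size(-1) * ratio)
            return x[..., :width]

    traced = torch.jit.trace(Toy(), (torch.randn(1, 3, 32, 100), torch.tensor(0.5)))
    # A different ratio at run time still slices with the width captured above (50).
    print(traced(torch.randn(1, 3, 32, 100), torch.tensor(1.0)).shape)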

Reproduction

  1. What command or script did you run?
cd /d e:\mmdeploy
python ./tools/deploy.py ^
    "configs\mmocr\text-recognition\text-recognition_onnxruntime_dynamic.py" ^
    "..\mmocr\configs\textrecog\satrn\satrn_small.py" ^
    "..\models\pth\satrn\satrn_small_20211009-2cf13355.pth" ^
    "..\test\text-recog-1.png" ^
    --work-dir work_dir\satrn ^
    --device cuda ^
    --log-level DEBUG ^
    --dump-info
  2. Did you make any modifications to the code or config? Did you understand what you modified?
No
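
Not part of the original report: once the export eventually finishes, a quick, hedged way to sanity-check the produced ONNX file is to run it through onnxruntime directly. The input size 1x3x32x100 is an assumption based on the default SATRN-small pipeline, and the path simply reuses the --work-dir from the command above:

    import numpy as np
    import onnxruntime as ort

    # Load the exported model; the path matches the --work-dir from the deploy command.
    sess = ort.InferenceSession(
        r"work_dir\satrn\end2end.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

    # SATRN-small is typically fed 32x100 crops; adjust if your config differs.
    inp_name = sess.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 32, 100).astype(np.float32)
    outputs = sess.run(None, {inp_name: dummy})
    print([o.shape for o in outputs])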

Environment

  1. Please run python tools/check_env.py to collect necessary environment information and paste it here.
2022-08-17 14:40:33,581 - mmdeploy - INFO -
2022-08-17 14:40:33,581 - mmdeploy - INFO - **********Environmental information**********
2022-08-17 14:40:41,082 - mmdeploy - INFO - sys.platform: win32
2022-08-17 14:40:41,082 - mmdeploy - INFO - Python: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:35:01) [MSC v.1916 64 bit (AMD64)]
2022-08-17 14:40:41,082 - mmdeploy - INFO - CUDA available: True
2022-08-17 14:40:41,083 - mmdeploy - INFO - GPU 0: NVIDIA GeForce RTX 2060
2022-08-17 14:40:41,083 - mmdeploy - INFO - CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5
2022-08-17 14:40:41,083 - mmdeploy - INFO - NVCC: Cuda compilation tools, release 11.5, V11.5.119
2022-08-17 14:40:41,083 - mmdeploy - INFO - MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30141 for x64
2022-08-17 14:40:41,083 - mmdeploy - INFO - GCC: n/a
2022-08-17 14:40:41,084 - mmdeploy - INFO - PyTorch: 1.11.0+cu115
2022-08-17 14:40:41,084 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - C++ Version: 199711
  - MSVC 192829337
  - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 2019
  - LAPACK is enabled (usually provided by MKL)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.5
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.3.2
  - Magma 2.5.4
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.5, CUDNN_VERSION=8.3.2, CXX_COMPILER=C:/actions-runner/_work/pytorch/pytorch/builder/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/builder/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,

2022-08-17 14:40:41,085 - mmdeploy - INFO - TorchVision: 0.12.0+cu115
2022-08-17 14:40:41,086 - mmdeploy - INFO - OpenCV: 4.6.0
2022-08-17 14:40:41,086 - mmdeploy - INFO - MMCV: 1.5.3
2022-08-17 14:40:41,086 - mmdeploy - INFO - MMCV Compiler: MSVC 192930140
2022-08-17 14:40:41,086 - mmdeploy - INFO - MMCV CUDA Compiler: 11.5
2022-08-17 14:40:41,087 - mmdeploy - INFO - MMDeploy: 0.7.0+9fbfdd2
2022-08-17 14:40:41,087 - mmdeploy - INFO -

2022-08-17 14:40:41,087 - mmdeploy - INFO - **********Backend information**********
2022-08-17 14:40:42,247 - mmdeploy - INFO - onnxruntime: 1.10.0 ops_is_avaliable : True
2022-08-17 14:40:42,301 - mmdeploy - INFO - tensorrt: 8.4.0.6   ops_is_avaliable : True
2022-08-17 14:40:42,406 - mmdeploy - INFO - ncnn: None  ops_is_avaliable : False
2022-08-17 14:40:42,419 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-08-17 14:40:42,432 - mmdeploy - INFO - openvino_is_avaliable: False
2022-08-17 14:40:42,550 - mmdeploy - INFO - snpe_is_available: False
2022-08-17 14:40:42,550 - mmdeploy - INFO -

2022-08-17 14:40:42,551 - mmdeploy - INFO - **********Codebase information**********
2022-08-17 14:40:44,847 - mmdeploy - INFO - mmdet:      2.25.0
2022-08-17 14:40:44,847 - mmdeploy - INFO - mmseg:      None
2022-08-17 14:40:44,847 - mmdeploy - INFO - mmcls:      0.23.1
2022-08-17 14:40:44,848 - mmdeploy - INFO - mmocr:      0.6.1
2022-08-17 14:40:44,848 - mmdeploy - INFO - mmedit:     None
2022-08-17 14:40:44,848 - mmdeploy - INFO - mmdet3d:    None
2022-08-17 14:40:44,848 - mmdeploy - INFO - mmpose:     None
2022-08-17 14:40:44,848 - mmdeploy - INFO - mmrotate:   0.3.2
  2. You may add any additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]: pip
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback

If applicable, paste the error traceback here.

No traceback available, just the logs above.

Bug fix

If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
AllentDan commented, Aug 17, 2022

Converting the SATRN model takes more time than converting DBNet. How long has your conversion been stuck?
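
To put a number on that (a hedged suggestion, not from the thread): wrapping the same deploy command from the report in a small timer, run from the mmdeploy root, shows whether the export is merely slow rather than hung:

    import subprocess
    import time

    # Reuses the exact arguments from the report's deploy command.
    cmd = [
        "python", "./tools/deploy.py",
        r"configs\mmocr\text-recognition\text-recognition_onnxruntime_dynamic.py",
        r"..\mmocr\configs\textrecog\satrn\satrn_small.py",
        r"..\models\pth\satrn\satrn_small_20211009-2cf13355.pth",
        r"..\test\text-recog-1.png",
        "--work-dir", r"work_dir\satrn",
        "--device", "cuda",
    ]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"Conversion finished after {time.time() - start:.0f} s")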

0 reactions
AllentDan commented, Aug 17, 2022

Closing since resolved.

Read more comments on GitHub >
