Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

TVMError: src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX

See original GitHub issue

Hi everyone.

I got this error while reproducing the toy example from nnvm, but with my own model. Calling

m.run()

I get an error similar to https://github.com/dmlc/tvm/pull/315#issuecomment-322024643:

TVMError: [09:11:33] src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX

Can you clarify what might be wrong here?

Thanks in advance!
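
For context, a minimal sketch of the nnvm-era compile-and-run flow in which this error typically surfaces. The tiny relu graph, the input shape, and the random input are placeholders rather than the author's model; the point is where in the flow the CUDA module actually gets loaded:

import numpy as np
import nnvm.symbol as sym
import nnvm.compiler
import tvm
from tvm.contrib import graph_runtime

# Stand-in graph; a real model would come from an nnvm frontend converter.
data = sym.Variable("data")
net = sym.relu(data)
shape = {"data": (1, 3, 224, 224)}

# Compile for CUDA; the PTX embedded in lib is generated here, for a specific
# compute capability.
graph, lib, params = nnvm.compiler.build(net, target="cuda", shape=shape)

ctx = tvm.gpu(0)
m = graph_runtime.create(graph, lib, ctx)
m.set_input("data", tvm.nd.array(np.random.rand(*shape["data"]).astype("float32")))

# CUDA_ERROR_INVALID_PTX surfaces here, when the CUDA module is first loaded
# onto the device: it usually means the embedded PTX targets a compute
# capability the local driver/GPU cannot accept (arch mismatch or an old driver).
m.run()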


BTW, I’m a bit confused by the tvm.gpu() docstring 😃:

Construct a CPU device
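
The quoted line looks like it was copied from the tvm.cpu() docstring; tvm.gpu() does construct a CUDA device context. A quick sketch for orientation, assuming the TVMContext properties (exist, compute_version) of the TVM Python API from that era:

import tvm

cpu_ctx = tvm.cpu(0)  # CPU context
gpu_ctx = tvm.gpu(0)  # CUDA GPU context, despite the "CPU" wording in the docstring

print(gpu_ctx.exist)            # True if a CUDA device is visible to TVM
print(gpu_ctx.compute_version)  # compute capability, e.g. "6.2" on a Jetson TX2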

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 14 (6 by maintainers)

Top GitHub Comments

6 reactions
expectopatronm commented, Jan 30, 2020

I get the exact same issue.

jetson@jetson:~/fast-depth/deploy$ python3 tx2_run_tvm.py --input-fp data/rgb.npy --output-fp data/pred.npy --model-dir …/results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/ --cuda True
=> [TVM on TX2] using model files in …/results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/
=> [TVM on TX2] loading model lib and ptx
=> [TVM on TX2] loading model graph and params
=> [TVM on TX2] creating TVM runtime module
=> [TVM on TX2] feeding inputs and params into TVM module
=> [TVM on TX2] running TVM module, saving output
Traceback (most recent call last):
  File "tx2_run_tvm.py", line 91, in <module>
    main()
  File "tx2_run_tvm.py", line 88, in main
    run_model(args.model_dir, args.input_fp, args.output_fp, args.warmup, args.run, args.cuda, try_randin=args.randin)
  File "tx2_run_tvm.py", line 36, in run_model
    run()  # not gmodule.run()
  File "/home/jetson/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /home/jetson/tvm/build/libtvm.so(TVMFuncCall+0x70) [0x7fad7ccec0]
  [bt] (2) /home/jetson/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xe8) [0x7fad850b08]
  [bt] (1) /home/jetson/tvm/build/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x6cc) [0x7fad85093c]
  [bt] (0) /home/jetson/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4c) [0x7facfdebac]
  File "/home/jetson/tvm/src/runtime/cuda/cuda_module.cc", line 110
  File "/home/jetson/tvm/src/runtime/library_module.cc", line 91
CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX

Still haven’t found a solution to it. I am running it on a Jetson Nano. Please help.
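
One plausible cause of the failure above, given that the model directory is the TX2 build (tx2_gpu_…) and the board is a Jetson Nano, is a compute-capability mismatch: PTX generated for the TX2's sm_62 cannot be loaded on the Nano's sm_53 GPU. Below is a minimal diagnostic sketch; the set_cuda_target_arch helper and its import path are assumptions based on TVM releases of that era, so verify them against your installed version:

import tvm
from tvm.autotvm.measure.measure_methods import set_cuda_target_arch  # assumed helper, TVM ~0.6 era

# On the board that will run the model: report its compute capability.
ctx = tvm.gpu(0)
print(ctx.compute_version)  # Jetson Nano reports "5.3", Jetson TX2 reports "6.2"

# When recompiling the model, pin the generated PTX to that capability so the
# deployed lib matches the device it will actually run on.
set_cuda_target_arch("sm_53")  # target the Nano rather than the TX2

# ...then rebuild with target="cuda" and redeploy the resulting lib/graph/params files.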

0 reactions
tiandiao123 commented, Jul 28, 2020

Did you find a solution? I have the exact same issue and don’t know how to fix it. Could you help me?

Read more comments on GitHub >

Top Results From Across the Web

TVMError: src/runtime/cuda/cuda_module.cc:93 ...
Hi everyone. I got such error reproducing toy example from nnvm but with my own model. Calling m.run() I get the error similar...
Read more >
TVMError: src/runtime/cuda/cuda_module.cc:93 ...
expectopatronm commented on issue #1027: TVMError: src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), ...
Read more >
Confusing error "CUDA_ERROR_INVALID_PTX"
I have two computer. Computer A has two 24G memory nvidia gpu (RTX 6000). Computer B only has a 2000M memory nvidia gpu...
Read more >
