Contiguity problem: "RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input."
It seems that LambdaLayer breaks contiguity when I use it:
layer(x).is_contiguous()
>> False
I have to call .contiguous() on the output wherever I train with it. Is that expected?
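For context, here is a minimal sketch of the check and the .contiguous() workaround described above. The LambdaLayer constructor arguments mirror the lambda-networks README example and the input shape is an assumption, not taken from the report:

import torch
from lambda_networks import LambdaLayer  # pip install lambda-networks

# Constructor values follow the README example; treat them as placeholders.
layer = LambdaLayer(dim=32, dim_out=32, r=23, dim_k=16, heads=4, dim_u=1)

x = torch.randn(1, 32, 64, 64)   # (batch, channels, height, width)
out = layer(x)
print(out.is_contiguous())       # False on the affected version, per the report above

# Workaround before the fix: materialise a contiguous copy before handing the
# tensor to a cuDNN-backed op that rejects strided inputs.
out = out.contiguous()
print(out.is_contiguous())       # True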
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 5 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
OK, now it’s working properly without .contiguous(), thanks! Though I don’t see much difference compared with .contiguous() in training time or memory consumption, so maybe it wasn’t that important to change… Anyway, thanks for the fix!
@Whiax ok, I reordered some einsums to avoid that contiguous call (see https://github.com/lucidrains/lambda-networks/releases/tag/0.2.2). Let me know if that works for you!
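To illustrate the idea behind the fix (a sketch of the general principle, not the library’s actual code): the non-contiguity typically comes from a trailing transpose/rearrange, which only changes strides. If the einsum is written so its output already has the axis order the next op needs, the trailing permute, and with it the .contiguous() copy, disappears:

import torch

y = torch.randn(2, 64, 16)   # placeholder shapes, e.g. (batch, n, dim)
v = y.permute(0, 2, 1)       # (batch, dim, n) as a strided view
print(v.is_contiguous())     # False: this is what forces a .contiguous() call downstream

w = v.contiguous()           # explicit copy into the new memory layout
print(w.is_contiguous())     # True, but it costs an extra copy on every forward pass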