CUBLAS ERROR while running example
See original GitHub issue

When running the first example (distogram prediction), I get the following error. Is this error reproducible for you? It looks like it's crashing in the trunk, in the attention qkv remapping.
Environment: torch 1.8.0, CUDA 10.2, Python 3.6.9
Traceback (most recent call last):
  File "test.py", line 25, in <module>
    msa_mask = msa_mask
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 840, in forward
    msa_mask = msa_mask
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 511, in forward
    x = attn(x, shape = seq_shape, mask = mask) + x
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 45, in forward
    return self.fn(x, *args, **kwargs)
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 107, in forward
    return attn(x, *args, shape = shape, mask = mask, **kwargs)
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 411, in forward
    w_out = self.attn_width(w_x, mask = w_mask)
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/af/alphafold2_pytorch/alphafold2.py", line 235, in forward
    q, k, v = (self.to_q(x), *self.to_kv(context).chunk(2, dim = -1))
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 94, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/ychnh/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
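The traceback bottoms out in a plain `F.linear` call, so a quick way to tell whether the problem is in the model or in the CUDA runtime itself is to run the same kind of linear layer in isolation. The following is a hypothetical standalone diagnostic (not part of the repository); it falls back to CPU when no GPU or no PyTorch install is visible:

```python
try:
    import torch
except ImportError:
    torch = None  # environment without PyTorch; nothing to diagnose


def smoke_test_linear():
    """Reproduce the failing call (a Linear forward, i.e. F.linear) in isolation.

    If this also raises CUBLAS_STATUS_INTERNAL_ERROR on the GPU, the problem
    is the torch/CUDA installation, not the alphafold2_pytorch code.
    """
    if torch is None:
        return "torch not installed"
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(2, 8, device=device)
    layer = torch.nn.Linear(8, 4).to(device)
    out = layer(x)  # on CUDA, this is where cublasCreate would fail
    return (device, tuple(out.shape))


print(smoke_test_linear())
```

If the isolated call succeeds on the GPU while the model still crashes, the mismatch theory below is less likely and the issue is worth reopening.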
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 9 (5 by maintainers)
Top Results From Across the Web
- CUBLAS initialization failed when running cuBLAS example: "I kept getting the 'CUBLAS initialization failed' error when trying to run the example from the cuBLAS website."
- tensorflow running error with cublas - gpu: "This was a nightmare to find a fix for - but the fix is somewhat simple. https://www.tensorflow.org/guide/using_gpu"
- Runtime error when translating using ctranslate2 - Support: "this error happens when I'm translating a batch of sentences or just a single sentence, the model can be loaded without problem at..."
- CS 179: Lecture 10: "Naming, and how we use cuBLAS to accelerate linear algebra computations with already optimized implementations of Basic Linear Algebra Subroutines (BLAS)."
- cuBLAS Library: "All cuBLAS library function calls return the error status cublasStatus_t. ... For example, on Linux, to compile a small application using cuBLAS, ..."
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@lucidrains I thought 1.8 was not compatible with 10.2 (based on what I saw in the "Install PyTorch" menu at https://pytorch.org a couple of days ago). It seems they are documented as compatible now.
I am having the same issue with 1.8. It works fine with 1.7.1 and 1.9, so maybe there is an issue with the PyTorch library.
OK, it is a non-issue: my CUDA 10.2 installation is not compatible with torch 1.8.0. I will close this.
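For reference, the torch/CUDA combinations mentioned by the commenters can be summarized in a small helper. This is a hypothetical sketch: the entries are only the combinations reported in this thread, not an official PyTorch support matrix, and the `check_combo` name is invented for illustration.

```python
# Combinations reported in this issue thread (NOT an official matrix).
REPORTED_WORKING = {("1.7.1", "10.2"), ("1.9.0", "10.2")}  # "1.9" assumed to mean 1.9.0
REPORTED_BROKEN = {("1.8.0", "10.2")}  # the combination that crashed here


def check_combo(torch_version, cuda_version):
    """Classify a (torch, CUDA) pair against the reports in this thread."""
    combo = (torch_version, cuda_version)
    if combo in REPORTED_BROKEN:
        return "reported broken"
    if combo in REPORTED_WORKING:
        return "reported working"
    return "unreported"


print(check_combo("1.8.0", "10.2"))  # -> reported broken
```

For anything "unreported", the install selector at https://pytorch.org remains the authoritative source for which wheel to pick for a given CUDA toolkit.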