CUDA error running tests
Installed successfully on an RTX 3090 card, CUDA 11.7 with Driver Version 515.65.01.
$ pytest -q -s tests/test_flash_attn.py >> test.txt
CUDA error (csrc/flash_attn/src/fmha_fprop_fp16_kernel.sm80.cu:74): invalid argument
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
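For context (not part of the original report), a quick sanity check like the following can confirm that the PyTorch build's CUDA version matches the nvcc toolkit above and show which GPU and compute capability the tests will run on:

import torch

print(torch.__version__)                    # PyTorch version
print(torch.version.cuda)                   # CUDA version PyTorch was built with; should report 11.7 here
print(torch.cuda.get_device_name(0))        # e.g. NVIDIA GeForce RTX 3090
print(torch.cuda.get_device_capability(0))  # (8, 6) on an RTX 3090, (8, 0) on an A100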
Backward for headdim=128 is only supported on A100 (which has more shared memory than other GPUs).
Yeah, that could be right; we compare against the reference implementation, which can take a lot of memory. Maybe we should skip those steps for non-A100 GPUs.
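A minimal sketch of that kind of skip, assuming the tests use pytest and torch and are parametrized over a head dimension d; the compute-capability check and the helper name are illustrative assumptions, not the repo's actual test code:

import pytest
import torch

# A100 reports compute capability (8, 0) (sm80) and has more shared memory per SM
# than other Ampere parts such as the RTX 3090, which reports (8, 6) (sm86).
IS_SM80 = torch.cuda.is_available() and torch.cuda.get_device_capability(0) == (8, 0)

def maybe_skip_backward(d):
    # Hypothetical helper: skip the backward comparison for headdim=128 on non-A100 GPUs.
    if d == 128 and not IS_SM80:
        pytest.skip("backward for headdim=128 is only supported on A100 (sm80)")

Gating on compute capability (8, 0) distinguishes A100 from other Ampere GPUs like the 3090, so the headdim=128 backward check would be skipped instead of failing on cards without enough shared memory.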