CI: CUDA Python test is broken
CUDA Python CI is broken. It appears to be caused by a texture-memory issue in the combination of CUDA 11.5 and CUDA Python 11.7.1.
https://ci.preferred.jp/cupy.linux.cuda11x-cuda-python/105617/
01:27:04.155812 STDOUT 1589] FAILED cupy_tests/core_tests/test_userkernel.py::TestElementwiseKernelTexture_param_0_{dimensions=(64, 0, 0)}::test_texture_input
01:27:04.155819 STDOUT 1589] FAILED cupy_tests/core_tests/test_userkernel.py::TestElementwiseKernelTexture_param_1_{dimensions=(64, 32, 0)}::test_texture_input
01:27:04.155823 STDOUT 1589] FAILED cupy_tests/core_tests/test_userkernel.py::TestElementwiseKernelTexture_param_2_{dimensions=(64, 32, 19)}::test_texture_input
01:27:04.155828 STDOUT 1589] FAILED cupy_tests/cuda_tests/test_texture.py::TestTexture::test_fetch_float_texture[_param_0_{dimensions=(64, 0, 0), mem_type='CUDAarray', target='object'}]
01:27:04.155834 STDOUT 1589] FAILED cupy_tests/cuda_tests/test_texture.py::TestTexture::test_fetch_float_texture[_param_6_{dimensions=(64, 32, 0), mem_type='CUDAarray', target='object'}]
01:27:04.155838 STDOUT 1589] FAILED cupy_tests/cuda_tests/test_texture.py::TestTexture::test_fetch_float_texture[_param_12_{dimensions=(64, 32, 19), mem_type='CUDAarray', target='object'}]
01:27:04.155844 STDOUT 1589] FAILED cupy_tests/cuda_tests/test_texture.py::TestTextureVectorType::test_fetch_float4_texture[_param_0_{target='object'}]
01:27:04.155886 STDOUT 1589] = 7 failed, 98674 passed, 3634 skipped, 1043 deselected, 88 xfailed, 2569 warnings in 3444.73s (0:57:24) =
Issue Analytics
- Created: a year ago
- Comments: 9 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The CUDA Python team’s investigation has revealed that this is not directly caused by CUDA Python, but is essentially a backward-compatibility issue in the texture-related ABIs of CUDA 11.5/11.6. This backward-compatibility issue has been resolved in CUDA 11.7, so it would be helpful if you could test CUDA Python against CUDA 11.7.
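The version matrix described above can be captured in a small sketch. This is purely illustrative — the helper name and version-tuple format are assumptions, not CuPy's actual test-skip logic:

```python
# Hypothetical sketch of the failing version matrix: CUDA 11.5/11.6
# toolkits paired with cuda-python 11.7.1 hit the texture ABI
# incompatibility; CUDA 11.7 toolkits (or older cuda-python) do not.
def is_bad_texture_combo(toolkit, cuda_python):
    """Return True for the combination known to break the texture tests."""
    major, minor = toolkit[0], toolkit[1]
    return major == 11 and minor in (5, 6) and cuda_python == (11, 7, 1)

print(is_bad_texture_combo((11, 5, 0), (11, 7, 1)))  # → True (the failing CI combo)
print(is_bad_texture_combo((11, 7, 0), (11, 7, 1)))  # → False (fixed toolkit)
print(is_bad_texture_combo((11, 5, 0), (11, 7, 0)))  # → False (proposed workaround)
```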
I built CuPy (v11) with the following combinations and ran cupy_tests/core_tests/test_userkernel.py and cupy_tests/cuda_tests/test_texture.py. The error occurred only with the combination of CUDA 11.5.0 and CUDA Python 11.7.1. For CUDA 11.5, is it possible to use, for example, CUDA Python 11.7.0 as a workaround?
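If pinning works, a stopgap for the CI configuration might look roughly like the fragment below. This is a sketch, not the actual CI script: the `cuda-python` package name is the one on PyPI, and the pytest invocation simply re-runs the two test files from the failure log above:

```shell
# Hypothetical stopgap for CUDA 11.5 runners: pin CUDA Python back to
# 11.7.0, the release preceding the failing 11.7.1 combination, then
# re-run only the texture-related test files from the log above.
pip install "cuda-python==11.7.0"
python -m pytest cupy_tests/core_tests/test_userkernel.py \
                 cupy_tests/cuda_tests/test_texture.py
```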