
CuPy fails to autodetect CUDA root directory after successful ROCm installation.


I installed CuPy using the ROCm install instructions:

$ export HCC_AMDGPU_TARGET=gfx1010,gfx1012
$ export __HIP_PLATFORM_HCC__
$ export CUPY_INSTALL_USE_HIP=1
$ export ROCM_HOME=/opt/rocm
$ pip install --no-cache-dir cupy

Installation completed successfully and I can create a CuPy ndarray, but attempting any operation on it results in the following error:

Python 3.7.6 (default, Jan  8 2020, 19:59:22) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cupy as cp
>>> x = cp.array([1, 2, 3.])
>>> 2 * x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "cupy/core/core.pyx", line 980, in cupy.core.core.ndarray.__mul__
  File "cupy/core/_kernel.pyx", line 951, in cupy.core._kernel.ufunc.__call__
  File "cupy/core/_kernel.pyx", line 974, in cupy.core._kernel.ufunc._get_ufunc_kernel
  File "cupy/core/_kernel.pyx", line 714, in cupy.core._kernel._get_ufunc_kernel
  File "cupy/core/_kernel.pyx", line 61, in cupy.core._kernel._get_simple_elementwise_kernel
  File "cupy/core/carray.pxi", line 179, in cupy.core.core.compile_with_cache
RuntimeError: Failed to auto-detect CUDA root directory. Please specify `CUDA_PATH` environment variable if you are using CUDA v9.0, v9.1 or versions not yet supported by CuPy.

System: CuPy 7.3.0, Linux Mint 19.3 on amd64, ROCm 3.1.0.

Thanks!

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 7 (6 by maintainers)

Top GitHub Comments

1 reaction
emcastillo commented, Apr 2, 2020

I will fix this by not looking at CUDA_PATH when an AMD build is done.
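A minimal sketch of what such a fix might look like (hypothetical helper and flag names, not CuPy's actual internals):

# Hypothetical sketch of the proposed behavior: on a HIP/ROCm build,
# resolve the toolkit root from ROCM_HOME and skip CUDA_PATH entirely.
import os

def get_toolkit_root(is_hip_build):
    if is_hip_build:
        # AMD build: never consult CUDA_PATH.
        return os.environ.get('ROCM_HOME', '/opt/rocm')
    cuda_path = os.environ.get('CUDA_PATH')
    if cuda_path is None:
        raise RuntimeError('Failed to auto-detect CUDA root directory.')
    return cuda_path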

1 reaction
mihirparadkar commented, Apr 1, 2020

I got the specific example working by setting CUDA_PATH=/opt/rocm. I’ll send a PR adding this step to the docs. I neglected to mention that my graphics card is an RX 5700 XT (gfx1010).
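For anyone hitting the same error, that workaround as a shell session (the /opt/rocm path assumes a default ROCm install location):

$ export CUDA_PATH=/opt/rocm
$ python -c "import cupy as cp; print(2 * cp.array([1, 2, 3.]))"  # should print [2. 4. 6.] once the kernel compiles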


Top Results From Across the Web

Environment variables — CuPy 11.4.0 documentation
These environment variables are used during installation (building CuPy from source). CUTENSOR_PATH: Path to the cuTENSOR root directory that contains lib and ...

Failed to import cupy - Stack Overflow
If you installed CuPy via wheels (cupy-cudaXXX or cupy-rocm-X-X), make sure that the package matches the version of CUDA or ROCm installed... (see the example after these results)

Installation guide - GROMACS 2024-dev-20221213-e7e04a3 ...
With GMX_MPI=ON, GROMACS attempts to automatically detect CUDA support in the underlying MPI library at compile time, and enables direct GPU communication ...

Package List — Spack 0.20.0.dev0 documentation
This is a list of things you can install using Spack. ... py-onnx-runtime, py-pybind11, py-pytest, root, sycl, vecmem; Link Dependencies: cuda, acts-dd4hep, ...

conda cudatoolkit path
I installed it with the following command: conda install pytorch ... CuPy fails to autodetect CUDA root directory after successful ROCm installation.
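As the Stack Overflow result above suggests, a common cause of import failures is a wheel/toolkit mismatch. A hedged illustration using CuPy's prebuilt-wheel naming scheme (the exact versions available depend on the CuPy release):

$ pip install cupy-cuda110   # wheel built against CUDA 11.0
$ pip install cupy-rocm-4-0  # wheel built against ROCm 4.0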
