
ndimage filters not compiling properly on docker install


I get a compilation error when using ndimage.maximum_filter on a Docker Ubuntu 18.04 install. If I install on my Ubuntu 20.04 machine within a pipenv (also with CUDA 10.1 and CuPy 8.2), I do not get this problem.

  • Conditions (you can just paste the output of python -c 'import cupy; cupy.show_config()')

Run on the docker image created below

OS                           : Linux-5.4.0-56-generic-x86_64-with-glibc2.27
CuPy Version                 : 8.2.0
NumPy Version                : 1.19.4
SciPy Version                : None
Cython Build Version         : 0.29.21
CUDA Root                    : /usr/local/cuda
CUDA Build Version           : 10010
CUDA Driver Version          : 11010
CUDA Runtime Version         : 10010
cuBLAS Version               : 10201
cuFFT Version                : 10101
cuRAND Version               : 10101
cuSOLVER Version             : (10, 2, 0)
cuSPARSE Version             : 10300
NVRTC Version                : (10, 1)
Thrust Version               : 100906
CUB Build Version            : 100800
cuDNN Build Version          : None
cuDNN Version                : None
NCCL Build Version           : 2708
NCCL Runtime Version         : 2708
cuTENSOR Version             : None
Device 0 Name                : TITAN RTX
Device 0 Compute Capability  : 75
Device 1 Name                : GeForce RTX 2070 SUPER
Device 1 Compute Capability  : 75
  • Code to reproduce

Dockerfile:

FROM nvidia/cuda:10.1-runtime-ubuntu18.04

RUN apt-get update && apt-get install --yes python3.8 python3.8-distutils wget libgomp1
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.8 get-pip.py
RUN python3.8 -m pip install cupy-cuda101==8.2.0

CMD python3.8 -c 'import cupy as cp; from cupyx.scipy import ndimage; ndimage.maximum_filter(cp.zeros(10), footprint=[1,1])'

Build and run with nvidia-docker:

docker build -t cupy-ndfilters-issue .
docker run --gpus all -it --rm cupy-ndfilters-issue:latest
  • Error messages, stack traces, or logs
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 516, in compile
    nvrtc.compileProgram(self.ptr, options)
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 108, in cupy_backends.cuda.libs.nvrtc.compileProgram
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 120, in cupy_backends.cuda.libs.nvrtc.compileProgram
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 58, in cupy_backends.cuda.libs.nvrtc.check_status
cupy_backends.cuda.libs.nvrtc.NVRTCError: NVRTC_ERROR_COMPILATION (6)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/filters.py", line 750, in maximum_filter
    return _min_or_max_filter(input, size, footprint, None, output, mode,
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/filters.py", line 767, in _min_or_max_filter
    return _filters_core._run_1d_filters(
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/_filters_core.py", line 97, in _run_1d_filters
    fltr(input, arg, axis, output, mode, cval, origin)
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/filters.py", line 838, in maximum_filter1d
    return _min_or_max_1d(input, size, axis, output, mode, cval, origin, 'max')
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/filters.py", line 851, in _min_or_max_1d
    return _filters_core._call_kernel(kernel, input, None, output,
  File "/usr/local/lib/python3.8/dist-packages/cupyx/scipy/ndimage/_filters_core.py", line 139, in _call_kernel
    kernel(*args)
  File "cupy/core/_kernel.pyx", line 821, in cupy.core._kernel.ElementwiseKernel.__call__
  File "cupy/core/_kernel.pyx", line 846, in cupy.core._kernel.ElementwiseKernel._get_elementwise_kernel
  File "cupy/_util.pyx", line 53, in cupy._util.memoize.decorator.ret
  File "cupy/core/_kernel.pyx", line 639, in cupy.core._kernel._get_elementwise_kernel
  File "cupy/core/_kernel.pyx", line 37, in cupy.core._kernel._get_simple_elementwise_kernel
  File "cupy/core/_kernel.pyx", line 60, in cupy.core._kernel._get_simple_elementwise_kernel
  File "cupy/core/core.pyx", line 1862, in cupy.core.core.compile_with_cache
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 335, in compile_with_cache
    return _compile_with_cache_cuda(
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 402, in _compile_with_cache_cuda
    ptx, mapping = compile_using_nvrtc(
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 173, in compile_using_nvrtc
    return _compile(source, options, cu_path,
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 157, in _compile
    ptx, mapping = prog.compile(options, log_stream)
  File "/usr/local/lib/python3.8/dist-packages/cupy/cuda/compiler.py", line 527, in compile
    raise CompileException(log, self.src, self.name, options,
cupy.cuda.compiler.CompileException: /usr/local/lib/python3.8/dist-packages/cupy/core/include/cupy/complex/complex.h(94): warning: __host__ annotation is ignored on a function("complex") that is explicitly defaulted on its first declaration

/usr/local/lib/python3.8/dist-packages/cupy/core/include/cupy/complex/complex.h(94): warning: __device__ annotation is ignored on a function("complex") that is explicitly defaulted on its first declaration

/usr/local/lib/python3.8/dist-packages/cupy/core/include/cupy/complex/complex.h(101): warning: __host__ annotation is ignored on a function("complex") that is explicitly defaulted on its first declaration

/usr/local/lib/python3.8/dist-packages/cupy/core/include/cupy/complex/complex.h(101): warning: __device__ annotation is ignored on a function("complex") that is explicitly defaulted on its first declaration

/tmp/tmpgy86vl54/f90c07044ccc3d95942ac864d5666e8e_2.cubin.cu(8): catastrophic error: cannot open source file "math_constants.h"

1 catastrophic error detected in the compilation of "/tmp/tmpgy86vl54/f90c07044ccc3d95942ac864d5666e8e_2.cubin.cu".
Compilation terminated.

Adding @grlee77 for visibility, as https://github.com/mritools/cupyimg also raises this error.
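
For reference, a quick way to confirm that the header NVRTC complains about is simply absent from the runtime base image (a minimal check, assuming the cupy-ndfilters-issue image built above) is to list the toolkit include directory inside the container:

docker run --rm cupy-ndfilters-issue:latest ls -l /usr/local/cuda/include/math_constants.h

On the runtime image this should report "No such file or directory", matching the catastrophic error above; on a machine with the full CUDA 10.1 toolkit installed the header is present under /usr/local/cuda/include, which would explain why the Ubuntu 20.04 pipenv setup works.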


Top GitHub Comments

2 reactions
leofang commented, Jan 19, 2022

FYI: CUDA headers are now open-sourced: https://gitlab.com/nvidia/headers/cuda (cc: @kmaehashi)

1 reaction
kkraus14 commented, Jan 20, 2021

> Hi @kkraus14 @jakirkham Not sure if PFN folks have raised this or not, do you think it is possible for CuPy to bundle some of the CUDA headers despite the EULA limitation? It’d be great to have math_constants.h and the cuRAND device headers bundled so that they can be compiled with NVRTC. This is one of the bug reports due to the lack of those headers.
>
> In general, I think many headers should be allowed for redistribution due to the need for JIT compilation. I wonder how other libraries resolve/bypass this limitation?

The only approval we have for conda-forge is for the redistributable pieces, where this would definitely not be permitted. I would suggest emailing nvidia-compute-license-questions@nvidia.com and explaining the situation to see what could be done.
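
Until the header question is settled, one workaround (a sketch, assuming the -devel image variants ship the full toolkit headers that NVRTC needs, math_constants.h included) is to build the reproducer on the devel image instead of the runtime image:

# Identical to the Dockerfile above, except for the base image: the devel
# variant carries /usr/local/cuda/include, so NVRTC can find math_constants.h.
FROM nvidia/cuda:10.1-devel-ubuntu18.04

RUN apt-get update && apt-get install --yes python3.8 python3.8-distutils wget libgomp1
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.8 get-pip.py
RUN python3.8 -m pip install cupy-cuda101==8.2.0

CMD python3.8 -c 'import cupy as cp; from cupyx.scipy import ndimage; ndimage.maximum_filter(cp.zeros(10), footprint=[1,1])'

The trade-off is a much larger image; copying only /usr/local/cuda/include out of a devel stage in a multi-stage build is another option, but the redistribution caveats discussed above would apply to any image that is then published.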


