Excessive malloc time on Nvidia Ampere Arch
We recently upgraded our servers from V100s to A100s and encountered a lengthy initial cupy.cuda.runtime.malloc call. Narrowing down the problem, we were able to reproduce it with:
docker run -it --rm cupy/cupy:v8.0.0 python3 -c 'import time, cupy; start=time.time(); ptr=cupy.cuda.runtime.malloc(1); end=time.time(); print(end-start); cupy.cuda.runtime.free(ptr)'
This took 67 seconds to malloc a single byte. However, subsequent mallocs within the same process took only a fraction of a second:
docker run -it --rm cupy/cupy:v8.0.0 python3 -c 'import time, cupy; start=time.time(); ptr=cupy.cuda.runtime.malloc(1); end=time.time(); print(end-start); cupy.cuda.runtime.free(ptr); start=time.time(); ptr=cupy.cuda.runtime.malloc(1); end=time.time(); print(end-start); cupy.cuda.runtime.free(ptr)'
This prints roughly 67 seconds followed by 0.15 seconds. We put the V100s back into the same server, ran the same docker command, and it always took about 0.15 seconds. We have also tried mallocing different amounts, but whether it is 1 byte or several gigabytes, the first allocation always takes roughly 67 seconds. Most of our processes do not run that long, so the initial delay is doubling or tripling our runtimes.
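The one-liners above can be unpacked into a small timing helper. Here is a minimal sketch; the `work` callable is a stand-in for `cupy.cuda.runtime.malloc`, which needs CuPy and a GPU to run:

```python
import time

def time_once(work):
    """Run `work` once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = work()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Stand-in for: cupy.cuda.runtime.malloc(1)
# On an A100 with the cupy/cupy:v8.0.0 image the first call took
# ~67 s and subsequent calls ~0.15 s; on V100 both were ~0.15 s.
_, first = time_once(lambda: None)
_, second = time_once(lambda: None)
print(first, second)
```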
We have contacted Nvidia, and they were able to reproduce the same lengthy cupy.cuda.runtime.malloc delay on Ampere A100 and RTX A6000, with no such delay on Volta and Turing (they tested CuPy on A100, RTX A6000, GV100, and T4).
We plan on additional testing when time permits, but we were wondering whether this is already known to the CuPy team (a quick search for Ampere and/or malloc delay returned nothing), and whether there are additional commands or configurations we can try to help debug this problem once we have the hardware set up again.
Issue Analytics
- Created 3 years ago
- Reactions: 1
- Comments: 12 (10 by maintainers)
Top GitHub Comments
Right, but it does run while the initialization takes a long time.
Thanks for reporting! Currently, the CuPy image on Docker Hub uses CUDA Toolkit 10.2 (Dockerfile), which does not support A100. We had a similar issue even in a bare-metal environment with CUDA 10.x + A100. We will consider upgrading the image to use CUDA 11 or later in the v9 releases.
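For anyone checking whether their own environment matches this diagnosis: the CUDA runtime reports its version as a single packed integer (major * 1000 + minor * 10). A small decoder, assuming that encoding; in a real session the integer would come from cupy.cuda.runtime.runtimeGetVersion():

```python
def decode_cuda_version(packed):
    """Decode CUDA's packed version integer, e.g. 10020 -> (10, 2)."""
    return packed // 1000, (packed % 1000) // 10

# 10020 is CUDA 10.2, which predates A100 (sm_80) support;
# 11000 (CUDA 11.0) is the first release that supports Ampere.
print(decode_cuda_version(10020))  # -> (10, 2)
print(decode_cuda_version(11000))  # -> (11, 0)
```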