"import numpy" leaks - reported by valgrind
Reproducing code example:
PYTHONMALLOC=malloc valgrind --leak-check=full --show-leak-kinds=definite --suppressions=pybind11/tests/valgrind-python.supp --gen-suppressions=all python3.9-dbg -c "import numpy; print(numpy.__version__)"
Error message:
The output is fairly long, so I made a gist: https://gist.github.com/bstaletic/061ea8912ed5bc4e238363a120f70a1c The suppression file is in the gist as well.
NumPy/Python version information:
Python is 3.9.1, a debug build with assertions enabled. NumPy is 1.20.0, but these leaks were present on 1.19.x as well. I haven't checked previous versions, because that's when pybind11 incorporated valgrind into its CI.
Issue Analytics
- Created 3 years ago
- Reactions: 1
- Comments: 25 (21 by maintainers)
OK, I had another look. The reason that I couldn't reproduce it at first is that it is related to cython's `CYTHON_COMPILING_IN_LIMITED_API` (I assume the debug build disables it; a PyPy build would as well). Not compiling with that flag ensures that valgrind cannot consider the memory "definitely lost", because the module is never "cleaned up" and a pointer to it is kept until the end. On the upside: that proves the leak is really just one tuple, and it doesn't really matter much.

Now I find it a bit strange that only one tuple is lost per file, and I do not really see where a `DECREF` might be missing (unless we actually have an import error that gets eaten later). I wonder whether Python's tuple free-list could be involved, but I expect it should be cleared out correctly (and even if it were not, it should probably not show up as lost memory in valgrind).

Thanks for confirming. We'll stick to the current suppression files, then, and hopefully at some point in the future we can try without them and find a beautiful world without suppression files 😉