Memory allocation for buffers
With the current implementation of buffers.py one can request a buffer size that does not fit in the available memory, because NumPy's implementation of np.zeros()
does not physically allocate the memory until it is actually used. But since the buffer is meant to be filled completely (otherwise one could simply use a smaller buffer), the machine eventually runs out of memory and starts to swap heavily. Because only small parts of the buffer (minibatches) are accessed at once, the system keeps swapping the necessary pages in and out of memory. At that point the progress of the run is most likely lost and one has to start a new run with a smaller buffer.
I would recommend using np.ones
instead, as it allocates the buffer up front and fails immediately if the system cannot provide enough memory. The only remaining issue is that there is no clear error message when system memory is exceeded: Python is simply killed by the OS with SIGKILL. Maybe one could catch that signal?
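To illustrate the suggestion, here is a minimal sketch of an eager-allocation helper (the function name `allocate_buffer` is an assumption for illustration, not part of buffers.py). Writing a nonzero value, as np.ones does, forces the pages to be physically backed, so an oversized request can surface as a MemoryError at construction time. Note that on Linux with memory overcommit enabled, even the write can instead trigger the OOM killer (SIGKILL), which is exactly the caveat raised above.

```python
import numpy as np

def allocate_buffer(shape, dtype=np.float32):
    """Eagerly allocate a zeroed buffer, failing fast if memory is short.

    np.zeros may return lazily-backed (copy-on-write) pages on many OSes,
    so memory is only faulted in as the buffer fills up during training.
    np.ones writes to every page, forcing physical allocation up front.
    """
    try:
        buf = np.ones(shape, dtype=dtype)  # touches every page
        buf.fill(0)                        # restore the zeros the caller expects
        return buf
    except MemoryError:
        size_gb = np.prod(shape) * np.dtype(dtype).itemsize / 1e9
        raise MemoryError(
            f"Buffer of shape {shape} (~{size_gb:.1f} GB) does not fit "
            "in memory; use a smaller buffer size."
        )
```

This keeps the caller-visible contents identical to np.zeros while moving the failure point from mid-training to buffer construction.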
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 1
- Comments: 25 (11 by maintainers)
Top GitHub Comments
oh too slow
This should work:
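(The snippet itself was not captured; the following is a hypothetical sketch of the eager-allocation idea discussed above, not the actual code from this comment. The class and attribute names `ReplayBuffer`, `buffer_size`, and `obs_dim` are illustrative assumptions.)

```python
import numpy as np

class ReplayBuffer:
    """Sketch: force allocation at construction time so an oversized
    buffer fails immediately instead of swapping later during training."""

    def __init__(self, buffer_size, obs_dim, dtype=np.float32):
        # np.ones touches every page; fill(0) restores the expected contents.
        self.obs = np.ones((buffer_size, obs_dim), dtype=dtype)
        self.obs.fill(0)
```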