UnboundLocalError on cuda/memory.pyx
Hi!
I’m having an error when using a GPU from SpaCy, but it originates in a cupy module. The relevant (and repeatable) traceback is as follows:
```
File "spacy/ml/parser_model.pyx", line 327, in spacy.ml.parser_model.step_forward.backprop_parser_step
File "spacy/ml/parser_model.pyx", line 277, in spacy.ml.parser_model.ParserStepModel.backprop_step
File "/home/user/.cache/pypoetry/virtualenvs/notebooks-noypvBLr-py3.8/lib/python3.8/site-packages/spacy/util.py", line 958, in get_async
  array.set(numpy_array, stream=stream)
File "cupy/core/core.pyx", line 1622, in cupy.core.core.ndarray.set
File "cupy/core/core.pyx", line 1649, in cupy.core.core.ndarray.set
File "cupy/core/core.pyx", line 1651, in cupy.core.core.ndarray.set
File "cupy/cuda/memory.pyx", line 479, in cupy.cuda.memory.MemoryPointer.copy_from_host_async
UnboundLocalError: local variable 'ptr' referenced before assignment
```
I think the solution is pretty straightforward. In these lines
https://github.com/cupy/cupy/blob/2e95ed7e994b1db230501a69e8a323666164fb80/cupy/cuda/memory.pyx#L462-L480
the variable `ptr` is effectively undefined when `stream` is not `None`.
I’m not very used to the library, but I think that just swapping L473 and L474, so that `ptr` is defined regardless of the value of `stream`, should sort it out.
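To illustrate the failure mode: this is the classic Python pattern where a local is assigned on only one branch but read on all of them. A minimal sketch, not cupy’s actual code — `host_ptr` and the two function names here are hypothetical stand-ins:

```python
# Minimal reproduction of the bug pattern: `ptr` is bound only on the
# `stream is None` branch, then read unconditionally afterwards.
# These functions are illustrative stand-ins, not cupy's real code.

def copy_from_host_async_buggy(host_ptr, stream=None):
    if stream is None:
        ptr = host_ptr      # only bound on the synchronous path
    # (stream-specific setup would sit here)
    return ptr              # UnboundLocalError when stream is not None

def copy_from_host_async_fixed(host_ptr, stream=None):
    ptr = host_ptr          # hoisted above the branch, mirroring the
    if stream is not None:  # proposed L473/L474 swap
        pass                # stream-specific setup would go here
    return ptr
```

Calling the buggy variant with a non-`None` stream raises exactly the `UnboundLocalError: local variable 'ptr' referenced before assignment` seen in the traceback, while the fixed variant works on both paths.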
Let me know if I can help in any way 😃
Issue Analytics
- State:
- Created 3 years ago
- Comments: 8 (8 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Of course 😃 I’ll send it right away. Just arrived home; I’m currently on v9.0.0b3.
Thanks for your fast response, @leofang. I’ll try to build a minimal example from SpaCy’s code.