RuntimeError: CUDA out of memory.
See original GitHub issue

I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 5.80 GiB total capacity; 4.14 GiB already allocated; 154.56 MiB free; 4.24 GiB reserved in total by PyTorch)
Is there a way to allocate more memory? I do not understand why 4.14 GiB are already allocated.
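The figures in the message fit together once you know that "reserved" is the pool PyTorch's caching allocator holds from the driver, while "already allocated" is only the portion occupied by live tensors; the rest of the card is used by the CUDA context, the display, and other processes. A back-of-the-envelope check with the numbers from the error above (the split is an interpretation of the message, not something it guarantees):

```python
# Reconcile the figures from the OOM message above (GPU 0, values in MiB).
total_capacity = 5.80 * 1024   # 5.80 GiB reported for the GPU
allocated      = 4.14 * 1024   # live PyTorch tensors
reserved       = 4.24 * 1024   # caching allocator's pool (allocated + cached)
free           = 154.56        # free as seen by the allocator
requested      = 88.00         # size of the failed allocation

# Reserved-but-unused memory: reusable in principle, but possibly fragmented.
cached = reserved - allocated
# Memory PyTorch never had: CUDA context, display, other processes.
outside_pytorch = total_capacity - reserved - free

print(f"cached inside PyTorch: {cached:.2f} MiB")       # ~102 MiB
print(f"used outside PyTorch:  {outside_pytorch:.2f} MiB")  # ~1443 MiB
# The 88 MiB request can still fail: the ~102 MiB of cached memory may not
# contain a single contiguous block that large.
```

So roughly 1.4 GiB of the 5.8 GiB card is not available to PyTorch at all, which is why "already allocated" looks smaller than expected.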
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 18
- Comments: 21 (2 by maintainers)
Top Results From Across the Web

"RuntimeError: CUDA error: out of memory" - Stack Overflow
The error occurs because you ran out of memory on your GPU. One way to solve it is to reduce the batch size...

Solving the "RuntimeError: CUDA Out of memory" error
Reduce the `batch_size` · Lower the Precision · Do what the error says ·...

Solving "CUDA out of memory" Error
RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU ... 4) Here is the full code for releasing CUDA memory:...

Resolving CUDA Being Out of Memory With Gradient ...
RuntimeError: CUDA error: out of memory. There's nothing to explain actually, ... This solution has been a de facto for solving out of...

Stable Diffusion Runtime Error: How To Fix CUDA Out Of ...
How To Fix Runtime Error: CUDA Out Of Memory In Stable Diffusion · Restarting the PC worked for some people. · Reduce the...
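The two fixes that recur in the results above, reducing the batch size and lowering the precision, work because activation memory scales linearly with both the batch size and the bytes per element. A rough estimator (the per-sample element count below is a made-up figure for illustration, not measured from any particular model):

```python
def activation_mib(batch_size, elements_per_sample, bytes_per_element=4):
    """Rough activation-memory estimate in MiB.

    elements_per_sample: total activation elements produced per input
    bytes_per_element:   4 for fp32, 2 for fp16
    """
    return batch_size * elements_per_sample * bytes_per_element / 1024 ** 2

# Hypothetical model with ~50M activation elements per sample:
per_sample = 50_000_000
fp32_b16 = activation_mib(16, per_sample)                       # fp32, batch 16
fp32_b8  = activation_mib(8,  per_sample)                       # halve the batch
fp16_b8  = activation_mib(8,  per_sample, bytes_per_element=2)  # then halve precision

print(fp32_b16, fp32_b8, fp16_b8)  # each step halves the footprint
```

This is only the activation term; weights, gradients, and optimizer state add a batch-independent baseline, which is why halving the batch rarely halves total GPU usage exactly.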
I just ran into the same issue; for me it was: RuntimeError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 10.74 GiB total capacity; 7.82 GiB already allocated; 195.75 MiB free; 9.00 GiB reserved in total by PyTorch)
I was able to fix it with the following steps:
I would still love to be able to process full resolution pictures if anyone has a solution.
A simple workaround is to reduce your image size to 640x480 or below. Moreover, if you are trying to use '--with-scratches', GPU memory use increases dramatically. But it is a pity to have to downscale your photo; with DeOldify this never happens.
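Downscaling to fit 640x480 while keeping the aspect ratio is just a min-of-two-scales computation. A minimal sketch (pure arithmetic; the actual resize would be done with Pillow or the repo's own preprocessing, and the function name here is made up):

```python
def fit_within(width, height, max_w=640, max_h=480):
    """Largest size within (max_w, max_h) that preserves the aspect ratio."""
    # Take the tighter of the two constraints; never upscale small images.
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# A 4000x3000 scan becomes 640x480; a small image is left untouched.
print(fit_within(4000, 3000))  # -> (640, 480)
print(fit_within(320, 200))    # -> (320, 200)
```

The same arithmetic applies to portrait scans: a 3000x4000 input is limited by the height constraint and comes out at 360x480.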