
RuntimeError: CUDA out of memory.

See original GitHub issue

I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 5.80 GiB total capacity; 4.14 GiB already allocated; 154.56 MiB free; 4.24 GiB reserved in total by PyTorch)

Is there a way to allocate more memory? I do not understand why 4.14 GiB are already allocated.
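For context: "already allocated" is memory held by live tensors (model weights and activations), while "reserved" is PyTorch's caching allocator holding on to freed blocks for reuse; you cannot allocate past the card's physical capacity, so the common fix is to shrink the workload until it fits. Below is a minimal, GPU-free sketch of the retry-with-smaller-batch pattern; process_batch and all of its numbers are invented here to mimic the error message above, and a real model call would take its place:

```python
# Hypothetical stand-in for a model forward pass: the numbers below are
# invented to mirror the error message (154.56 MiB free, 88 MiB request).
GPU_BUDGET_MIB = 154
COST_PER_SAMPLE_MIB = 11

def process_batch(batch_size):
    """Pretend GPU work that fails the same way PyTorch does when memory runs out."""
    needed = batch_size * COST_PER_SAMPLE_MIB
    if needed > GPU_BUDGET_MIB:
        raise RuntimeError(f"CUDA out of memory. Tried to allocate {needed}.00 MiB")
    return f"processed {batch_size} samples"

def run_with_backoff(batch_size):
    """Halve the batch size until the work fits, a common OOM mitigation."""
    while batch_size >= 1:
        try:
            return process_batch(batch_size)
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # a different error: re-raise rather than mask it
            batch_size //= 2
    raise RuntimeError("even batch_size=1 does not fit on this GPU")

print(run_with_backoff(64))  # 64 -> 32 -> 16 -> 8; 8 * 11 MiB = 88 MiB fits
```

In real PyTorch code you might also call torch.cuda.empty_cache() in the except branch before retrying, so the caching allocator releases its reserved blocks back to the driver.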

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Reactions: 18
  • Comments: 21 (2 by maintainers)

Top GitHub Comments

6 reactions
mebelz commented, Oct 2, 2020

I just ran into the same issue, for me it was - RuntimeError: CUDA out of memory. Tried to allocate 734.00 MiB (GPU 0; 10.74 GiB total capacity; 7.82 GiB already allocated; 195.75 MiB free; 9.00 GiB reserved in total by PyTorch)

I was able to fix with the following steps:

  1. In run.py I changed test_mode to Scale / Crop to confirm the diagnosis -> the input picture was indeed too large.
  2. I rewrote data_transforms in test.py so that it scales not to a 256 px max dimension, but to a 1.3 Mpx total area (which seems to be the limit of my card).
  3. The for-loop at the end of test.py seems to leak GPU memory (the 1st iteration worked while the 2nd and 3rd didn't). I extracted the loop body into a new function so that Python can garbage-collect the temporary variables.

I would still love to be able to process full resolution pictures if anyone has a solution.
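Step 2 above, scaling to a total-area budget rather than capping the longest side, is simple arithmetic. Here is a sketch; the function name is hypothetical and only the 1.3 Mpx budget comes from the comment, nothing is taken from the project's actual data_transforms code:

```python
import math

MAX_AREA = 1_300_000  # ~1.3 Mpx, the budget mebelz found for a 10.74 GiB card

def fit_to_area(width, height, max_area=MAX_AREA):
    """Scale (width, height) down so width*height <= max_area, keeping the aspect ratio."""
    area = width * height
    if area <= max_area:
        return width, height  # already within budget, leave it alone
    scale = math.sqrt(max_area / area)  # shrink both sides by the same factor
    return max(1, int(width * scale)), max(1, int(height * scale))

# A 4000x3000 photo (12 Mpx) gets scaled down to roughly 1.3 Mpx:
w, h = fit_to_area(4000, 3000)
print(w, h, w * h <= MAX_AREA)
```

Budgeting by area rather than by max dimension matters because GPU memory use grows with the pixel count: a 256 px cap still lets a 256x4000 panorama through, while an area cap bounds the actual workload.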

4 reactions
akenateb commented, Nov 11, 2020

A simple workaround is to reduce your image size to 640x480 or below. Moreover, if you use '--with-scratches', GPU memory usage increases dramatically. It is a pity to have to downscale your photo, though; with DeOldify this never happens.
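The downscaling workaround can be sketched as below; fit_within is a hypothetical helper computing the target dimensions, and with Pillow, img.thumbnail((640, 480)) achieves the same effect in place before inference:

```python
def fit_within(width, height, max_w=640, max_h=480):
    """Shrink (width, height) to fit inside a max_w x max_h box, keeping the aspect ratio."""
    scale = min(max_w / width, max_h / height, 1.0)  # the 1.0 means: never upscale
    return max(1, round(width * scale)), max(1, round(height * scale))

print(fit_within(1920, 1080))  # a 1080p photo shrinks to fit the 640x480 box
```

Taking the minimum of both per-axis ratios guarantees neither side exceeds its limit, which is also how Pillow's thumbnail method behaves.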

Read more comments on GitHub >

Top Results From Across the Web

"RuntimeError: CUDA error: out of memory" - Stack Overflow
The error occurs because you ran out of memory on your GPU. One way to solve it is to reduce the batch size...
Read more >
Solving the “RuntimeError: CUDA Out of memory” error
Solving the “RuntimeError: CUDA Out of memory” error · Reduce the `batch_size` · Lower the Precision · Do what the error says ·...
Read more >
Solving "CUDA out of memory" Error
RuntimeError : CUDA out of memory. Tried to allocate 978.00 MiB (GPU ... 4) Here is the full code for releasing CUDA memory:...
Read more >
Resolving CUDA Being Out of Memory With Gradient ...
RuntimeError: CUDA error: out of memory. There's nothing to explain actually, ... This solution has been a de facto for solving out of...
Read more >
Stable Diffusion Runtime Error: How To Fix CUDA Out Of ...
How To Fix Runtime Error: CUDA Out Of Memory In Stable Diffusion · Restarting the PC worked for some people. · Reduce the...
Read more >
