
change to work with less VRAM - "CUDA out of memory"

See original GitHub issue

My graphics card is an NVIDIA GeForce GTX 1050 Ti with 4 GB of memory.

When I run the imagine command I receive this error:

  File "/home/user1/.pyenv/versions/3.10.6/envs/imaginairy-3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 3.95 GiB total capacity; 2.82 GiB already allocated; 69.38 MiB free; 2.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I didn’t find minimum requirements in the project’s README, so I am assuming a 4 GB GPU isn’t enough to run this application. However, I was able to run this stable diffusion project, so I am hoping some configuration could solve this for imaginAIry.
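The error message itself points at one knob to try: the allocator setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable. A minimal sketch, assuming the variable is set before any CUDA allocation happens in the process (the value 128 is an illustrative guess, not a recommendation from the project):

```python
import os

# The CUDA caching allocator reads this variable at its first allocation,
# so it must be set before any torch CUDA call in the process.
# max_split_size_mb:128 is an illustrative value, not a project recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

The same variable can also be exported in the shell before launching the `imagine` command, which avoids touching the code at all.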

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

2 reactions
thoastbrot commented, Sep 25, 2022

A 6 GB 1060 runs fine if I close apps like Chrome beforehand. There’s no room for parallel GPU usage with such cards.

1 reaction
brycedrennan commented, Sep 21, 2022

Thanks. I’ll see what can be done to optimise this.

Read more comments on GitHub >

Top Results From Across the Web

Stable Diffusion Runtime Error: How To Fix CUDA Out Of ...
Just change the -W 256 -H 256 part in the command. Try this fork as it requires a lot less VRAM according to...
Read more >
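The resolution tip above works because activation memory scales roughly with pixel count, so halving both dimensions (e.g. 512×512 down to 256×256) is about a 4× reduction. A quick sanity check of that ratio (pure arithmetic, a rough model rather than a measurement):

```python
# Pixel-count ratio when halving both image dimensions; VRAM used for
# activations scales roughly in proportion (a rough model, not a measurement).
full = 512 * 512
small = 256 * 256
print(full // small)  # 4
```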
Solving "CUDA out of memory" Error
Hello all, for me the cuda_empty_cache() alone did not work. What did work was: 1) del learners/dataloaders anything that used up the GPU...
Read more >
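The advice in that thread (delete whatever holds GPU tensors, then clear PyTorch’s cache) can be sketched as a small helper. This is a hedged sketch, not imaginAIry’s API: the names "learner" and "dataloader" are hypothetical stand-ins for whatever objects hold GPU memory in your session.

```python
import gc

def release_gpu(obj_names, namespace):
    """Drop references to GPU-holding objects, then release cached CUDA blocks."""
    for name in obj_names:
        namespace.pop(name, None)   # the equivalent of "del learner"
    gc.collect()                    # break reference cycles still holding tensors
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
    except ImportError:
        pass  # torch not installed; nothing cached to release

# Hypothetical usage: "learner" and "dataloader" are illustrative names.
release_gpu(["learner", "dataloader"], globals())
```

Note that `empty_cache()` only returns blocks PyTorch has already cached; it cannot free memory still referenced by live Python objects, which is why the `del` step comes first.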
CUDA out-of-mem error
This error message indicates that a project is too complex to be cached in the GPU's memory. Each project contains a certain amount...
Read more >
Why do I get CUDA out of memory when running PyTorch ...
PyTorch allocates memory when running this command: torch.rand(20000, 20000).cuda() #allocated 1.5GB of VRAM. What is the solution to this?
Read more >
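The ~1.5 GB figure in that snippet checks out: `torch.rand` defaults to float32, i.e. 4 bytes per element, so a 20000×20000 tensor needs:

```python
# Memory footprint of torch.rand(20000, 20000): float32 = 4 bytes/element.
n = 20000
bytes_per_elem = 4
total_bytes = n * n * bytes_per_elem          # 1.6e9 bytes
print(f"{total_bytes / 2**30:.2f} GiB")       # 1.49 GiB, i.e. the ~1.5 GB reported
```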
CUDA out of memory. - stabilityai/stable-diffusion
Thank you AMIR-01 that worked! Was searching for a long time. Could you explain why this helps? dhavalsdave Sep 16.
Read more >
