change to work with less VRAM - "CUDA out of memory"
See original GitHub issue
My graphics card is an NVIDIA GeForce GTX 1050 Ti with 4 GB of memory. When I run the imagine command I receive this error:
File "/home/user1/.pyenv/versions/3.10.6/envs/imaginairy-3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 3.95 GiB total capacity; 2.82 GiB already allocated; 69.38 MiB free; 2.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I didn’t find minimum requirements in the project’s README, so I am assuming a 4 GB GPU isn’t enough to run this application. However, I was able to run this stable diffusion project, so I am hoping some configuration could solve this for imaginAIry.
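The error message itself suggests one knob to try before anything else: setting max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF, which can reduce fragmentation in PyTorch's caching allocator. A minimal sketch, assuming it is set before the first CUDA allocation (the value 128 is an arbitrary example, not a project recommendation):

    import os

    # Must be set before the first CUDA allocation, ideally before importing
    # torch. max_split_size_mb caps how large a cached block the allocator
    # will split, which can reduce fragmentation on small-VRAM cards.
    # The value 128 is an example, not a tuned recommendation.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch

    print(torch.cuda.get_device_name(0))

When using the imagine CLI rather than the Python API, the same variable can be exported in the shell environment before running the command.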
Issue Analytics
- State:
- Created a year ago
- Comments: 5 (2 by maintainers)
Top Results From Across the Web
Stable Diffusion Runtime Error: How To Fix CUDA Out Of ...
Just change the -W 256 -H 256 part in the command. Try this fork as it requires a lot less VRAM according to...
Read more >
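The dimension advice in the result above applies directly here: VRAM use grows with the generated image size, so dropping from the usual 512x512 default to 256x256 is often enough to fit a 4 GB card. A sketch using imaginAIry's Python API, assuming ImaginePrompt accepts width and height keywords (check the project's README for the exact signature):

    from imaginairy import ImaginePrompt, imagine_image_files

    # Assumption: ImaginePrompt takes width/height keywords. A smaller output
    # resolution means smaller latents and activations, hence less VRAM.
    prompts = [ImaginePrompt("a scenic landscape", width=256, height=256)]
    imagine_image_files(prompts, outdir="./outputs")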
Solving "CUDA out of memory" Error
Hello all, for me torch.cuda.empty_cache() alone did not work. What did work was: 1) del learners/dataloaders, anything that used up the GPU...
Read more >
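Spelling out the recipe from the snippet above: torch.cuda.empty_cache() can only release memory that nothing references anymore, so deleting the objects has to come first. A generic illustration with placeholder objects, not imaginAIry internals:

    import gc
    import torch

    # Placeholder objects standing in for whatever is holding GPU memory.
    model = torch.nn.Linear(4096, 4096).cuda()
    batch = torch.randn(512, 4096, device="cuda")

    # 1) Drop every Python reference to the tensors/modules first ...
    del model, batch
    gc.collect()

    # 2) ... then ask the caching allocator to release the now-unused blocks.
    torch.cuda.empty_cache()
    print(torch.cuda.memory_allocated())  # back near zero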
CUDA out-of-mem error
This error message indicates that a project is too complex to be cached in the GPU's memory. Each project contains a certain amount...
Read more >
Why do I get CUDA out of memory when running PyTorch ...
PyTorch allocates memory when running this command: torch.rand(20000, 20000).cuda() # allocated 1.5GB of VRAM. What is the solution to this?
Read more >
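The 1.5 GB figure in that result is plain arithmetic: a float32 tensor costs 4 bytes per element, so 20000 x 20000 elements come to roughly 1.49 GiB before any model weights are even loaded:

    elements = 20000 * 20000          # 400,000,000 elements
    bytes_needed = elements * 4       # float32 = 4 bytes -> 1,600,000,000 bytes
    print(bytes_needed / 1024**3)     # ~1.49 GiB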
CUDA out of memory. - stabilityai/stable-diffusion
Thank you AMIR-01, that worked! Was searching for a long time. Could you explain why this helps? (dhavalsdave, Sep 16)
Read more >
Top GitHub Comments
A 6 GB 1060 runs fine if I close apps like Chrome beforehand. There’s no room for parallel GPU usage with such cards.
Thanks. I’ll see what can be done to optimise this.
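To act on the 6 GB-card advice above programmatically: recent PyTorch releases expose torch.cuda.mem_get_info, which reports free and total device memory, so you can confirm how much VRAM other applications are holding before loading the model:

    import torch

    # Free/total device memory in bytes, as reported by the CUDA driver.
    free, total = torch.cuda.mem_get_info(0)
    print(f"free: {free / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB")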