change to work with less VRAM - "CUDA out of memory"
My graphics card is an NVIDIA GeForce GTX 1050 Ti with 4 GB of memory.
When I run the `imagine` command I receive this error:
```
File "/home/user1/.pyenv/versions/3.10.6/envs/imaginairy-3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 3.95 GiB total capacity; 2.82 GiB already allocated; 69.38 MiB free; 2.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I didn’t find minimum hardware requirements in the project’s README, so I am assuming a 4 GB GPU isn’t enough to run this application. However, I was able to run this stable diffusion project, so I am hoping some configuration could solve this for imaginAIry.
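The error message itself points at one thing worth trying before anything else: capping the allocator split size via `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation. A minimal sketch of that workaround is below; the `128` MiB value and the reduced output dimensions are assumptions to tune for a 4 GB card, not settings taken from the imaginAIry docs.

```shell
# Suggested by the OOM message: limit the size of allocator blocks PyTorch
# will split, which can reduce fragmentation on small-VRAM GPUs.
# (128 is a commonly tried starting value, not an official recommendation.)
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Re-run with smaller output dimensions, which lowers peak VRAM use.
# The --width/--height flags and 384x384 values are assumptions to adjust;
# the guard just skips the call if imaginairy is not installed here.
command -v imagine >/dev/null && imagine "a scenic landscape" --width 384 --height 384
```

If the reserved-vs-allocated gap in the error stays large after this, the remaining option is usually generating at smaller resolutions or running on CPU.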
- Created a year ago
- Comments: 5 (2 by maintainers)