
CUDA out of memory with super-resolution plugin

See original GitHub issue

System: Kubuntu Linux - GTX 960 4 GB - GIMP 2.10.18

If I apply the Super-resolution plugin to any image larger than about 600x600 px, it quickly fills up the 4 GB of VRAM and I get the following error:

An error occurred running python-fu-super-resolution
RuntimeError: CUDA out of memory. Tried to allocate 66.00 MiB (GPU 0; 3.95 GiB total capacity; 3.10 GiB already allocated; 53.38 MiB free; 3.12 GiB reserved in total by PyTorch)
Traceback (most recent call last):
  File "/usr/lib/gimp/2.0/python/gimpfu.py", line 740, in response
    dialog.res = run_script(params)
  File "/usr/lib/gimp/2.0/python/gimpfu.py", line 361, in run_script
    return apply(function, params)
  File "/home/yafu/GIMP-ML/gimp-plugins/super_resolution.py", line 100, in super_resolution
    cpy = getnewimg(imgmat,scale)
  File "/home/yafu/GIMP-ML/gimp-plugins/super_resolution.py", line 56, in getnewimg
    HR_4x = model(im_input)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yafu/GIMP-ML/gimp-plugins/pytorch-SRResNet/srresnet.py", line 61, in forward
    out = self.residual(out)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yafu/GIMP-ML/gimp-plugins/pytorch-SRResNet/srresnet.py", line 18, in forward
    output = self.in2(self.conv2(output))
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/modules/instancenorm.py", line 49, in forward
    self.training or not self.track_running_stats, self.momentum, self.eps)
  File "/home/yafu/GIMP-ML/gimp-plugins/gimpenv/lib/python2.7/site-packages/torch/nn/functional.py", line 1685, in instance_norm
    use_input_stats, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 66.00 MiB (GPU 0; 3.95 GiB total capacity; 3.10 GiB already allocated; 33.44 MiB free; 3.12 GiB reserved in total by PyTorch)
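
The failure surfaces as an ordinary RuntimeError, so one possible mitigation (a sketch only, not the plugin's actual code) would be to catch it and retry the forward pass on the CPU instead of aborting; the model and input below are stand-ins for what getnewimg() builds in super_resolution.py:

    # Hedged sketch: fall back to the CPU when the GPU runs out of memory.
    import torch
    import torch.nn as nn

    def forward_with_cpu_fallback(model, im_input):
        try:
            with torch.no_grad():              # inference only, no autograd buffers
                return model(im_input)
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise                          # unrelated error: propagate it
            torch.cuda.empty_cache()           # release PyTorch's cached GPU blocks
            with torch.no_grad():
                return model.cpu()(im_input.cpu())   # retry the whole pass on the CPU

    # Stand-in model and tensor so the sketch runs on its own; the plugin uses SRResNet.
    model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    result = forward_with_cpu_fallback(model, torch.rand(1, 3, 600, 600))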

As a related request: since VRAM capacity varies widely from user to user, it would be great if the user could choose between CPU and GPU in the plugins that currently auto-detect CUDA. Thanks!
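
In a PyTorch-based plugin, such a choice (along the lines of the "Force CPU" option mentioned in the comments below) could boil down to picking the device explicitly instead of using CUDA whenever it is available. A rough sketch; force_cpu and pick_device are illustrative names, not part of the actual GIMP-ML API:

    # Illustrative sketch only: force_cpu is a hypothetical option, not an existing GIMP-ML flag.
    import torch
    import torch.nn as nn

    def pick_device(force_cpu=False):
        """Use the GPU only when it is available and the user has not opted out."""
        if force_cpu or not torch.cuda.is_available():
            return torch.device("cpu")
        return torch.device("cuda")

    # Stand-in model and image so the sketch runs on its own; the plugin loads SRResNet.
    model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    im_input = torch.rand(1, 3, 64, 64)

    device = pick_device(force_cpu=True)      # value would come from the plugin dialog
    model = model.to(device)
    im_input = im_input.to(device)
    with torch.no_grad():
        output = model(im_input)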

Issue Analytics

  • State: open
  • Created 3 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
kritiksoman commented, Nov 1, 2020

I tried a 1166x672 image on an i5 processor with 4 GB RAM on macOS. It took 5 minutes to produce a 4664x2688 image. Will try to optimise it further.
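
One common way to keep peak memory bounded on large inputs, independent of whatever optimisation is planned here, is to run the network on tiles of the image and stitch the outputs back together. A rough sketch assuming a 4x scale factor like SRResNet; the tile size is arbitrary, and a real implementation would overlap tiles to hide seam artefacts:

    # Rough sketch of tiled 4x super-resolution: process fixed-size tiles so peak
    # memory stays bounded, keeping only the stitched result in system RAM.
    import torch
    import torch.nn as nn

    SCALE = 4     # SRResNet upscales by 4x
    TILE = 256    # side length of each input tile

    def upscale_tiled(model, img, device=torch.device("cpu")):
        """img: (1, C, H, W) CPU tensor; returns (1, C, H*SCALE, W*SCALE) on the CPU."""
        _, c, h, w = img.shape
        out = torch.zeros(1, c, h * SCALE, w * SCALE)      # stitched result stays in system RAM
        with torch.no_grad():
            for y in range(0, h, TILE):
                for x in range(0, w, TILE):
                    tile = img[:, :, y:y + TILE, x:x + TILE].to(device)
                    sr = model(tile)                        # only one tile resident on the device
                    out[:, :, y * SCALE:y * SCALE + sr.shape[2],
                            x * SCALE:x * SCALE + sr.shape[3]] = sr.cpu()
        return out

    # Stand-in model with the right 4x output shape; the plugin would use SRResNet.
    model = nn.Sequential(nn.Conv2d(3, 3 * SCALE * SCALE, 3, padding=1),
                          nn.PixelShuffle(SCALE))
    result = upscale_tiled(model, torch.rand(1, 3, 600, 600))   # 600x600 -> 2400x2400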

0 reactions
datalot-369 commented, Oct 31, 2020
  • Wow, very tricky.
  • Can’t run Use as Filter without Force CPU.
  • With a 364x264 image and the x4 parameter, it uses up my 4 GB of RAM; I suspect it has not touched my swap, since the program stayed responsive, so maybe background programs accounted for some of that.
  • After a few seconds I noticed the progress bar advancing, and finally a new document with the upscaled image appeared.
  • I’m not sure I could experiment with larger images on this setup (See… 720p), but at the very least it works and is reproducible.

Not the best I have seen, for sure:

[Screenshots: original vs. upscaled image]


Congratulations to the GIMP ML team!

Read more comments on GitHub >

Top Results From Across the Web

  • stabilityai/stable-diffusion-x4-upscaler · CUDA out of memory
    The example provided throws a 'CUDA out of memory' error if the image to upscale is larger than 128x128 (256x256, for example).
  • Maximizing Unified Memory Performance in CUDA
    In this post I’ll break it down step by step and show you what you can do to optimize your code to get...
  • AMD releases FidelityFX Super Resolution plugin for Unreal Engine 4
    AMD releases FidelityFX Super Resolution plugin for Unreal Engine 4 ... RTX A4500 workstation GPU with 7168 CUDA cores and 20GB GDDR6 memory...
  • Vulkan® Memory Allocator - GPUOpen
    VMA is our single-header, MIT-licensed, C++ library for easily and efficiently managing memory allocation for your Vulkan® games and applications.
  • CUDA out of memory while using PyTorch - Stack Overflow
    Trying to reproduce the Super Resolution GAN from this repository — Super Resolution — using Google Colab, but each time when I execute...
