
CUDA out of memory while processing long tracks

See original GitHub issue

Hello,

First of all – thank you for this great product. It works flawlessly using CPU.

I’m trying to process material faster by using a GPU on an AWS EC2 instance. Unfortunately, it terminates with the following error:

$ demucs audio.mp3 -d cuda

Selected model is a bag of 4 models. You will see that many progress bars per track.
Separated tracks will be stored in /home/ec2-user/separated/mdx_extra_q
Separating track audio.mp3
100%|██████████████████████████████████████████████████████████████████████| 4356.0/4356.0 [01:35<00:00, 45.42seconds/s]

Traceback (most recent call last):
  File "/home/ec2-user/.local/bin/demucs", line 8, in <module>
    sys.exit(main())
  File "/home/ec2-user/.local/lib/python3.7/site-packages/demucs/separate.py", line 120, in main
    overlap=args.overlap, progress=True)[0]
  File "/home/ec2-user/.local/lib/python3.7/site-packages/demucs/apply.py", line 147, in apply_model
    estimates += out
RuntimeError: CUDA out of memory. Tried to allocate 5.71 GiB (GPU 0; 14.76 GiB total capacity; 9.12 GiB already allocated; 4.33 GiB free; 9.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
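The error message's own hint, capping the allocator's split block size via `PYTORCH_CUDA_ALLOC_CONF`, is one low-effort thing to try before changing hardware. A minimal sketch (the 128 MiB value is an arbitrary starting point to tune, not a demucs recommendation):

```shell
# Cap the PyTorch caching allocator's split block size, which can
# reduce fragmentation-related OOMs (value in MiB is a guess to tune).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then re-run the same separation command, e.g.:
#   demucs audio.mp3 -d cuda
echo "$PYTORCH_CUDA_ALLOC_CONF"
```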

Info about the running environment:

  • Python 3.7.10 and PyTorch 1.10.0
  • AWS EC2 instance type: g4dn.xlarge
  • Operating system and version: Deep Learning AMI GPU PyTorch 1.10.0 (Amazon Linux 2) 20211115
  • GPU hardware: NVIDIA Corporation TU104GL [Tesla T4] (rev a1), Driver Version 470.57.02, CUDA Version 11.4, 15109 MiB memory

Do the models require a card with more than 16 GB of memory? If that's not your experience, could you share your hardware/software environment so that I can retry? Thank you.
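The figures in the error message already answer part of this question: the 5.71 GiB allocation fails because it exceeds the 4.33 GiB still free at that moment, not because it exceeds the card's 14.76 GiB total, so the binding limit is peak usage and fragmentation rather than raw capacity. A quick arithmetic check using the numbers quoted verbatim from the RuntimeError:

```python
# Figures taken verbatim from the RuntimeError above (all in GiB).
total_capacity = 14.76
already_allocated = 9.12
free = 4.33
requested = 5.71

# The request exceeds the memory currently free...
print(requested > free)            # → True
# ...but not the card's total capacity, so a smaller peak working set
# (or less fragmentation) would let the same card succeed.
print(requested < total_capacity)  # → True
```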

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 30 (18 by maintainers)

Top GitHub Comments

1 reaction
mepc36 commented, Dec 6, 2021

> @adefossez, I’ve submitted PR #244 for the code which uses much less GPU VRAM at the expense of more regular RAM. Please review and if there’s anything to discuss, let’s comment in the PR.

This is AWESOME help @famzah, thanks for writing that PR! I would’ve helped, but I don’t know Python super well, haha. I was reading your benchmarks, and those tradeoffs between memory and processing time are definitely ones I’d sign up for.

1 reaction
famzah commented, Dec 6, 2021

@adefossez, I’ve submitted PR #244 for the code which uses much less GPU VRAM at the expense of more regular RAM. Please review and if there’s anything to discuss, let’s comment in the PR.
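The trade famzah describes (less GPU VRAM for more regular RAM) amounts to running the model over bounded chunks of the track and accumulating per-chunk results host-side, so peak device memory scales with the chunk length rather than the whole track. A framework-free toy sketch of that idea; `apply_chunked` and the toy `model` are hypothetical stand-ins, not demucs' actual API or the PR's code:

```python
def apply_chunked(model, signal, chunk=44100):
    """Apply `model` to `signal` one fixed-size chunk at a time,
    accumulating the outputs in a host-side list."""
    out = []
    for start in range(0, len(signal), chunk):
        # Only one chunk is in flight at a time, so peak memory is
        # bounded by the chunk size, not the track length.
        out.extend(model(signal[start:start + chunk]))
    return out

if __name__ == "__main__":
    # A toy "model" that just doubles each sample.
    model = lambda xs: [x * 2 for x in xs]
    print(apply_chunked(model, list(range(8)), chunk=3))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```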


