
Cuda Out of Memory Error on Longer Files

See original GitHub issue

🐛 Bug

Hello,

I am trying to test out the torchfilters branch of this project. It works fine on shorter audio clips, but when the audio file is around 4 to 5 minutes in length, the program crashes with a CudaOutOfMemoryError.

To Reproduce

Steps to reproduce the behavior:

  1. Run test.py on a music file about 4 or 5 minutes in length.
Traceback (most recent call last):
  File "/home/user/unmix/test.py", line 74, in separate
    estimates, model_rate = separator(audio_torch, rate)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/unmix/unmix/filtering.py", line 833, in forward
    for sample in range(nb_samples)], dim=0)
  File "/home/user/unmix/filtering.py", line 833, in <listcomp>
    for sample in range(nb_samples)], dim=0)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torchaudio/functional.py", line 130, in istft
    onesided, signal_sizes=(n_fft,))  # size (channel, n_frames, n_fft)
RuntimeError: CUDA out of memory. Tried to allocate 454.00 MiB (GPU 0; 7.43 GiB total capacity; 6.02 GiB already allocated; 218.94 MiB free; 690.49 MiB cached)

Expected behavior

The program should finish execution on longer files as well. Is there a way to split the audio every minute or two, or to use an audio loader that avoids loading the entire song into CUDA memory at once, so the program doesn't crash?
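The splitting idea suggested above can be sketched roughly as follows. Note this is a hypothetical illustration, not the project's actual API: `separate_in_chunks` and the `separator` callable are made-up names, and a real separator may need overlap between chunks to avoid boundary artifacts.

```python
import torch


def separate_in_chunks(audio, separator, chunk_seconds=60, rate=44100):
    """Run `separator` over fixed-length time chunks so only one chunk
    occupies GPU memory at a time.

    audio:     tensor shaped (..., nb_timesteps)
    separator: any callable mapping a chunk to an output of the same
               time length (hypothetical stand-in for the model)
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    chunk_len = chunk_seconds * rate
    outputs = []
    for start in range(0, audio.shape[-1], chunk_len):
        chunk = audio[..., start:start + chunk_len].to(device)
        with torch.no_grad():
            # Move each result back to the CPU immediately so GPU
            # memory is freed before the next chunk is processed.
            outputs.append(separator(chunk).cpu())
    return torch.cat(outputs, dim=-1)
```

With an identity `separator`, a 3-minute stereo tensor round-trips unchanged; in practice one would pass the loaded model and pick `chunk_seconds` small enough to fit the GPU.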

Thank you!

Environment

Please add some information about your environment

  • PyTorch Version (e.g., 1.2): 1.2
  • OS (e.g., Linux): Linux
  • torchaudio loader (y/n): Y
  • Python version: 3.7
  • CUDA/cuDNN version: 10.0/7.6
  • Any other relevant information:

Additional context

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
aadibajpai commented, Sep 15, 2019

> Does a unidirectional model reduce performance?
>
> yes, for vocals it might be up to 0.5 dB SDR. For drums or bass it's not that important, though.

I see, I’m mainly working with vocals and the master branch works even w/o cuda so no problem so far.

0 reactions
faroit commented, Sep 15, 2019

> Does a unidirectional model reduce performance?

yes, for vocals it might be up to 0.5 dB SDR. For drums or bass it's not that important, though.
