
[Bug] Spleeter error running in Python (Anaconda/Windows/CUDA/Visual/Tensorflow/GPU accelerated)

See original GitHub issue

Description

I tried to run Spleeter from Python using the command below. When running spleeter separate -i D:\SAE\Producties\PsyTrain\Kadoc.flac -o D:\SAE\ProductiesPsyTrain\5stems\ -p spleeter:5stems I get error reports and no stems are separated. Note: Spleeter used to work just fine on this install.

Steps to reproduce

  1. Installed using pip install spleeter; also installed Visual Studio, CUDA, and TensorFlow with GPU support

  2. Ran spleeter separate -i D:\SAE\Producties\PsyTrain\Kadoc.flac -o D:\SAE\ProductiesPsyTrain\5stems\ -p spleeter:5stems

  3. Got a tensorflow.python.framework.errors_impl.ResourceExhaustedError (full traceback in the Output section below)

Output

Traceback (most recent call last):
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
    return fn(*args)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[32,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node conv2d_transpose_28/conv2d_transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[strided_slice_48/_757]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[32,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node conv2d_transpose_28/conv2d_transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\sande\Anaconda3\Scripts\spleeter-script.py", line 9, in <module>
    sys.exit(entrypoint())
  File "C:\Users\sande\Anaconda3\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
    main(sys.argv)
  File "C:\Users\sande\Anaconda3\lib\site-packages\spleeter\__main__.py", line 46, in main
    entrypoint(arguments, params)
  File "C:\Users\sande\Anaconda3\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
    synchronous=False
  File "C:\Users\sande\Anaconda3\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
    sources = self.separate(waveform)
  File "C:\Users\sande\Anaconda3\lib\site-packages\spleeter\separator.py", line 89, in separate
    'audio_id': ''})
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
    return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
    run_metadata_ptr)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
    run_metadata)
  File "C:\Users\sande\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[32,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node conv2d_transpose_28/conv2d_transpose (defined at \lib\site-packages\spleeter\utils\estimator.py:71) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[strided_slice_48/_757]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[32,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node conv2d_transpose_28/conv2d_transpose (defined at \lib\site-packages\spleeter\utils\estimator.py:71) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

Original stack trace for 'conv2d_transpose_28/conv2d_transpose':
  File "\Scripts\spleeter-script.py", line 9, in <module>
    sys.exit(entrypoint())
  File "\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
    main(sys.argv)
  File "\lib\site-packages\spleeter\__main__.py", line 46, in main
    entrypoint(arguments, params)
  File "\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
    synchronous=False
  File "\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
    sources = self.separate(waveform)
  File "\lib\site-packages\spleeter\separator.py", line 86, in separate
    predictor = self._get_predictor()
  File "\lib\site-packages\spleeter\separator.py", line 58, in _get_predictor
    self._predictor = to_predictor(estimator)
  File "\lib\site-packages\spleeter\utils\estimator.py", line 71, in to_predictor
    return predictor.from_saved_model(latest)
  File "\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
    config=config)
  File "\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
    loader.load(self._session, tags.split(','), export_dir)
  File "\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 269, in load
    return loader.load(sess, tags, import_scope, **saver_kwargs)
  File "\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 422, in load
    **saver_kwargs)
  File "\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 352, in load_graph
    meta_graph_def, import_scope=import_scope, **saver_kwargs)
  File "\lib\site-packages\tensorflow\python\training\saver.py", line 1473, in _import_meta_graph_with_return_elements
    **kwargs))
  File "\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 857, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "\lib\site-packages\tensorflow\python\framework\importer.py", line 443, in import_graph_def
    _ProcessNewOps(graph)
  File "\lib\site-packages\tensorflow\python\framework\importer.py", line 236, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "\lib\site-packages\tensorflow\python\framework\ops.py", line 3641, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()
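As a sanity check, the size of the failing allocation follows directly from the shape in the OOM message. A quick stdlib sketch (nothing Spleeter-specific is assumed):

```python
# Size of the tensor TensorFlow failed to allocate, taken from the
# OOM message: shape [32, 16, 256, 512], dtype float (4 bytes).
from functools import reduce
from operator import mul

def tensor_bytes(shape, dtype_bytes=4):
    """Bytes needed for a dense tensor of the given shape."""
    return reduce(mul, shape) * dtype_bytes

size = tensor_bytes([32, 16, 256, 512])
print(f"{size} bytes = {size / 2**20:.0f} MiB")  # 268435456 bytes = 256 MiB
```

One such buffer is only a fraction of a typical GTX 1060's memory, but the 5-stem model builds many intermediate tensors of this order at once (one U-Net per stem), which is likely how longer inputs exhaust the card.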

Environment

OS: Windows 10
Installation type: Conda / pip
RAM available: 16 GB
Hardware spec: NVIDIA GeForce GTX 1060 / i7-8750H

Additional context

I performed a clean install from GitHub before, and also installed GPU hardware-acceleration support for Spleeter, which had also worked fine before today.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 9

Top GitHub Comments

1 reaction
Epemaster commented on Jan 18, 2020

@mmoussallam I’ve had so many errors that I just brute-force my way through the different fixes:

  1. Check the CUDA version
  2. Check the FFmpeg installation
  3. Run via python -m spleeter
  4. Split the audio into pieces
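The first two checks above can be scripted. This is a hedged stdlib sketch; the binary names (nvcc, ffmpeg) are assumptions about what a working install exposes on PATH:

```python
# Quick sanity check: are the CUDA compiler and FFmpeg visible on PATH?
# Missing binaries are a common cause of Spleeter setup failures.
import shutil

def tool_available(name):
    """Return True if `name` resolves to an executable on PATH."""
    return shutil.which(name) is not None

for tool in ("nvcc", "ffmpeg"):
    status = "found" if tool_available(tool) else "missing"
    print(f"{tool}: {status}")
```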


1 reaction
mmoussallam commented on Jan 17, 2020

Hi @Epemaster

An OOM (out-of-memory) error like yours indicates that the file you’re processing is too large to fit into your GPU memory. You may want to split the file into smaller pieces of, say, one minute each, and process them separately.
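The splitting suggestion above can be sketched with only the stdlib wave module. This assumes the input was first converted to WAV (e.g. with ffmpeg), since wave cannot read FLAC:

```python
# Hedged sketch: split an audio file into fixed-length pieces so each
# fits in GPU memory, then run Spleeter on the pieces one at a time.
import wave

def split_wav(path, chunk_seconds=60, prefix="chunk"):
    """Write consecutive chunk_seconds-long pieces of `path` to
    prefix000.wav, prefix001.wav, ... and return the filenames."""
    out_files = []
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * chunk_seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            name = f"{prefix}{index:03d}.wav"
            with wave.open(name, "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            out_files.append(name)
            index += 1
    return out_files
```

Alternatively, ffmpeg's segment muxer can split the FLAC directly without re-encoding: ffmpeg -i Kadoc.flac -f segment -segment_time 60 -c copy out%03d.flac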

@aidv the python -m spleeter trick only solves the unknown spleeter command issue, as explained in the FAQ. It won’t fix other kinds of problems.
