[Discussion] Proper instructions for spleeter-gpu
For the past two days I've been making many attempts to properly use Spleeter, both for inference and for training.
Both failing to some degree.
When I do inference, I have to install using the old 202003 label in my Anaconda environment. That works fine for inference, but during training it crashes with
TypeError: x and y must have the same dtype, got tf.string != tf.int32
That’s on a Win10 machine with CUDA 10.1. The cuDNN version is unknown to me, but I’d assume that spleeter separate ...
would’ve complained if the correct cuDNN version hadn’t been installed.
So I’m just going to assume that the right version is installed.
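In case it helps anyone narrow the TypeError down: one guess is that a field in the training index CSV is being read as a string where a number is expected. Below is a quick sanity check along those lines; the file path is just an example and the idea that duration-like columns must be numeric is my assumption, not something I've confirmed in the Spleeter code.

```python
import pandas as pd

# Hypothetical sanity check on a Spleeter training index CSV.
# Point the path at whatever train_csv your config actually references.
df = pd.read_csv("config/musdb_train.csv")

# Columns that should be numeric (e.g. durations/offsets) showing up as
# "object" here would be consistent with a tf.string vs tf.int32 mismatch.
print(df.dtypes)
print(df.head())
```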
I then tried to install Spleeter from the GitHub repo on Ubuntu 18.04 and 20.04, and eventually, after many, MANY trials and errors, found out that I needed to manually install CUDA 10.0. But why? I thought CUDA 10.1 was supposed to be installed?!
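For what it's worth, my understanding is that the pre-built tensorflow-gpu 1.15 / 2.0 wheels were compiled against CUDA 10.0, and only 2.1+ moved to CUDA 10.1, which would explain why 10.0 was needed. A quick way to check whether a given install is even GPU-capable (plain TensorFlow API calls, nothing Spleeter-specific):

```python
import tensorflow as tf

# Prints the installed TF version, whether the wheel was built with CUDA,
# and whether a GPU can actually be used for a small test op.
print(tf.__version__)
print(tf.test.is_built_with_cuda())   # False means a CPU-only wheel got installed
print(tf.test.is_gpu_available())     # False usually points at a CUDA/cuDNN mismatch
```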
Anyways.
When it comes to cuDNN, I’ve tried the following versions:
7.6.5
7.6.4
7.6.3
7.6.2
7.6.1
7.6.0
7.5.1
7.5.0
7.4.2
7.4.1
only to either have the CPU used instead, or to have it all fail with the error
Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
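One workaround that keeps coming up for that particular error, and that is worth ruling out before blaming the cuDNN version, is letting TensorFlow allocate GPU memory on demand, e.g.:

```python
import os

# "Failed to get convolution algorithm" is often a cuDNN initialization /
# GPU memory problem rather than a wrong cuDNN version. Asking TF to grow
# GPU memory on demand frequently clears it. The variable must be set
# before TensorFlow initializes the GPU.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

import tensorflow as tf  # imported only after the env var is in place
```

The same variable can also simply be exported in the shell before running spleeter separate or spleeter train, since Spleeter manages its own TensorFlow sessions.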
I have tried both the Anaconda env and pip. Docker failed completely.
I have also tried tensorflow-gpu versions 2.2, 1.15.3, 1.15.2, 1.15.0, and 1.14.
What is going on here?
Actually let me ask the following:
What is YOUR setup and configuration to reliably run Spleeter both for inference and training?
I’m eager to get this to run reliably, as I have a multi-GPU ML rig and I really look forward to starting to train models, since I have assembled a huge library of audio files and I have been successful
Which Debian version?
Debian 9.9