
DeepSpeech.PyTorch stops working after installing Torch to also use DeepSpeech.Torch

See original GitHub issue

Dear friends,

My DeepSpeech.PyTorch setup stopped working after I installed Torch so that I could also use DeepSpeech.Torch. See the log below. It is very similar to another issue in this repo, where the advice was to use a different gcc, but I am not sure what the REAL problem is.

If I move the torch installation directory away, DeepSpeech.PyTorch works again! If I move it back, it fails again!

> dlm@vm001nc6:~/code/deepspeech.pytorch$
> dlm@vm001nc6:~/code/deepspeech.pytorch$
> dlm@vm001nc6:~/code/deepspeech.pytorch$ python train.py --train_manifest data/train_manifest.csv --val_manifest data/val_manifest.csv
> Traceback (most recent call last):
>   File "train.py", line 9, in <module>
>     from warpctc_pytorch import CTCLoss
>   File "/home/dlm/anaconda3/lib/python3.6/site-packages/warpctc_pytorch/__init__.py", line 7, in <module>
>     from ._warp_ctc import lib as _lib, ffi as _ffi
> ImportError: /home/dlm/anaconda3/lib/python3.6/site-packages/torch/lib/../../../../libgomp.so.1: version `GOMP_4.0' not found (required by /home/dlm/torch/install/lib/libwarpctc.so)
> dlm@vm001nc6:~/code/deepspeech.pytorch$
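
The traceback points to a runtime library mismatch: libwarpctc.so was built against the system toolchain and needs the GOMP_4.0 symbol version, but the dynamic loader resolves libgomp.so.1 to the copy bundled with Anaconda, which is older and does not export it. A quick way to confirm this (a sketch using the paths from the traceback above; the system library path is the usual Ubuntu location and may differ on your machine):

# Does Anaconda's libgomp provide the GOMP_4.0 version the error asks for?
strings /home/dlm/anaconda3/lib/libgomp.so.1 | grep GOMP_4.0
# The system copy shipped with the distro's GCC should list it.
strings /usr/lib/x86_64-linux-gnu/libgomp.so.1 | grep GOMP_4.0
# Which libgomp libwarpctc.so actually resolves at load time.
ldd /home/dlm/torch/install/lib/libwarpctc.so | grep libgomp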

Issue Analytics

  • State: closed
  • Created 7 years ago
  • Comments: 19 (4 by maintainers)

Top GitHub Comments

1 reaction
mit456 commented, Jun 6, 2017

I will try removing everything and installing from scratch; if that helps, I will let you know.

0 reactions
xmfbit commented, Oct 13, 2017

Sharing my solution here, though I don't know exactly why it works. The environment is Ubuntu 16.04 with CUDA 8.0; the system GCC version is 5.4.0 and Anaconda's GCC version is 4.8.5. I installed PyTorch from source, following the instructions on its GitHub repo page.
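
A quick side check (not part of the original comment) that makes this compiler split visible is to list every gcc the shell can see and which conda packages provide a GCC runtime; with Anaconda active, a conda-provided gcc or libgcc can shadow or lag behind the system toolchain:

# All gcc binaries on PATH, in resolution order, and the one that wins.
which -a gcc
gcc --version
# The system compiler, for comparison (5.4.0 on Ubuntu 16.04 here).
/usr/bin/gcc --version
# Any conda-provided gcc/libgcc packages (around 4.8.5 in this setup).
conda list | grep -i gcc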

  1. I added these two lines to my ~/.zshrc file (if you are using bash, modify ~/.bashrc instead):
export CC="/usr/bin/gcc"
export CXX="/usr/bin/g++"

Make sure that after Anaconda is added to your PATH, the value of the environment variable CC is still /usr/bin/gcc:

# print the value of CC
echo $CC

If it is not, building the warp-ctc C++ code will fail.

  2. Inspired by @skaae, but I didn't install GCC 4.8 system-wide (the server is owned by the lab and I can't modify it at will). I just ran the following command (see the sanity-check sketch at the end of this comment):
conda install libgcc
  3. After building the warp-ctc code, export CUDA_HOME if you want to enable GPU support, before running the Python installation script:
export CUDA_HOME=/path/to/cuda
cd pytorch_binding
python setup.py install
  4. Test! After all of this, you can run the test suite:
cd pytorch_binding/tests
python test.py

If it works well, it will print the gradient information like this:

➜  tests git:(pytorch_bindings) ✗ python test.py 
Variable containing:
 0.1770 -0.7081  0.1770  0.1770  0.1770
-0.8230  0.1770  0.2919  0.1770  0.1770
 0.2919  0.1770 -0.8230  0.1770  0.1770
-0.8213  0.1787  0.2665  0.1975  0.1787
[torch.cuda.FloatTensor of size 4x5 (GPU 0)]

.Variable containing:
 0.1770 -0.7081  0.1770  0.1770  0.1770
 0.1770 -0.8230  0.2919  0.1770  0.1770
 0.2919  0.1770 -0.8230  0.1770  0.1770
 0.1787  0.1787 -0.7335  0.1975  0.1787
[torch.cuda.FloatTensor of size 4x5 (GPU 0)]

.Variable containing:
 0.0000  0.0000  0.0000  0.0000  0.0000
 0.0000 -1.0000  1.0000  0.0000  0.0000
 1.0000  0.0000 -1.0000  0.0000  0.0000
 0.0000  0.0000  0.0000  0.0000  0.0000
[torch.cuda.FloatTensor of size 4x5 (GPU 0)]

.Variable containing:
 0.1770 -0.7081  0.1770  0.1770  0.1770
 0.1770  0.1770 -0.7081  0.1770  0.1770
[torch.cuda.FloatTensor of size 2x5 (GPU 0)]

.
----------------------------------------------------------------------
Ran 4 tests in 2.832s

OK
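
As a closing sanity check for steps 1 and 2 above (a sketch, assuming Anaconda lives in ~/anaconda3; adjust the path to your install):

# Step 1: with Anaconda on PATH, CC/CXX should still point at the system toolchain,
# otherwise the warp-ctc C++ build picks up the wrong compiler.
echo $CC $CXX            # expect: /usr/bin/gcc /usr/bin/g++
# Step 2: after `conda install libgcc`, Anaconda's bundled libgomp should now
# export the GOMP_4.0 symbol version that the original ImportError complained about.
strings ~/anaconda3/lib/libgomp.so.1 | grep GOMP_4.0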

