
wav2vec pre-trained models require inconsistent versions of fairseq

See original GitHub issue

The pre-trained models available for the wav2vec flavors use outdated, inconsistent, and undocumented versions of fairseq.

First of all, the README does not document which version of fairseq each pre-trained model requires. Users might reasonably assume it is (a) the current master branch on GitHub, or (b) whatever version of fairseq their Python package manager selects automatically (e.g. pip install fairseq, which in my case gave 0.10.2). Unfortunately, neither assumption is correct for all of the pre-trained models.

The models listed under wav2vec 2.0 Base and Large are compatible with 0.10.2. (I have not tried all of them.) They fail to initialize with a GitHub-based fairseq installation.

The XLSR-53 model, which is the one I actually wanted to use, is compatible with neither 0.10.2 nor top-of-tree at the time of writing.

I have not tried wav2vec 1.0 or vq-wav2vec, but the required versions for those are also not documented.

The easiest way to solve this would be to document the required fairseq version for each of these pre-trained downloads, preferably with a link to the GitHub tag for the needed repo state. Better yet would be to update the pre-trained models whenever the repo changes in a way that breaks them, and to link all historical versions of the models, so that users pinned to a particular fairseq version can download a compatible model.
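As a sketch of what such documentation could look like in machine-readable form, here is a hypothetical compatibility table. The entries only encode what this report observed (the wav2vec 2.0 Base/Large checkpoints load under fairseq 0.10.2; XLSR-53's requirement is unknown); the model names and the table itself are illustrative, not part of fairseq.

```python
# Hypothetical compatibility table, as the issue requests. Entries reflect
# only what the report states: Base/Large work with fairseq 0.10.2, while
# XLSR-53's required version is unknown (None).
REQUIRED_FAIRSEQ = {
    "wav2vec 2.0 Base": "0.10.2",
    "wav2vec 2.0 Large": "0.10.2",
    "XLSR-53": None,  # neither 0.10.2 nor current master works, per the report
}


def required_version(model: str) -> str:
    """Return the documented fairseq version for a model.

    Raises ValueError when no version is documented, which is exactly the
    situation this issue complains about.
    """
    ver = REQUIRED_FAIRSEQ.get(model)
    if ver is None:
        raise ValueError(f"No documented fairseq version for {model!r}")
    return ver
```

A table like this, published alongside the checkpoints, would let users fail fast with a clear message instead of hitting opaque initialization errors.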

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (2 by maintainers)

Top GitHub Comments

5 reactions
GreatDarrenSun commented, May 25, 2021

I think I have found the answer: I downloaded commit 7061a0f of master, reinstalled it, and the error is gone.
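For readers hitting the same error, the fix above amounts to pinning fairseq to a specific commit. A minimal sketch, assuming the checkpoint was built against the pytorch/fairseq repository; the commit hash is the one reported in this comment, and you may need a different one for your checkpoint:

```shell
# Remove any existing fairseq so the pinned commit is the only copy installed
pip uninstall -y fairseq

# Install fairseq at the specific commit reported to work with the checkpoint
pip install "git+https://github.com/pytorch/fairseq.git@7061a0f"
```

Pinning to a commit hash rather than a release keeps the environment reproducible until the required versions are documented upstream.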

0 reactions
stale[bot] commented, Apr 28, 2022

Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you!


