
Wav2vec2 Pretraining issue

See original GitHub issue

System Info

transformers version: 4.11.3

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, …)
  • My own task or dataset (give details below)

Reproduction

https://colab.research.google.com/drive/1kepA7ryMG7YmNtSYjiJjBM984KRbpZuV#scrollTo=LdIxS2EEgMmz

Expected behavior

I tried to run the pre-training demo of wav2vec2 on LibriSpeech, but I run into one of two errors:
unrecognized arguments, or No module named 'transformers.modeling_outputs'
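The "no module found" error usually means the installed package predates a module the script imports. A minimal, self-contained way to check (a hypothetical helper, not part of the original script):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the dotted module path can be resolved in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. transformers itself) is missing.
        return False

# If this prints False, the installed Transformers does not ship the module
# the pretraining script needs, pointing at a version mismatch.
print(has_module("transformers.modeling_outputs"))
```

If the check fails, upgrading Transformers (or installing the version the example script targets) is the first thing to try.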

Issue Analytics

  • State: closed
  • Created 10 months ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

2 reactions
sgugger commented, Nov 14, 2022

The script you are using probably requires a more recent version of Transformers. cc @sanchit-gandhi
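The mismatch can be caught up front by comparing the installed version against the one the script was written for. A sketch of such a guard (the minimum version `"4.24.0"` is an assumption for illustration; the issue itself reports 4.11.3 installed):

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '4.11.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def script_compatible(installed: str, minimum: str) -> bool:
    """True if the installed version meets the script's minimum requirement."""
    return version_tuple(installed) >= version_tuple(minimum)

# The 4.11.3 reported in this issue fails against a hypothetical 4.24.0 minimum:
print(script_compatible("4.11.3", "4.24.0"))  # → False
```

In practice, `pip install --upgrade transformers` (or installing the exact revision the example script was taken from) resolves this class of error.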

1 reaction
Kshitizkhandel commented, Nov 17, 2022

> Pre-training is certainly possible, you just need a lot of disk space and compute time for it to be worthwhile!
>
> If you want to try something new, you can check out the Whisper model from OpenAI 😉 https://huggingface.co/blog/fine-tune-whisper This gets very good results with very little fine-tuning!

Yeah, I read your lucid and well-articulated blog on it. Great work!


Top Results From Across the Web

Enable Wav2Vec2 Pretraining · Issue #11246 - GitHub
The popular Wav2Vec2 model cannot be pretrained using the Hugging Face library yet. During the fine-tuning week, multiple people have reported ...

Pre-training for Wav2Vec2-XLSR via Huggingface - Models
Hi guys! I note that the most topics are related to fine-tuning a pre-trained model. But if I have got some new unlabeled...

Pretraining Wav2Vec2 on Cloud TPU with PyTorch
Pretraining Wav2Vec2 on Cloud TPU with PyTorch · Objectives · Costs · Before you begin · Set up a Compute Engine instance ·...

Applying Wav2vec2.0 to Speech Recognition in Various Low ...
Paper tables with annotated results for Applying Wav2vec2.0 to Speech Recognition in ... Table 4: Supervised pre-training VS self-supervised pre-training.

A Noise-Robust Self-supervised Pre-training Model Based ...
To avoid this issue, in this work we propose an enhanced wav2vec2.0 model. Specifically, the noisy speech and the corresponding clean ...
