how to pretrain wav2vec2 on my own audio using an existing model as a base?
I'm calling pretraining using
fairseq-hydra-train --run --config-name myconfig --config-dir examples/wav2vec/config/pretraining
which ends up getting stuck at a loss of 6.658. It never improves on that, no matter what I've tried.
Someone in another issue suggested using one of the base models for pretraining. I haven't found where the docs explain how to do that. Any help appreciated.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Try @mailong25's repo for finetuning. Pretty good instructions to get you going. As for the parameters, if you don't share information about them or what you are doing, it is hard to give any advice.
Hi @tensorfoo! You can change it in your config.yaml by setting restore_file in the checkpoint attribute. If the checkpoint is stored outside of ./checkpoints (which is the default dir for checkpoints), you should also set save_dir to the proper dir.
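Putting that advice together, a minimal sketch of the checkpoint section of the pretraining config might look like the snippet below. The paths are placeholders, and the reset_* lines are an assumption about warm-starting from a released checkpoint rather than something stated above; the rest of the config stays as in the stock examples/wav2vec/config/pretraining YAML.

    checkpoint:
      # existing wav2vec 2.0 checkpoint to initialize pretraining from (placeholder path)
      restore_file: /path/to/wav2vec_small.pt
      # where new checkpoints are written; ./checkpoints is the default
      save_dir: /path/to/my_checkpoints
      # assumption: when continuing from a released checkpoint you may also want
      # to start the optimizer, LR scheduler and dataloader state fresh
      reset_optimizer: true
      reset_lr_scheduler: true
      reset_dataloader: true

The same settings can also be passed as Hydra overrides on the fairseq-hydra-train command line (e.g. checkpoint.restore_file=/path/to/wav2vec_small.pt), which avoids editing the stock config file.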