AttributeError: 'Wav2VecCtc' object has no attribute 'remove_pretraining_modules'
Dear authors of wav2vec2,
Thank you for the great work and for open-sourcing the code and models.
I have a question about fine-tuning the wav2vec 2.0 model on my own dataset. I followed the documented command exactly:
$ fairseq-hydra-train \
    distributed_training.distributed_port=$PORT \
    task.data=/path/to/data \
    model.w2v_path=/path/to/model.pt \
    --config-path /path/to/fairseq-py/examples/wav2vec/config/finetuning \
    --config-name base_100h
I have successfully run the code with /path/to/model.pt set to Wav2Vec 2.0 Base | No finetuning (the first model in the table) and Wav2Vec 2.0 Large (LV-60) | No finetuning (the model in the 9th row of the table).
However, I could not run it with any of the other models; it returns the error below. It looks like the already fine-tuned models have no "remove_pretraining_modules" method. I am not sure how to fix this, so any hints would be greatly appreciated.
Thank you very much for your help. Best, Shirley
[2020-11-20 18:18:32,657][fairseq.data.audio.raw_audio_dataset][INFO] - loaded 2748, skipped 0 samples
Traceback (most recent call last):
File "/home/fairseq/fairseq_cli/hydra_train.py", line 35, in hydra_main
distributed_utils.call_main(cfg, pre_main)
File "/home/fairseq/fairseq/distributed_utils.py", line 334, in call_main
main(cfg, **kwargs)
File "/home/fairseq/fairseq_cli/train.py", line 74, in main
model = task.build_model(cfg.model)
File "/home/fairseq/fairseq/tasks/audio_pretraining.py", line 200, in build_model
model = super().build_model(model_cfg)
File "/home/fairseq/fairseq/tasks/fairseq_task.py", line 282, in build_model
model = models.build_model(cfg, self)
File "/home/fairseq/fairseq/models/__init__.py", line 86, in build_model
return model.build_model(cfg, task)
File "/home/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 147, in build_model
w2v_encoder = Wav2VecEncoder(cfg, task.target_dictionary)
File "/home/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 304, in __init__
model.remove_pretraining_modules()
model.remove_pretraining_modules()
File "/root/miniconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'Wav2VecCtc' object has no attribute 'remove_pretraining_modules'
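To see why the traceback ends this way: `Wav2VecEncoder` loads the checkpoint named by `w2v_path`, builds the model it contains, and calls `remove_pretraining_modules()` on it. That method only exists on the pretraining model, not on the fine-tuned CTC wrapper. A minimal sketch of the failure pattern (the class names are borrowed from fairseq, but the bodies here are stand-ins, not the real implementation):

```python
# Stand-in for the pretrained model: it defines the method that strips
# the quantizer/projection heads used only during self-supervised pretraining.
class Wav2Vec2Model:
    def remove_pretraining_modules(self):
        pass

# Stand-in for the fine-tuned CTC model: no such method is defined,
# because it was already stripped during its own fine-tuning run.
class Wav2VecCtc:
    pass

def build_encoder(model):
    # Mirrors what Wav2VecEncoder.__init__ does with the loaded checkpoint.
    model.remove_pretraining_modules()

build_encoder(Wav2Vec2Model())  # works: the method exists

try:
    build_encoder(Wav2VecCtc())
except AttributeError as e:
    print(e)  # 'Wav2VecCtc' object has no attribute 'remove_pretraining_modules'
```

So the error is not a bug in your setup per se: it simply means `w2v_path` points at a checkpoint whose model class is `Wav2VecCtc` (already fine-tuned) rather than a pretraining checkpoint.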
Issue Analytics
- Created: 3 years ago
- Comments: 10
Top GitHub Comments
Hi, could you please shed light on how you modified the fine-tuned model into checkpoint_last? Thanks!
Hi,
You are right: "w2v_path" can be set to wav2vec2_vox_960_new.pt or to another pretrained model you trained yourself. In any case, make sure the model used here is a wav2vec model, not a wav2vec_ctc model.
Good luck!
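One way to check which kind of model a checkpoint contains before passing it as w2v_path is to look at the model name stored in the checkpoint. The sketch below uses plain dicts to stand in for loaded checkpoints; `checkpoint_model_name` is a hypothetical helper, and in practice the state comes from `torch.load("model.pt")`, where `cfg` may be an OmegaConf object rather than a plain dict:

```python
def checkpoint_model_name(state):
    """Return the model architecture name stored in a fairseq checkpoint dict.

    Newer fairseq checkpoints carry a hydra `cfg`; older ones carry an
    argparse-style `args` namespace with an `arch` attribute.
    """
    cfg = state.get("cfg")
    if cfg is not None:
        return cfg["model"]["_name"]
    return getattr(state.get("args"), "arch", None)

# Simulated checkpoints (real ones come from torch.load on the .pt file):
pretrained = {"cfg": {"model": {"_name": "wav2vec2"}}}
finetuned = {"cfg": {"model": {"_name": "wav2vec_ctc"}}}

for label, state in [("pretrained", pretrained), ("finetuned", finetuned)]:
    arch = checkpoint_model_name(state)
    verdict = "usable as w2v_path" if arch != "wav2vec_ctc" else "use finetune_from_model instead"
    print(f"{label}: {arch} -> {verdict}")
```

If the name comes back as wav2vec_ctc, the checkpoint belongs in checkpoint.finetune_from_model (as in the config below), not in model.w2v_path.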
------------------ Original message ------------------ Sent: Saturday, April 10, 2021, 10:12 AM. Subject: Re: [pytorch/fairseq] AttributeError: 'Wav2VecCtc' object has no attribute 'remove_pretraining_modules' (#2929)
@ksingla025, in the hydra config you do it like this:

checkpoint:
  finetune_from_model: /hdd/fairseq/eval/wav2vec2_vox_960h_new.pt
  no_epoch_checkpoints: true
  best_checkpoint_metric: wer
  ...
model:
  _name: wav2vec_ctc
  w2v_path: /hdd/fairseq/eval/wav2vec_vox_new.pt
  apply_mask: true
  mask_prob: 0.5
  mask_channel_prob: 0.5
  mask_channel_length: 64
  layerdrop: 0.1
  activation_dropout: 0.1
  feature_grad_mult: 0.0
  freeze_finetune_updates: 10000
Hi, Nick. I am fine-tuning wav2vec2_vox_960_new.pt with my own corpus, and I am a newbie to fairseq. What is your w2v_path? I thought it should also be the path to wav2vec2_vox_960_new.pt. Could you please explain?