How to load pre-trained wav2vec2.0 models
❓ Questions and Help
Before asking:
- search the issues.
- search the docs.
What is your question?
I would like to load the pre-trained wav2vec 2.0 models as described in the README (https://github.com/pytorch/fairseq/tree/master/examples/wav2vec). My goal is to export the pre-trained model to ONNX, following the instructions given here: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html.
In order to run torch.onnx.export(), I need the model as an input. I am having problems loading the pre-trained model in order to export it.
Code
I have tried the code snippets in the wav2vec2 README (https://github.com/pytorch/fairseq/tree/master/examples/wav2vec):
```python
import torch
import fairseq

cp = torch.load('path/to/file/wav2vec_small_10m.pt', map_location=torch.device('cpu'))
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([cp])
```
I am getting the following error:
AttributeError: 'dict' object has no attribute 'replace'
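One plausible reading of this traceback (my assumption; I have not traced the fairseq internals): load_model_ensemble_and_task expects a list of checkpoint file *paths*, and somewhere a string method such as .replace is called on each entry, which fails when handed an already-loaded checkpoint dict. A pure-Python illustration of that failure mode (the helper below is hypothetical, not fairseq code):

```python
# Hypothetical stand-in for a loader that expects file *paths* (strings),
# not already-loaded checkpoint dicts. Illustration only, not fairseq code.
def normalize_checkpoint_path(path):
    # str.replace works on a string path...
    return path.replace("\\", "/")

print(normalize_checkpoint_path("path/to/file/wav2vec_small_10m.pt"))
# → path/to/file/wav2vec_small_10m.pt

# ...but raises AttributeError when handed a dict, mirroring the error above.
try:
    normalize_checkpoint_path({"model": {}, "args": None})
except AttributeError as e:
    print(e)  # → 'dict' object has no attribute 'replace'
```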
Looking at the documentation here: https://pytorch.org/tutorials/beginner/saving_loading_models.html, I tried
```python
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
```
But I am not sure how exactly to obtain the model object so that I can pass it to torch.onnx.export().
What have you tried?
- Tried loading the pre-trained model with the code snippet above, as detailed in https://github.com/pytorch/fairseq/tree/master/examples/wav2vec
- Tried loading the pre-trained model as outlined in https://pytorch.org/tutorials/beginner/saving_loading_models.html
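If the loader really does want file paths rather than loaded dicts (again my assumption, not verified against fairseq master), then passing the path string directly might avoid the AttributeError. A guarded sketch that degrades cleanly when fairseq or the checkpoint file is absent:

```python
# Assumption: load_model_ensemble_and_task takes a list of checkpoint *paths*,
# not the result of torch.load. Guarded so it fails softly if fairseq or the
# checkpoint file is missing.
try:
    import fairseq
    models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
        ['path/to/file/wav2vec_small_10m.pt'])
    model = models[0]
    model.eval()  # switch to inference mode before ONNX export
    status = "loaded"
except (ImportError, OSError) as exc:
    status = f"could not load: {type(exc).__name__}"
print(status)
```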
What’s your environment?
- fairseq Version (e.g., 1.0 or master): master
- PyTorch Version (e.g., 1.0): 1.7.0
- OS (e.g., Linux): macOS 10.15.7
- How you installed fairseq (pip, source): source
- Build command you used (if compiling from source): CFLAGS="-stdlib=libc++"; pip install --editable ./
- Python version: Python 3.8.5
- CUDA/cuDNN version: None
- GPU models and configuration: None
- Any other relevant information:
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 2
- Comments: 13 (3 by maintainers)
Top GitHub Comments
Resolved by doing the following:
Testing with the wav2vec2.0 model with the same prescription, I get the error:
@alexeib Could you please clarify how to pass the letter dictionary to the model loading sequence?
I run