Saving wav2vec2 model results in serialization error
Hi,
I am trying to save the compiled model using the code below:
w2v = torch.load(model_path)
model = Wav2VecCtc.build_model(args, target_dict)
model.load_state_dict(w2v["model"], strict=True)
torch.save(model, 'test_again.pt')
But whenever I execute this code, the model builds successfully and then torch.save throws an error:
I did not encounter this error in the non-Hydra version of fairseq. Can you please let me know what can be done?
Thanks!
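A common workaround, not taken from this thread but following the general PyTorch recommendation, is to save only the model's state_dict instead of pickling the whole model object; this sidesteps serialization failures caused by unpicklable attributes (such as Hydra/OmegaConf configs) attached to the module. A minimal sketch, using a stand-in nn.Module in place of the Wav2VecCtc model above:

```python
import torch

# Hypothetical stand-in for the fairseq model built above; any nn.Module works.
model = torch.nn.Linear(4, 2)

# Save only the weights, not the whole (possibly unpicklable) module object.
torch.save(model.state_dict(), "test_again.pt")

# To restore, rebuild the model the same way and load the weights back in.
restored = torch.nn.Linear(4, 2)
restored.load_state_dict(torch.load("test_again.pt"))
restored.eval()
```

The trade-off is that loading requires re-running the model-building code (here, the Wav2VecCtc.build_model call) before load_state_dict, but the saved file contains only plain tensors and pickles cleanly.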
Issue Analytics
- Created 2 years ago
- Comments: 7 (4 by maintainers)
Top Results From Across the Web
- Saving wav2vec2 model results in serialization error #3503: Hi, I am trying to save the compiled model using the code below: w2v = torch.load(model_path) model = Wav2VecCtc.build_model(args, ...
- Issues saving and loading wav2vec2 models fine tuned using ...: After training some toy models, I realized that I couldn't load from the checkpoints or save and reload the model in the same...
- Running out of memory with pytorch - Stack Overflow: After each model finishes their job, DataParallel collects and merges the results before returning it to you.
- Saving and loading models for inference in PyTorch: Saving the model's state_dict with the torch.save() function will give you ... The disadvantage of this approach is that the serialized data is...
- Wav2Vec2.0 on the Edge: Performance Evaluation | DeepAI: 02/12/22 - Wav2Vec2.0 is a state-of-the-art model which learns speech ... The python based modules are optimized, serialized and saved in ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@villmow @harveenchadha Hi there,
Thank you very much for your solution, but it still doesn't work in my code.
I inserted your snippet at line 460 of fairseq/fairseq/dataclass/utils.py, and my function now looks like this:
Is this the right way to use your snippet?
I am training my custom task using multiple GPUs and encountered a similar error.
More details can be found at https://github.com/pytorch/fairseq/issues/3634
Thanks again.
I receive the same error when training a model with fairseq-hydra-train on multiple GPUs. Single-GPU training works.