No attribute '_mp_fn' when fine-tuning mbart for en-ro translation task using TPU
I followed the TPU example in the examples folder and found that xla_spawn.py calls

```python
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
```

but finetune.py does not define the `_mp_fn` found in some of the other training scripts.
I get

```
Traceback (most recent call last):
  File "examples/xla_spawn.py", line 72, in <module>
    main()
  File "examples/xla_spawn.py", line 68, in main
    xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
AttributeError: module 'finetune' has no attribute '_mp_fn'
```
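For context, this is roughly what xla_spawn.py does (a simplified sketch, not the exact source): it imports the training script as a module and hands its `_mp_fn` to `xmp.spawn`, so any script launched this way has to define that function.

```python
# Simplified sketch of examples/xla_spawn.py's core logic (paraphrased, not verbatim).
import importlib
import sys
from pathlib import Path

import torch_xla.distributed.xla_multiprocessing as xmp


def spawn_training_script(training_script, script_args, num_cores):
    # Import the training script (e.g. finetune.py) as a module.
    script_path = Path(training_script)
    sys.path.append(str(script_path.parent.resolve()))
    mod = importlib.import_module(script_path.stem)

    # Let the script's own argument parser see the forwarded arguments.
    sys.argv = [training_script] + list(script_args)

    # The failing line: the imported module must expose `_mp_fn(index)`.
    xmp.spawn(mod._mp_fn, args=(), nprocs=num_cores)
```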
I tried to fix it by adding an `_mp_fn`:

```python
def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()
```

calling `main()` both with and without args (`main(args)`), but neither worked.
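Adding that wrapper by itself cannot fix the Lightning-based script: `_mp_fn` only gives xla_spawn.py an entry point, while the TPU handling in a PL script has to go through the PL Trainer, which spawns the TPU processes on its own. A rough sketch of that PL-side pattern, for contrast (assumes the PL ~1.x `tpu_cores` argument; illustrative only):

```python
# Rough sketch (not the repo's code): with PyTorch Lightning the TPU processes
# are spawned by the PL Trainer itself, so such a script is run directly rather
# than through xla_spawn.py. `tpu_cores` is the PL ~1.x argument name.
import pytorch_lightning as pl


def train_on_tpu(model: pl.LightningModule, train_dataloader):
    trainer = pl.Trainer(
        tpu_cores=8,   # PL calls xmp.spawn internally for these cores
        max_epochs=1,
    )
    trainer.fit(model, train_dataloader)
```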
This is now supported by `Seq2SeqTrainer`, which doesn't use PL. See https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/finetune_tpu.sh. So, lightning_base.py is not ready for TPU yet.
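Since the answer only points at the shell script, here is a minimal sketch of what a Trainer-based (non-Lightning) mBART en-ro fine-tuning script could look like, including the `_mp_fn` hook that xla_spawn.py expects. The checkpoint, dataset slice, and hyperparameters are illustrative assumptions, not taken from finetune_tpu.sh:

```python
# Minimal sketch of a Trainer-based (non-PL) mBART en-ro fine-tuning script
# that exposes `_mp_fn` for xla_spawn.py. Checkpoint, dataset, and
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    DataCollatorForSeq2Seq,
    MBartForConditionalGeneration,
    MBartTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)


def main():
    model_name = "facebook/mbart-large-en-ro"  # assumed checkpoint
    tokenizer = MBartTokenizer.from_pretrained(
        model_name, src_lang="en_XX", tgt_lang="ro_RO"
    )
    model = MBartForConditionalGeneration.from_pretrained(model_name)

    # Assumed dataset: a small WMT16 en-ro slice, just to illustrate the pipeline.
    raw = load_dataset("wmt16", "ro-en", split="train[:1000]")

    def preprocess(batch):
        sources = [ex["en"] for ex in batch["translation"]]
        targets = [ex["ro"] for ex in batch["translation"]]
        # Tokenize sources and targets; `text_target` fills in the labels.
        return tokenizer(sources, text_target=targets, max_length=128, truncation=True)

    train_dataset = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

    training_args = Seq2SeqTrainingArguments(
        output_dir="mbart-en-ro-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    )

    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()


def _mp_fn(index):
    # Entry point expected by examples/xla_spawn.py.
    main()


if __name__ == "__main__":
    main()
```

On TPU such a script would then be launched through the spawner, e.g. `python examples/xla_spawn.py --num_cores 8 my_finetune.py ...` (the exact flags used by finetune_tpu.sh may differ).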