
Reproducing WMT 14 En-Fr (Transformer)

See original GitHub issue

Hi,

I’m trying to reproduce the WMT 14 En-Fr results from the “Scaling NMT” paper. It worked out for WMT 14 En-De with the provided preprocessing script and hyper-parameters. For WMT 14 En-Fr, however, the validation perplexity keeps going up and down instead of decreasing steadily. My command:

python3.6 train.py data-bin/wmt14_en_fr_joined_dict --arch transformer_vaswani_wmt_en_fr_big --share-all-embeddings --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 --lr 0.001 --min-lr 1e-09 --dropout 0.1 --weight-decay 0.0 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --max-tokens 3584 --update-freq 16
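
(For context, the data-bin/wmt14_en_fr_joined_dict directory is binarized with a single dictionary shared between source and target, which is what --share-all-embeddings requires. A minimal sketch of that step, assuming the prepare-wmt14en2fr.sh example script from fairseq’s examples/translation directory — exact paths may differ:)

```
# Sketch only: assumes fairseq's examples/translation/prepare-wmt14en2fr.sh
# has produced BPE-tokenized train/valid/test files under wmt14_en_fr/.
bash examples/translation/prepare-wmt14en2fr.sh

# Binarize with a joined source/target dictionary so that
# --share-all-embeddings works at training time.
python preprocess.py --source-lang en --target-lang fr \
    --trainpref wmt14_en_fr/train --validpref wmt14_en_fr/valid \
    --testpref wmt14_en_fr/test \
    --destdir data-bin/wmt14_en_fr_joined_dict --joined-dictionary
```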

Any suggestions for a better set of parameters?

Cheers, Stephan

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 12 (4 by maintainers)

Top GitHub Comments

3 reactions
starsplash commented, Aug 14, 2018

Good news! I could reproduce your results on WMT En-Fr after switching to PyTorch v0.4.1 (43.1 BLEU on newstest14).

However, the OOM rate is still around 0.10. To avoid it I also tried a batch size of 4096 and got the same results.
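
(A back-of-envelope check on the effective batch sizes, added here for context and not part of the original comment; it assumes 8 GPUs for the original command, as in the Scaling NMT recipe:)

```
# effective tokens per optimizer step ≈ max-tokens × update-freq × num_gpus
echo $(( 3584 * 16 * 8 ))   # original 8-GPU command  -> 458752
echo $(( 5120 * 1 * 128 ))  # this 128-GPU run        -> 655360
```

The two per-update token budgets are of the same order, which is consistent with the runs reaching similar results.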

Here is the log for batch size 5120:

| epoch 002 | valid on 'valid' subset | valid_loss 3.60887 | valid_nll_loss 1.81177 | valid_ppl 3.51 | num_updates 4295 | best 3.60887
| epoch 003 | valid on 'valid' subset | valid_loss 3.35868 | valid_nll_loss 1.5761 | valid_ppl 2.98 | num_updates 6446 | best 3.35868
| epoch 004 | valid on 'valid' subset | valid_loss 3.2832 | valid_nll_loss 1.49871 | valid_ppl 2.83 | num_updates 8597 | best 3.2832
| epoch 005 | valid on 'valid' subset | valid_loss 3.2473 | valid_nll_loss 1.45568 | valid_ppl 2.74 | num_updates 10747 | best 3.2473
| epoch 006 | valid on 'valid' subset | valid_loss 3.21693 | valid_nll_loss 1.43691 | valid_ppl 2.71 | num_updates 12897 | best 3.21693
| epoch 007 | valid on 'valid' subset | valid_loss 3.20082 | valid_nll_loss 1.40804 | valid_ppl 2.65 | num_updates 15048 | best 3.20082
| epoch 008 | valid on 'valid' subset | valid_loss 3.1874 | valid_nll_loss 1.39752 | valid_ppl 2.63 | num_updates 17198 | best 3.1874
| epoch 009 | valid on 'valid' subset | valid_loss 3.17369 | valid_nll_loss 1.38668 | valid_ppl 2.61 | num_updates 19348 | best 3.17369
| epoch 010 | valid on 'valid' subset | valid_loss 3.16504 | valid_nll_loss 1.37717 | valid_ppl 2.60 | num_updates 21498 | best 3.16504
| epoch 011 | valid on 'valid' subset | valid_loss 3.15988 | valid_nll_loss 1.36966 | valid_ppl 2.58 | num_updates 23648 | best 3.15988
| epoch 012 | valid on 'valid' subset | valid_loss 3.15066 | valid_nll_loss 1.36187 | valid_ppl 2.57 | num_updates 25798 | best 3.15066
| epoch 013 | valid on 'valid' subset | valid_loss 3.14477 | valid_nll_loss 1.35529 | valid_ppl 2.56 | num_updates 27949 | best 3.14477
| epoch 014 | valid on 'valid' subset | valid_loss 3.14406 | valid_nll_loss 1.35602 | valid_ppl 2.56 | num_updates 30099 | best 3.14406
| epoch 015 | valid on 'valid' subset | valid_loss 3.13829 | valid_nll_loss 1.3448 | valid_ppl 2.54 | num_updates 32249 | best 3.13829
| epoch 016 | valid on 'valid' subset | valid_loss 3.13028 | valid_nll_loss 1.34388 | valid_ppl 2.54 | num_updates 34399 | best 3.13028
| epoch 017 | valid on 'valid' subset | valid_loss 3.12723 | valid_nll_loss 1.33635 | valid_ppl 2.53 | num_updates 36549 | best 3.12723
| epoch 018 | valid on 'valid' subset | valid_loss 3.12526 | valid_nll_loss 1.3339 | valid_ppl 2.52 | num_updates 38699 | best 3.12526
| epoch 019 | valid on 'valid' subset | valid_loss 3.12183 | valid_nll_loss 1.33137 | valid_ppl 2.52 | num_updates 40848 | best 3.12183
| epoch 020 | valid on 'valid' subset | valid_loss 3.11855 | valid_nll_loss 1.33257 | valid_ppl 2.52 | num_updates 42998 | best 3.11855
| epoch 021 | valid on 'valid' subset | valid_loss 3.11717 | valid_nll_loss 1.32722 | valid_ppl 2.51 | num_updates 45149 | best 3.11717
| epoch 022 | valid on 'valid' subset | valid_loss 3.11449 | valid_nll_loss 1.3253 | valid_ppl 2.51 | num_updates 47299 | best 3.11449
| epoch 023 | valid on 'valid' subset | valid_loss 3.11412 | valid_nll_loss 1.32066 | valid_ppl 2.50 | num_updates 49449 | best 3.11412
| epoch 024 | valid on 'valid' subset | valid_loss 3.11065 | valid_nll_loss 1.32076 | valid_ppl 2.50 | num_updates 51599 | best 3.11065
| epoch 025 | valid on 'valid' subset | valid_loss 3.10901 | valid_nll_loss 1.32106 | valid_ppl 2.50 | num_updates 53749 | best 3.10901
| epoch 026 | valid on 'valid' subset | valid_loss 3.10663 | valid_nll_loss 1.3212 | valid_ppl 2.50 | num_updates 55899 | best 3.10663
| epoch 027 | valid on 'valid' subset | valid_loss 3.10602 | valid_nll_loss 1.31865 | valid_ppl 2.49 | num_updates 58049 | best 3.10602
| epoch 028 | valid on 'valid' subset | valid_loss 3.10591 | valid_nll_loss 1.31143 | valid_ppl 2.48 | num_updates 60199 | best 3.10591
| epoch 029 | valid on 'valid' subset | valid_loss 3.10283 | valid_nll_loss 1.3149 | valid_ppl 2.49 | num_updates 62350 | best 3.10283
| epoch 030 | valid on 'valid' subset | valid_loss 3.10323 | valid_nll_loss 1.31422 | valid_ppl 2.49 | num_updates 64499 | best 3.10283
| epoch 031 | valid on 'valid' subset | valid_loss 3.09931 | valid_nll_loss 1.31243 | valid_ppl 2.48 | num_updates 66649 | best 3.09931
| epoch 032 | valid on 'valid' subset | valid_loss 3.09965 | valid_nll_loss 1.31103 | valid_ppl 2.48 | num_updates 68799 | best 3.09931
| epoch 033 | valid on 'valid' subset | valid_loss 3.09943 | valid_nll_loss 1.30802 | valid_ppl 2.48 | num_updates 70949 | best 3.09931
| epoch 034 | valid on 'valid' subset | valid_loss 3.09991 | valid_nll_loss 1.30598 | valid_ppl 2.47 | num_updates 73100 | best 3.09931
| epoch 035 | valid on 'valid' subset | valid_loss 3.09615 | valid_nll_loss 1.30697 | valid_ppl 2.47 | num_updates 75250 | best 3.09615
| epoch 036 | valid on 'valid' subset | valid_loss 3.09759 | valid_nll_loss 1.30637 | valid_ppl 2.47 | num_updates 77400 | best 3.09615
| epoch 037 | valid on 'valid' subset | valid_loss 3.09649 | valid_nll_loss 1.30375 | valid_ppl 2.47 | num_updates 79550 | best 3.09615
| epoch 038 | valid on 'valid' subset | valid_loss 3.09297 | valid_nll_loss 1.30488 | valid_ppl 2.47 | num_updates 81700 | best 3.09297
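
(For reference, a BLEU score like the 43.1 above can be computed with fairseq’s generate.py; the checkpoint path, beam size, and length penalty below are illustrative assumptions, not values reported in this thread:)

```
# Decode the test set and report BLEU; beam/lenpen values are assumptions.
python generate.py data-bin/wmt14_en_fr_joined_dict \
    --path output/checkpoint_best.pt \
    --batch-size 128 --beam 5 --lenpen 0.6 --remove-bpe
```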

0 reactions
starsplash commented, Aug 11, 2018

It seems I’m still not able to reproduce your setup. Here are more details about my current setup:

  • 32 nodes × 4 V100s (128 GPUs in total)
  • CUDA 9.1
  • NCCL 2.2.12+cuda9.1
  • current PyTorch and fairseq master
Namespace(adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_softmax_cutoff=None, arch='transformer_vaswani_wmt_en_de_big', attention_dropout=0.0, clip_norm=0.0, criterion='label_smoothed_cross_entropy', data='data-bin', decoder_attention_heads=16, decoder_embed_dim=1024, decoder_embed_path=None, decoder_ffn_embed_dim=4096, decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, device_id=0, distributed_backend='nccl', distributed_init_method='tcp://myhost.com:31999', distributed_port=-1, distributed_rank=0, distributed_world_size=128, dropout=0.1, encoder_attention_heads=16, encoder_embed_dim=1024, encoder_embed_path=None, encoder_ffn_embed_dim=4096, encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, fp16=True, keep_interval_updates=-1, label_smoothing=0.1, left_pad_source='True', left_pad_target='False', log_format=None, log_interval=1000, lr=[0.0007], lr_scheduler='inverse_sqrt', lr_shrink=0.1, max_epoch=0, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=5120, max_update=0, min_loss_scale=0.0001, min_lr=1e-09, momentum=0.99, no_epoch_checkpoints=False, no_progress_bar=False, no_save=False, no_token_positional_embeddings=False, optimizer='adam', raw_text=False, relu_dropout=0.0, restore_file='checkpoint_last.pt', save_dir='output', save_interval=1, save_interval_updates=0, seed=2, sentence_avg=False, share_all_embeddings=True, share_decoder_input_output_embed=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='translation', train_subset='train', update_freq=[1], valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=4000, weight_decay=0.0) 
| [en] dictionary: 44512 types 
| [fr] dictionary: 44512 types 
| data-bin train 35760411 examples 
| data-bin valid 26853 examples 
| model transformer_vaswani_wmt_en_de_big, criterion LabelSmoothedCrossEntropyCriterion 
| num. model params: 221937664 
| training on 128 GPUs 
| max tokens per GPU = 5120 and max sentences per GPU = None
| epoch 001 | loss 6.523 | nll_loss 5.146 | ppl 35.40 | wps 602670 | ups 0.9 | wpb 617158 | bsz 16624 | num_updates 2147 | lr 0.000375771 | gnorm 1.305 | clip 100% | oom 0.105263 | loss_scale 16.000 | wall 2390 
| epoch 001 | valid on 'valid' subset | valid_loss 4.14312 | valid_nll_loss 2.40969 | valid_ppl 5.31 | num_updates 2147 
| epoch 002 | loss 4.145 | nll_loss 2.451 | ppl 5.47 | wps 614640 | ups 1.0 | wpb 617150 | bsz 16624 | num_updates 4296 | lr 0.000675454 | gnorm 0.979 | clip 100% | oom 0.0977654 | loss_scale 4.000 | wall 4562 
| epoch 002 | valid on 'valid' subset | valid_loss 3.59712 | valid_nll_loss 1.8516 | valid_ppl 3.61 | num_updates 4296 | best 3.59712
| epoch 003 | loss 3.469 | nll_loss 1.723 | ppl 3.30 | wps 614948 | ups 1.0 | wpb 617154 | bsz 16625 | num_updates 6447 | lr 0.000551378 | gnorm 0.711 | clip 100% | oom 0.101443 | loss_scale 8.000 | wall 6762 
| epoch 003 | valid on 'valid' subset | valid_loss 3.39631 | valid_nll_loss 1.65069 | valid_ppl 3.14 | num_updates 6447 | best 3.39631
| epoch 004 | loss 3.346 | nll_loss 1.593 | ppl 3.02 | wps 598255 | ups 1.0 | wpb 617154 | bsz 16625 | num_updates 8598 | lr 0.000477452 | gnorm 0.573 | clip 100% | oom 0.0967667 | loss_scale 16.000 | wall 9015 
| epoch 004 | valid on 'valid' subset | valid_loss 3.32489 | valid_nll_loss 1.5773 | valid_ppl 2.98 | num_updates 8598 | best 3.32489 

It seems my OOM rate is higher than yours (0.10 vs. 0.02). Which CUDA version are you using? Any other ideas?
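
(A quick way to print the versions in question from the running environment — a minimal snippet added for convenience, not part of the original comment:)

```
# Print the PyTorch, CUDA, and NCCL versions as seen by the interpreter.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.nccl.version())"
```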
