
mT5 getting NaNs with fp16

See original GitHub issue

Environment info

  • transformers version: 4.4.2
  • Platform: linux
  • Python version: 3.7
  • PyTorch version (GPU?): 1.8
  • Tensorflow version (GPU?): -
  • Using GPU in script?: -
  • Using distributed or parallel set-up in script?: -

Who can help

t5: @patrickvonplaten, @patil-suraj

Information

I am using the mt5-small model:

  • the problem arises when using fp16 with mt5

The task I am working on is:

  • translation

To reproduce

Steps to reproduce the behavior:

python run_translation.py \
    --model_name_or_path google/mt5-small \
    --do_train --do_eval \
    --source_lang en --target_lang ro \
    --dataset_name wmt16 --dataset_config_name ro-en \
    --output_dir test/tst-translation \
    --per_device_train_batch_size=4 --per_device_eval_batch_size=4 \
    --overwrite_output_dir --predict_with_generate \
    --max_train_samples 100 --fp16
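
For a quicker check of the same symptom without the full training run, a single fp16 forward pass is enough. The sketch below is illustrative only (the example sentence pair is arbitrary, and whether the very first batch already overflows can vary):

# Minimal fp16 sanity check for google/mt5-small (illustrative sketch, not from the report)
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small").half().cuda().eval()

# Any short source/target pair will do; this one is just a placeholder.
inputs = tokenizer("The house is small.", return_tensors="pt").to("cuda")
labels = tokenizer("Casa este mică.", return_tensors="pt").input_ids.to("cuda")

with torch.no_grad():
    out = model(**inputs, labels=labels)

print("loss:", out.loss.item())                               # nan here reproduces the reported symptom
print("nan in logits:", torch.isnan(out.logits).any().item()) # True if activations overflowed in fp16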

outputs:

***** eval metrics *****
  epoch                     =     3.0
  eval_bleu                 =  0.0039
  eval_gen_len              =    2.95
  eval_loss                 =     nan
  eval_mem_cpu_alloc_delta  =     4MB
  eval_mem_cpu_peaked_delta =     5MB
  eval_mem_gpu_alloc_delta  =     0MB
  eval_mem_gpu_peaked_delta =  1080MB
  eval_runtime              = 72.1865
  eval_samples              =    1999
  eval_samples_per_second   =  27.692

Expected behavior

Being able to use fp16 with mT5 models. Thank you very much for your help; running these models in fp16 is really crucial for me so that I can fit more data onto the older GPUs I have access to, and I appreciate your help a lot.
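
For context (this is general fp16 behavior rather than something stated in the report): float16 has a maximum finite value of about 65504, so a large activation overflows to inf when cast down, and an inf flowing through a mean or normalization step turns into nan, which matches the layer-norm warnings further down this thread. A tiny illustration:

import torch

# float16 tops out at 65504; anything larger overflows to inf when cast down.
x = torch.tensor([70000.0, 1.0, 2.0])
h = x.half()
print(h)                        # tensor([inf, 1., 2.], dtype=torch.float16)

# Once one value is inf, any mean over it is inf, and subtracting that mean
# (as a normalization step does) produces nan.
mean = h.float().mean().half()  # inf
print(h - mean)                 # tensor([nan, -inf, -inf], dtype=torch.float16)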

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

2 reactions
dorost1234 commented, Mar 31, 2021

Dear @stas00, I tested the code further. Without DeepSpeed it works fine once the feed-forward layer is set to float32, as suggested in the PR, but the moment I switch to DeepSpeed I still get the NaN issue in my code. I would greatly appreciate it if you could spare some of your precious time and suggest how to handle the same problem in the DeepSpeed case. Thank you very much.
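
For readers following along, the "feed-forward layer to float32" workaround mentioned above can look roughly like the wrapper below, which opts one block out of autocast; the class name and the usage lines are illustrative assumptions, not the code from the PR:

import torch
from torch import nn

class FP32Block(nn.Module):
    # Run one sub-module (e.g. a T5 feed-forward block) in float32 even while the
    # rest of the model trains under fp16 autocast. Illustrative sketch only.
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block.float()                      # keep this block's weights in fp32

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        orig_dtype = hidden_states.dtype
        with torch.cuda.amp.autocast(enabled=False):    # step out of autocast for this block
            out = self.block(hidden_states.float())     # compute in fp32
        return out.to(orig_dtype)                       # hand fp16 back to the rest of the model

# Hypothetical usage on a loaded T5/mT5 model (the last sub-layer of each block is the feed-forward):
# for layer in model.encoder.block:
#     layer.layer[-1] = FP32Block(layer.layer[-1])

With DeepSpeed's fp16 mode the engine typically casts the whole model to half itself, which would undo a per-module .float() and is consistent with this kind of wrapper only helping in the non-DeepSpeed runs described here.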

I also ran your debug code:

  0%|          | 0/38600 [00:00<?, ?it/s]
WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 5 has inf
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerFF has inf
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop end has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Stack loop start has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block after T5LayerSelfAttention has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5Block before T5LayerFF has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm variance has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:T5LayerNorm hidden_states before return has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 1 has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 2 has nans
WARNING:seq2seq.third_party.models.t5.debug_utils:gelu 3 has nans
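
The debug_utils module producing the warnings above is part of the poster's own seq2seq code, so its exact implementation isn't shown here; a minimal sketch of the same idea, forward hooks that flag non-finite outputs in every sub-module, could look like this:

import logging
import torch
from torch import nn

logger = logging.getLogger("nan_debug")

def attach_nan_detectors(model: nn.Module) -> None:
    # Register a forward hook on every sub-module that warns whenever its
    # output contains nan or inf, mirroring the style of the warnings above.
    def make_hook(name: str):
        def hook(module, inputs, output):
            tensors = output if isinstance(output, (tuple, list)) else (output,)
            for t in tensors:
                if torch.is_tensor(t) and t.is_floating_point():
                    if torch.isnan(t).any():
                        logger.warning("%s has nans", name)
                    if torch.isinf(t).any():
                        logger.warning("%s has inf", name)
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

Calling attach_nan_detectors(model) before training reports the first layer whose output goes non-finite, which helps narrow the problem down to a specific path, much like the gelu/T5LayerFF trail above.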

1 reaction
dorost1234 commented, Apr 2, 2021

Dear @stas00, I tested the code more (without DeepSpeed) at a larger scale: when I train on opus100 (20 of its languages) with mt5-small, I still get NaNs after about 2000 iterations even with the fix applied. I will share reproducible code with you soon. Thanks a lot for all the great work.


Top Results From Across the Web

T5 fp16 issue is fixed - Transformers - Hugging Face Forums
Previously, there was an issue when using T5 models in fp16; it was producing nan loss and logits. Now on the...

Training Loss = 0.0, Validation Loss = nan - PyTorch Forums
Hello, I am training a model, but the training loss is zero and the validation loss is nan. This only happened when I...

[Discussion] How to fix Mixed Precision causing NaNs
In other words, FP16 dynamic range is sufficient for training, ... to move them into the range to keep them from becoming zeros...

How to avoid huggingface t5-based seq to seq suddenly ...
Furthermore, usually, losses seem to become nan after they start getting higher and higher, but in this case, the model seems to be...

Mixed precision training - Advanced (Part 1 v3) - Fast.ai forums
When I start with fp16() it helps me speed up by epochs by ~25% but later on, I am getting a bunch of...
