
Cross encoder FP16 AMP training: use_amp=True didn't work

See original GitHub issue

I encountered a problem when I tried to use AMP in CrossEncoder for FP16 training. My setup: torch==1.7.0, Python 3.7, GPU: [screenshot omitted]. I just set use_amp=True like this:

model.fit(train_dataloader=train_dataloader,
          evaluator=evaluator,
          epochs=num_epochs,
          warmup_steps=warmup_steps,
          output_path=model_save_path,
          use_amp=True)

However, training runs at the same speed as with use_amp=False, and I don’t know why. Could you help me fix this problem?
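For reference, a self-contained version of this setup looks roughly like the sketch below. The model name, training pairs, and output path are placeholders, not taken from the original report; only the use_amp=True flag in CrossEncoder.fit is the part under discussion.

from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", num_labels=1)

# Illustrative training pairs: (query, passage) with a relevance label.
train_samples = [
    InputExample(texts=["query", "relevant passage"], label=1.0),
    InputExample(texts=["query", "irrelevant passage"], label=0.0),
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)

# use_amp=True wraps the forward/backward pass in torch.cuda.amp;
# it only speeds things up on GPUs with hardware FP16 support.
model.fit(
    train_dataloader=train_dataloader,
    epochs=1,
    warmup_steps=10,
    output_path="output/cross-encoder-amp",
    use_amp=True,
)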

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
nreimers commented, Sep 7, 2022

They don’t support fp16
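(Presumably “they” are the GPUs in the screenshot above: on cards without hardware FP16 support, enabling AMP brings no visible speedup. A quick way to check this, using standard PyTorch calls:)

import torch

name = torch.cuda.get_device_name(0)
major, minor = torch.cuda.get_device_capability(0)
print(name, f"compute capability {major}.{minor}")
# Tensor Cores (fast FP16) require compute capability >= 7.0
# (Volta and newer); older cards run AMP without a real speedup.
print("fast FP16 expected:", major >= 7)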

0 reactions
liulizuel commented, Sep 7, 2022

They don’t support fp16

Oh, thank you. That’s a bit unfortunate.

Read more comments on GitHub >

Top Results From Across the Web

Cross-Encoders — Sentence-Transformers documentation
A Cross-Encoder does not produce a sentence embedding. ... Note, Cross-Encoders do not work on individual sentences; you have to pass sentence pairs...
Read more >
Train With Mixed Precision - NVIDIA Documentation Center
Mixed precision is the combined use of different numerical precisions in a computational method. Half precision (also known as FP16) data ...
Read more >
How To Fit a Bigger Model and Train It Faster - Hugging Face
This section gives brief ideas on how to make training faster and support bigger models ... While normally inference is done with fp16/amp...
Read more >
Automatic Mixed Precision (AMP) Training
Mixed Precision Training (ICLR 2018). ... Why would FP16 training diverge? ... A: NVIDIA people have been working hard to port the idea...
Read more >
Introducing native PyTorch automatic mixed precision for ...
Accuracy: AMP (FP16) vs. FP32. The advantage of using AMP for deep learning training is that the models converge to a similar final accuracy...
Read more >
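Under the hood, the use_amp path follows roughly the native PyTorch recipe the results above describe. A minimal standalone sketch of that recipe, with an illustrative model and random data (torch.cuda.amp API as of torch 1.7):

import torch

model = torch.nn.Linear(128, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    inputs = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 2, (32,), device="cuda")

    optimizer.zero_grad()
    # autocast runs eligible ops in FP16, the rest in FP32.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    # GradScaler scales the loss so FP16 gradients don't underflow.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()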
