
RuntimeError: "bernoulli_scalar_cuda_" not implemented for 'torch.cuda.HalfTensor'

See original GitHub issue

Hi, I’ve found that the latest PyTorch and the latest apex conflict, producing this error:

RuntimeError: "bernoulli_scalar_cuda_" not implemented for 'torch.cuda.HalfTensor'

and it seems to require remapping some of the tables in the amp model. Thank you!
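For context, here is a minimal sketch of the kind of call that could trigger this error on the affected builds. This is an assumption on my part; the issue only shows the error message, not the triggering code:

```python
import torch
import torch.nn.functional as F

# Hypothetical minimal reproduction (assumed; the original report hit this
# inside an apex amp training run, whose exact code path is not shown here).
x = torch.randn(8, 16, device="cuda").half()

# On the affected late-2018 PyTorch builds, the in-place dropout path
# sampled its mask with an in-place bernoulli_ call on the fp16 tensor,
# for which no CUDA half kernel existed, raising:
#   RuntimeError: "bernoulli_scalar_cuda_" not implemented for 'torch.cuda.HalfTensor'
y = F.dropout(x, p=0.5, training=True, inplace=True)
```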

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

2 reactions
ngimel commented, Nov 2, 2018

It still might be a good idea to use inplace=False, because that will trigger the fused dropout kernels, whereas inplace=True falls back to a slower implementation. The difference in memory use is not that large.
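Concretely, that advice amounts to something like the following sketch. The surrounding model is hypothetical; only the inplace=False choice comes from the comment:

```python
import torch
import torch.nn as nn

# Hypothetical fp16 model; only the Dropout flag reflects the advice above.
# inplace=False lets PyTorch dispatch to the fused CUDA dropout kernel,
# while inplace=True falls back to a slower, unfused implementation.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Dropout(p=0.1, inplace=False),  # was inplace=True
    nn.Linear(512, 10),
).cuda().half()

out = model(torch.randn(4, 512, device="cuda").half())
```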

0 reactions
jinserk commented, Nov 3, 2018

Oh, I didn’t know that. Thanks, @ngimel!

Read more comments on GitHub >

Top Results From Across the Web

RuntimeError: "exp" not implemented for 'torch.LongTensor'
I happened to follow this tutorial too. For me I just got the torch.arange to generate float type tensor. from position = torch.arange(0, ......
Read more >
RuntimeError: "lu_cuda" not implemented for 'Half'
Hi All, I've been starting to run my code on a GPU and started to change the default dtype via torch.set_default_type(torch.half).
Read more >
View not implemented for type torch.HalfTensor
Trying to use views with halfTensors, I get this error: >>> import torch >>> torch.__version__ '1.0.1.post2' >>> t = torch.tensor([1,2,3,4.]
Read more >
RuntimeError: arange_out not supported on CPUType for Half
HalfTensor ' and device(type='cuda', index=1) so everything should be in GPU memory … I dont understand what is the CPU data causing the...
Read more >
"addcmul_cuda" not implemented for 'ComplexFloat' - complex
The input, output and trainable weights in this module are with dtype=torch.complex64. When training this module with Adam optimizer on GPU ...
Read more >
