Crash after setting PyTorch default tensor type to cuda.FloatTensor
See original GitHub issue.
Hello folks at Facebook, setting the PyTorch default tensor type to cuda.FloatTensor causes Ax to crash in a Colab GPU instance.
You can check out a complete test setup in Colab. This is the culprit:
torch.set_default_tensor_type(torch.cuda.FloatTensor)
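For context, torch.set_default_tensor_type changes the dtype (and device) of every floating-point tensor constructed afterwards, which is why a single call can affect an entire library like Ax. A minimal CPU-only sketch of the mechanism (not the Ax repro itself, which needs a CUDA runtime):

```python
import torch

# By default, new floating-point tensors are float32 on the CPU.
assert torch.tensor([1.0]).dtype == torch.float32

# Changing the default tensor type affects all tensors created afterwards.
torch.set_default_tensor_type(torch.DoubleTensor)
assert torch.tensor([1.0]).dtype == torch.float64

# Restore the usual default so later code is unaffected.
torch.set_default_tensor_type(torch.FloatTensor)
```

On a GPU machine the issue's call, torch.set_default_tensor_type(torch.cuda.FloatTensor), works the same way but additionally makes new tensors land on the CUDA device.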
Keep up the great work, Enrico Bonetti Vieno
Issue Analytics
- State:
- Created 4 years ago
- Comments: 5 (4 by maintainers)
Top Results From Across the Web
Is there anything wrong with setting default tensor type to cuda?
I ran into a problem with allennlp's ElmoEncoder when I had set the default tensor to cuda.FloatTensor. In case anyone else...
If I'm not specifying to use CPU/GPU, which one is my script ...
PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors.
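As the snippet above notes, tensors live on the CPU unless moved explicitly; a small illustrative check (guarding the GPU path on availability, since .cuda() raises when no device is present):

```python
import torch

t = torch.ones(2, 3)
print(t.device)  # tensors are created on the CPU by default

# Moving to the GPU is an explicit step, and only valid if CUDA is available.
if torch.cuda.is_available():
    t_gpu = t.cuda()  # equivalent to t.to("cuda")
    print(t_gpu.device)
```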
PyTorch vs Apache MXNet
To install Apache MXNet with GPU support, you need to specify the CUDA version. ... Returns a copy of the tensor after casting to...
pytorch runtimeerror: expected scalar type long but found float - You ...
LongTensor is synonymous with integer. PyTorch won't accept a FloatTensor as categorical target, so it's telling you to cast your tensor to LongTensor...
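The cast the snippet describes is a one-liner; a hedged sketch using cross_entropy, which requires integer (LongTensor) class indices as targets:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)                    # 4 samples, 3 classes
target = torch.tensor([0.0, 2.0, 1.0, 0.0])   # float targets: wrong dtype here

# cross_entropy expects LongTensor class indices, so cast explicitly.
loss = F.cross_entropy(logits, target.long())
print(loss.item())
```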
PyTorch 1.6.0 Now Available | Exxact Blog
... first convert the tensor to a long tensor, then to float tensor. ... Fixed the crash problem when using BuildExtension.with_options ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
See https://github.com/pytorch/pytorch/issues/32494 for the issue and https://github.com/pytorch/pytorch/pull/32496 for the fix.
Thank you for the fast resolution, great job! 💪