Error for cuda and cpu
See original GitHub issue

  File ".../pytorch-seq2seq/seq2seq/models/EncoderRNN.py", line 68, in forward
    embedded = self.embedding(input_var)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 479, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 113, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1283, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
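This RuntimeError means the embedding weight has been moved to the GPU while the index tensor passed into `forward` still lives on the CPU. A minimal sketch reproducing and fixing the mismatch (the vocabulary size, batch shape, and variable names are illustrative assumptions, not the project's actual code):

```python
import torch
import torch.nn as nn

# Pick CUDA when available, otherwise fall back to CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# An embedding layer moved to the selected device, as EncoderRNN does.
embedding = nn.Embedding(num_embeddings=100, embedding_dim=16).to(device)

# Index tensors created without a device argument live on the CPU.
input_var = torch.randint(0, 100, (4, 10))

# On a CUDA machine this raises the error above, because the weight is
# on CUDA but argument 'index' is on CPU:
# embedded = embedding(input_var)

# Fix: move the index tensor to the same device as the layer.
embedded = embedding(input_var.to(device))
print(embedded.shape)  # torch.Size([4, 10, 16])
```

The same rule applies to every tensor fed to a model: inputs, targets, and masks must all be on the device the model's parameters are on.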
Issue Analytics
- Created 5 years ago
- Comments: 8
Top Results From Across the Web

model run fails with CUDA version error - b/c server is CPU ...
When I went to run it on our server, it failed out with complaints about insufficient CUDA drivers...which is b/c this server is...

How can I fix this expected CUDA got CPU error in PyTorch?
You are using nn.BatchNorm2d in a wrong way. BatchNorm is a layer, just like Conv2d. It has internal parameters and buffers.

CUDA out-of-mem error - Chaos Help Center
This error message indicates that a project is too complex to be cached in the GPU's memory. Each project contains a certain amount...

T69685 Cycles GPU+CPU error "CUDA error
When I try to set Cycles Rendering Device to CPU and GPU it doesn't work at all and does the same thing as...

Frequently Asked Questions — PyTorch 1.13 documentation
My model reports "cuda runtime error(2): out of memory" ... As the error message suggests, you have run out of memory on your...

The wrong code is at line 75 of supervised_trainer.py and line 38 of evaluator.py. I changed them to

device = torch.device('cuda:0') if torch.cuda.is_available() else -1

and CUDA is available now.

No, but you need to re-install the package after you make these corrections. Running

python setup.py install

again helped in my case.
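For context, the `-1` fallback in the line above is the legacy device-index convention (where `-1` means CPU) that older pytorch-seq2seq code expects; in modern PyTorch the idiomatic fallback is `torch.device('cpu')`, with model and batches moved explicitly via `.to(device)`. A short sketch of that modern pattern (the Linear layer and tensor shapes are illustrative assumptions, not the project's code):

```python
import torch

# Legacy convention from the comment above: -1 stands in for "CPU".
legacy_device = torch.device('cuda:0') if torch.cuda.is_available() else -1

# Modern, device-agnostic equivalent: always a torch.device object.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Move the model and every batch to the same device before the forward pass.
model = torch.nn.Linear(8, 2).to(device)
batch = torch.randn(4, 8).to(device)
out = model(batch)
print(out.shape)  # torch.Size([4, 2])
```

Keeping a single `device` object and routing all tensors through `.to(device)` avoids the CUDA/CPU backend mismatch regardless of which machine the code runs on.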