
Error for CUDA and CPU

See original GitHub issue
/pytorch-seq2seq/seq2seq/models/EncoderRNN.py", line 68, in forward
    embedded = self.embedding(input_var)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 479, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 113, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1283, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
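The error means the embedding weights live on the GPU while the index tensor passed to EncoderRNN.forward is still on the CPU. A minimal sketch of the mismatch and its fix, assuming input_var is the batch of token indices (sizes and names below are illustrative, not taken from the repository):

import torch
import torch.nn as nn

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# The model, and hence the embedding weight, lives on `device`.
embedding = nn.Embedding(num_embeddings=10000, embedding_dim=256).to(device)

# A batch of token indices created on the CPU, as in the failing call.
input_var = torch.randint(0, 10000, (32, 20))

# Moving the indices to the model's device before the lookup avoids the
# "Expected object of backend CUDA but got backend CPU" error.
embedded = embedding(input_var.to(device))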

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 8

Top GitHub Comments

5 reactions
v587su commented, May 21, 2019

The wrong code is at line 75 of supervised_trainer.py and line 38 of evaluator.py. I changed them to device = torch.device('cuda:0') if torch.cuda.is_available() else -1 and CUDA is available now.
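For reference, a sketch of the device selection described in that comment; the commented-out batch-iterator line is an assumption about where the value ends up, and -1 follows legacy torchtext's convention for "run on CPU":

import torch

# Replacement for the device selection at supervised_trainer.py:75 and evaluator.py:38.
device = torch.device('cuda:0') if torch.cuda.is_available() else -1

# The value is then handed on to the batch iterator, roughly:
# batch_iterator = torchtext.data.BucketIterator(..., device=device, repeat=False)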

0 reactions
isekulic commented, May 24, 2021

> The wrong code is at line 75 of supervised_trainer.py and line 38 of evaluator.py. I changed them to device = torch.device('cuda:0') if torch.cuda.is_available() else -1 and CUDA is available now.

Apart from these two files, are there any other files that need to be fixed? After fixing the two files, the code is still incorrect…

No, but you need to re-install the package after you do these corrections. Running python setup.py install again helped in my case.

Read more comments on GitHub >

Top Results From Across the Web

  • model run fails with CUDA version error - b/c server is CPU ...
    When I went to run it on our server, it failed out with complaints about insufficient CUDA drivers...which is b/c this server is...
  • How can I fix this expected CUDA got CPU error in PyTorch?
    You are using nn.BatchNorm2d in a wrong way. BatchNorm is a layer, just like Conv2d. It has internal parameters and buffers.
  • CUDA out-of-mem error - Chaos Help Center
    This error message indicates that a project is too complex to be cached in the GPU's memory. Each project contains a certain amount...
  • T69685 Cycles GPU+CPU error "CUDA error
    When I try to set Cycles Rendering Device to CPU and GPU it doesn't work at all and does the same thing as...
  • Frequently Asked Questions — PyTorch 1.13 documentation
    My model reports "cuda runtime error(2): out of memory" ... As the error message suggests, you have run out of memory on your...
