
torch and cuda version?

See original GitHub issue

Hello,

Thanks for sharing this project. I had some issues running the examples, and they seem to be related to the torch version. I am currently using torch 1.2.0 and CUDA 10.0; could you please tell me the latest torch and CUDA versions used with this project?
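For reference, the installed torch build and the CUDA toolkit it was compiled against can be checked like this:

```python
import torch

print(torch.__version__)         # e.g. "1.2.0"
print(torch.version.cuda)        # CUDA toolkit the build was compiled against, e.g. "10.0"
print(torch.cuda.is_available()) # whether a usable GPU is visible
```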

I am getting errors like this:

(.mpc) user@userlabpc:~/git_clone/mpc.pytorch/examples$ python pendulum.py 
Tmp dir: /tmp/tmp7hokco9j
  0%|                                                                                      | 0/100 [00:00<?, ?it/s]/pytorch/torch/csrc/autograd/python_function.cpp:638: UserWarning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
  0%|                                                                                      | 0/100 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "pendulum.py", line 80, in <module>
    )(x, QuadCost(Q, p), dx)
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/mpc.py", line 265, in forward
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/mpc.py", line 362, in solve_lqr_subproblem
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/lqr_step.py", line 112, in forward
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/lqr_step.py", line 303, in lqr_backward
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/pnqp.py", line 31, in pnqp
  File "/home/user/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/torch/tensor.py", line 325, in __rsub__
    return _C._VariableFunctions.rsub(self, other)
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `bitwise_not()` operator instead.
Traceback (most recent call last):
  File "/home/fer/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/fer/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/mpc.py", line 265, in forward
  File "/home/fer/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/mpc.py", line 362, in solve_lqr_subproblem
  File "/home/fer/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/lqr_step.py", line 114, in forward
  File "/home/fer/git_clone/mpc.pytorch/.mpc/lib/python3.5/site-packages/mpc-0.0.3-py3.5.egg/mpc/lqr_step.py", line 344, in lqr_forward
AttributeError: module 'torch' has no attribute 'any'

Thank you very much.
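As a side note on the first traceback: the error message points at the bool-tensor semantics that torch 1.2 enforces, where a boolean mask can no longer be inverted with subtraction. A minimal sketch of the old pattern and its replacement (illustrative variable names, not the library's actual code):

```python
import torch

mask = torch.tensor([True, False, True])

# Older code commonly inverted a boolean mask with subtraction:
# inverted = 1 - mask   # RuntimeError on torch >= 1.2 for bool tensors

# Newer PyTorch expects explicit bitwise negation instead:
inverted = ~mask                    # tensor([False,  True, False])
same = torch.bitwise_not(mask)      # equivalent spelling
```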

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 8 (4 by maintainers)

Top GitHub Comments

1 reaction
bamos commented, Sep 7, 2019

The GPU is usually useful for OptNet/MPC acceleration here if you’re solving many problems at the same time, since the operations are batched. Otherwise, if you’re just solving a single problem, it’s probably not going to be faster than the CPU version: no CPU-GPU transfers are slowing it down, it’s just the sequential nature of the solver.

Also, the torch JIT wasn’t really around/used much when I was developing this library, but it should give very significant performance gains here on both the CPU and GPU.
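As an illustration of that suggestion, wrapping a hot inner function in the TorchScript JIT looks roughly like this (the function below is a hypothetical stand-in, not code from mpc.pytorch):

```python
import torch

@torch.jit.script
def clamp_step(x: torch.Tensor, lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
    # Projected step: keep iterates inside simple box constraints.
    return torch.max(torch.min(x, upper), lower)

x = torch.randn(128, 4)
lo = torch.full_like(x, -1.0)
hi = torch.full_like(x, 1.0)
print(clamp_step(x, lo, hi).shape)  # torch.Size([128, 4])
```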

0 reactions
juanmed commented, Sep 7, 2019

I see, there is something I have not yet understood about your OptNet and Differentiable MPC papers then. I was thinking of doing real-time trajectory-tracking control using MPC for a linear system on the GPU, based on your work.

> It should work if you pass in all cuda tensors: https://github.com/locuslab/mpc.pytorch/blob/master/tests/test_mpc.py#L274
>
> But the sequential/iterative nature of the code doesn’t make it very GPU-amenable, so you might not see much gain from running on the GPU.

Could you please be more specific about the issue here? Are you talking about the problem of CPU-GPU loading/offloading in a loop? Thanks.
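For context, the linked test amounts to building every input on the same CUDA device before calling the solver; a rough sketch under assumed, illustrative shapes (the names n_batch/n_state/n_ctrl and the cost layout below are not taken from the test itself):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative problem sizes; the real ones come from your system and horizon.
T, n_batch, n_state, n_ctrl = 10, 8, 3, 1
n = n_state + n_ctrl

x_init = torch.randn(n_batch, n_state, device=device)
Q = torch.eye(n, device=device).repeat(T, n_batch, 1, 1)  # (T, n_batch, n, n)
p = torch.zeros(T, n_batch, n, device=device)

# Every tensor handed to the solver (initial state, cost terms, dynamics
# parameters) should live on the same device; mixing CPU and GPU tensors
# will raise errors inside the solve.
```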


