"--no-cuda" does not work
When using the --no-cuda argument, pix2tex returns an error:
(env) λ python pix2tex.py --no-cuda
Traceback (most recent call last):
  File "H:\pytlat\ocr\pix2tex.py", line 84, in <module>
    args, model, tokenizer = initialize(args)
  File "H:\pytlat\ocr\pix2tex.py", line 33, in initialize
    model.load_state_dict(torch.load(args.checkpoint))
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 853, in _load
    result = unpickler.load()
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I am using torch 1.7+cpu; CUDA is not installed, so I cannot use it.
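For reference, the error message itself spells out the workaround: pass map_location to torch.load so tensors saved on a CUDA device are mapped onto the CPU. The following is a minimal sketch of that change, wrapped in a hypothetical load_checkpoint_cpu helper around the load_state_dict call shown in the traceback; the actual pix2tex code may be structured differently.

```python
import torch

def load_checkpoint_cpu(model: torch.nn.Module, checkpoint_path: str) -> torch.nn.Module:
    # map_location remaps tensors that were saved on a CUDA device onto the CPU,
    # which avoids the "Attempting to deserialize object on a CUDA device" error.
    state_dict = torch.load(checkpoint_path, map_location=torch.device('cpu'))
    model.load_state_dict(state_dict)
    return model
```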
Issue Analytics
- Created: 3 years ago
- Comments: 9 (5 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Oh you’re right, I don’t know how I missed that torchvision was the problem. The issue was that I had the Arch package python-pytorch installed, while torch and torchvision were installed via pip. It apparently tried to use the system-provided torch together with the pip-provided torchvision. Uninstalling the python-pytorch package and reinstalling torch and torchvision via pip fixed it, and everything works now. Thanks.
No problem! The thing with the file is a bug right now. I discovered it a while back, but I’m currently not really allowed to commit to this repo. If you don’t have an Nvidia GPU in your system, that case is handled automatically (--no-cuda isn’t needed, but it also doesn’t change anything).
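For illustration, the automatic handling described here typically boils down to a device check like the following hedged sketch; choose_device is a hypothetical helper, not the actual pix2tex function.

```python
import torch

def choose_device(no_cuda: bool = False) -> torch.device:
    # Use the GPU only when CUDA is actually available and not disabled,
    # so a CPU-only machine works without passing --no-cuda.
    use_cuda = torch.cuda.is_available() and not no_cuda
    return torch.device('cuda' if use_cuda else 'cpu')
```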
From the error message it looks like torchvision is the problem here. Try reinstalling that package.
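If reinstalling does not immediately help, one way to spot the mixed system/pip installation described above is to print where torch and torchvision are actually imported from. A small sketch, assuming both packages import successfully:

```python
# Print the version and install location of torch and torchvision so a
# system-provided package mixed with a pip-provided one is easy to spot.
import torch
import torchvision

print("torch      ", torch.__version__, torch.__file__)
print("torchvision", torchvision.__version__, torchvision.__file__)
```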