RTX 3090 not compatible
See original GitHub issue

I've set up the environment according to the instructions, but when I train the model on my dataset with this script:
python main.py --num_classes 5 --num_points 200 --model pointMLP --workers 1
An error occurs:
Traceback (most recent call last):
File "main.py", line 273, in <module>
main()
File "main.py", line 84, in main
net = net.to(device)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 899, in to
return self._apply(convert)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
module._apply(fn)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
module._apply(fn)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
module._apply(fn)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
param_applied = fn(param)
File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 897, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
The following is the output of the command nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01 Driver Version: 455.45.01 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 On | 00000000:01:00.0 Off | N/A |
| 45% 53C P0 141W / 350W | 1MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I've tried different versions of PyTorch and cudatoolkit (10.2 and 11.3) but got the same error.
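The "provided PTX was compiled with an unsupported toolchain" error usually means the installed NVIDIA driver is older than the minimum required by the CUDA toolkit that built the kernels. Driver 455.45.01 (from the nvidia-smi output above) satisfies CUDA 11.1 but not 11.3, which would explain why the cudatoolkit 11.3 attempt fails. A quick stdlib-only sketch makes the comparison explicit; the minimum-driver values below are assumptions transcribed from NVIDIA's Linux CUDA compatibility table:

```python
# Minimum Linux driver version per CUDA toolkit release
# (assumed values, transcribed from NVIDIA's CUDA release-notes table).
MIN_DRIVER = {
    "10.2": (440, 33),
    "11.1": (455, 23),
    "11.3": (465, 19),
}

def driver_supports(driver: str, toolkit: str) -> bool:
    """Return True if a driver version string (e.g. '455.45.01')
    meets the given toolkit's minimum driver requirement."""
    parts = tuple(int(x) for x in driver.split("."))
    return parts[:2] >= MIN_DRIVER[toolkit]

# Driver from the nvidia-smi output above:
print(driver_supports("455.45.01", "11.1"))  # True  -> CUDA 11.1 builds should load
print(driver_supports("455.45.01", "11.3"))  # False -> 11.3-built PTX is rejected
```

If this diagnosis is right, the options are to upgrade the driver (to 465.19.01 or newer for CUDA 11.3 builds) or to install a PyTorch build compiled against CUDA 11.1 or older.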
Issue Analytics
- State:
- Created: 2 years ago
- Comments: 8 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I've also run into this issue. To fix it, I removed the dependency on pointnet2_ops, since it was only used once: https://github.com/ma-xu/pointMLP-pytorch/blob/c1d6235405a8e53027d5afa1349a368788fa2469/classification_ModelNet40/models/pointmlp.py#L159-L170

For anyone interested:
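The linked lines appear to use pointnet2_ops for farthest point sampling, which is what forces the custom CUDA build in the first place. A pure-PyTorch replacement along these lines avoids compiling any custom kernels; this is a sketch under the assumption that farthest point sampling is indeed the only op needed, and the function name is illustrative, not from the repo:

```python
import torch

def farthest_point_sample(xyz: torch.Tensor, npoint: int) -> torch.Tensor:
    """Iterative farthest point sampling in plain PyTorch.

    xyz: (B, N, 3) point coordinates.
    Returns: (B, npoint) indices of the sampled points.
    """
    B, N, _ = xyz.shape
    device = xyz.device
    centroids = torch.zeros(B, npoint, dtype=torch.long, device=device)
    # Running minimum squared distance from each point to the selected set.
    distance = torch.full((B, N), float("inf"), device=device)
    # Start from a random point in each batch element.
    farthest = torch.randint(0, N, (B,), dtype=torch.long, device=device)
    batch = torch.arange(B, device=device)
    for i in range(npoint):
        centroids[:, i] = farthest
        centroid = xyz[batch, farthest, :].unsqueeze(1)          # (B, 1, 3)
        dist = torch.sum((xyz - centroid) ** 2, dim=-1)          # (B, N)
        distance = torch.minimum(distance, dist)
        # Next pick: the point farthest from everything selected so far.
        farthest = torch.max(distance, dim=-1).indices
    return centroids
```

This runs on CPU or GPU and is slower than the fused CUDA op, but for a few hundred points per cloud (as in the command above, --num_points 200) the difference is negligible.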
@etaoxing Thanks a lot!