
RTX 3090 not compatible


I’ve set up the environment according to the instructions, but when I train the model on my dataset with this command:

python main.py --num_classes 5 --num_points 200 --model pointMLP --workers 1

the following error occurs:

Traceback (most recent call last):
  File "main.py", line 273, in <module>
    main()
  File "main.py", line 84, in main
    net = net.to(device)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 899, in to
    return self._apply(convert)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
    param_applied = fn(param)
  File "/home/ccbien/.conda/envs/point-mlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 897, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

The following is the output of the command ‘nvidia-smi’:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 3090    On   | 00000000:01:00.0 Off |                  N/A |
| 45%   53C    P0   141W / 350W |      1MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I’ve tried different versions of PyTorch and cudatoolkit (10.2, 11.3), but I get the same error.
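
A quick way to check whether the stock PyTorch build can drive the GPU at all, independent of the compiled pointnet2_ops extension (a minimal sketch, assuming the same conda environment):

import torch

# CUDA runtime the wheel was built against, and the card's compute capability
print(torch.version.cuda, torch.cuda.is_available())
print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))

# A plain tensor op on the GPU; if this also fails, the problem is the
# PyTorch/driver combination rather than the pointnet2_ops extension
x = torch.randn(8, 3, device="cuda")
print((x @ x.t()).sum().item())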


Top GitHub Comments

etaoxing commented, Mar 26, 2022 (4 reactions)

I’ve also run into this issue. To fix this, I removed the dependency on pointnet2_ops, since it was only used once: https://github.com/ma-xu/pointMLP-pytorch/blob/c1d6235405a8e53027d5afa1349a368788fa2469/classification_ModelNet40/models/pointmlp.py#L159-L170

For anyone interested:

        from pytorch3d.ops import sample_farthest_points, knn_points
        # could also switch to pytorch_geometric

        # fps_idx = torch.multinomial(torch.linspace(0, N - 1, steps=N).repeat(B, 1).to(xyz.device), num_samples=self.groups, replacement=False).long()
        # fps_idx = farthest_point_sample(xyz, self.groups).long()  # superseded by sample_farthest_points below
        # fps_idx = pointnet2_utils.furthest_point_sample(xyz, self.groups).long()  # [B, npoint]
        new_xyz, fps_idx = sample_farthest_points(xyz, K=self.groups)
        # new_xyz = index_points(xyz, fps_idx)  # [B, npoint, 3]
        new_points = index_points(points, fps_idx)  # [B, npoint, d]

        # idx = knn_point(self.kneighbors, xyz, new_xyz)
        _, idx, _ = knn_points(new_xyz, xyz, K=self.kneighbors, return_nn=False)
        # idx = query_ball_point(radius, nsample, xyz, new_xyz)
        grouped_points = index_points(points, idx)  # [B, npoint, k, d]

        if self.use_xyz:
            grouped_xyz = index_points(xyz, idx)  # [B, npoint, k, 3]
            grouped_points = torch.cat([grouped_points, grouped_xyz], dim=-1)  # [B, npoint, k, d+3]
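
A minimal shape check for the pytorch3d-based replacement above (a sketch, assuming pytorch3d is installed; the batch size, point count, and K values are arbitrary, and the usual [B, N, 3] layout is assumed):

import torch
from pytorch3d.ops import sample_farthest_points, knn_points

xyz = torch.randn(2, 200, 3)                                 # [B, N, 3] input cloud
new_xyz, fps_idx = sample_farthest_points(xyz, K=64)         # [2, 64, 3], [2, 64]
_, idx, _ = knn_points(new_xyz, xyz, K=24, return_nn=False)  # idx: [2, 64, 24]
print(new_xyz.shape, fps_idx.shape, idx.shape)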

ma-xu commented, Mar 26, 2022 (3 reactions)

@etaoxing Thanks a lot!

