It seems pytorch broke something again, which in turn broke poetry
- Poetry version: 1.2.2
- Python version: 3.10.8
- OS version and name: Windows 10
- pyproject.toml: We use a fresh Poetry setup for a new project; the steps to reproduce below lead to the same pyproject.toml.
- I am on the latest stable Poetry version, installed using a recommended method.
- I have searched the issues of this repo and believe that this is not a duplicate.
- I have consulted the FAQ and blog for any relevant entries or release notes.
- If an exception occurs when executing a command, I executed it again in debug mode (-vvv option) and have included the output below.
Issue
We noticed yesterday (03.11.22) that we could not install pytorch on a new system because one dependency, nvidia-cudnn-cu11, could not be satisfied. So I reproduced the setup with the following steps and ended up with the same issue:
$ poetry new test_torch_version_1.13.0
$ cd .\test_torch_version_1.13.0\
$ poetry add torch=1.13.0
which always results in the following error:
Creating virtualenv test-torch-version-1-13-0 in D:\Mitarbeiter\Kaupenjohann\15_python_ws\test_torch_version_1.13.0\.venv
Updating dependencies
Resolving dependencies...
Package operations: 6 installs, 1 update, 0 removals
• Updating setuptools (65.3.0 -> 65.5.0)
• Installing nvidia-cublas-cu11 (11.10.3.66)
• Installing nvidia-cuda-nvrtc-cu11 (11.7.99)
• Installing nvidia-cuda-runtime-cu11 (11.7.99)
• Installing nvidia-cudnn-cu11 (8.5.0.96)
• Installing typing-extensions (4.4.0)
RuntimeError
Unable to find installation candidates for nvidia-cudnn-cu11 (8.5.0.96)
at ~\AppData\Roaming\pypoetry\venv\lib\site-packages\poetry\installation\chooser.py:103 in choose_for
99│
100│ links.append(link)
101│
102│ if not links:
→ 103│ raise RuntimeError(f"Unable to find installation candidates for {package}")
104│
105│ # Get the best link
106│ chosen = max(links, key=lambda link: self._sort_key(package, link))
107│
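To make the failure mode concrete: the chooser discards every wheel whose tags the current interpreter/platform cannot install. A rough illustration of that filtering (not Poetry's actual code, just a sketch using the packaging library) shows why a manylinux-only file list comes up empty on win_amd64:
# Illustrative only, not Poetry's code: filter wheels by supported tags.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

available = [
    "nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl",
    "nvidia_cudnn_cu11-8.5.0.96-py3-none-manylinux1_x86_64.whl",
]
supported = set(sys_tags())  # tags installable on this interpreter/OS
links = [f for f in available if supported & set(parse_wheel_filename(f)[3])]
if not links:  # empty on win_amd64, hence the RuntimeError above
    print("Unable to find installation candidates")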
So we dug further, analyzed the poetry.lock, and looked at the dependencies for torch=1.13.0:
[package.dependencies]
nvidia-cublas-cu11 = "11.10.3.66"
nvidia-cuda-nvrtc-cu11 = "11.7.99"
nvidia-cuda-runtime-cu11 = "11.7.99"
nvidia-cudnn-cu11 = "8.5.0.96"
typing-extensions = "*"
Et voilà, we found our troublemaker package. So we checked metadata.files and saw:
[metadata.files]
nvidia-cudnn-cu11 = [
{file = "nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl", hash = "sha256:402f40adfc6f418f9dae9ab402e773cfed9beae52333f6d86ae3107a1b9527e7"},
{file = "nvidia_cudnn_cu11-8.5.0.96-py3-none-manylinux1_x86_64.whl", hash = "sha256:71f8111eb830879ff2836db3cccf03bbd735df9b0d17cd93761732ac50a8a108"},
]
Only manylinux wheels, and no win_amd64 like the others.
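This can be double-checked against PyPI without reading the lock file; a quick throwaway script (illustrative only, using PyPI's JSON release API) lists the wheel filenames that actually exist for that exact version:
# Illustrative check: list the wheels PyPI serves for nvidia-cudnn-cu11 8.5.0.96.
import json
import urllib.request

url = "https://pypi.org/pypi/nvidia-cudnn-cu11/8.5.0.96/json"
with urllib.request.urlopen(url) as resp:
    release = json.load(resp)
for f in release["urls"]:
    print(f["filename"])  # only *manylinux1_x86_64* wheels, no win_amd64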
So we were curious: what are the dependencies for version 1.12.1?
poetry add torch=1.12.1
and surprise:
Updating dependencies
Resolving dependencies...
Writing lock file
Package operations: 1 install, 0 updates, 5 removals
• Removing nvidia-cublas-cu11 (11.10.3.66)
• Removing nvidia-cuda-nvrtc-cu11 (11.7.99)
• Removing nvidia-cuda-runtime-cu11 (11.7.99)
• Removing setuptools (65.5.0)
• Removing wheel (0.37.1)
• Installing torch (1.12.1)
The CUDA dependencies are gone. We are well aware that PyTorch with CUDA in Poetry is a mess overall because:

But this problem also appears when pytorch-lightning or any other tool depends on pytorch, and resolving such conflicts was absolutely catastrophic, since the newer torch version kept getting auto-resolved every time…
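A possible stopgap is to keep the resolver away from 1.13.0 by constraining torch explicitly. A minimal pyproject.toml sketch (the bounds and the pytorch-lightning entry are purely illustrative):
[tool.poetry.dependencies]
python = "^3.10"
torch = ">=1.12.1,<1.13"       # keep the resolver away from the broken 1.13.0 metadata
pytorch-lightning = "^1.8"     # stand-in for any downstream consumer of torch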
Comments
Detecting “bad” (in this case, mismatched) metadata is its own challenge; we will not have a robust and performant way to do so until PEP 658 is implemented by indexes. At that point, yes, we can fail with an accurate, detailed, and descriptive error message when this happens.
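(For context, a rough sketch of what a PEP 658-style lookup could look like against PyPI's JSON Simple API once an index serves per-file metadata; the field names follow PEP 691/658 and the snippet is purely illustrative, not Poetry code.)
# Illustrative only: check whether an index advertises separately fetchable
# wheel metadata (PEP 658), which a resolver could use to validate metadata
# without downloading whole artifacts.
import json
import urllib.request

req = urllib.request.Request(
    "https://pypi.org/simple/nvidia-cudnn-cu11/",
    headers={"Accept": "application/vnd.pypi.simple.v1+json"},
)
with urllib.request.urlopen(req) as resp:
    index = json.load(resp)
for f in index["files"]:
    has_meta = f.get("core-metadata") or f.get("data-dist-info-metadata")
    print(f["filename"], "separate metadata:", bool(has_meta))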
I know that Poetry can't do anything about bad metadata from a package, but maybe this can be turned into a feature: if bad metadata appears, the error message also includes information about the complete dependency tree. That would help analyze the issue faster. We were only able to figure this out thanks to a huge time investment sweeping through the poetry.lock.
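In the meantime, a small throwaway helper like the following (hypothetical, not part of Poetry; it just parses poetry.lock as TOML) would have saved us most of that sweeping by printing which locked packages pull in a given dependency:
# Hypothetical helper: print reverse dependencies of a package from poetry.lock.
import sys

try:
    import tomllib  # Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib  # pip install tomli on Python 3.10

target = sys.argv[1] if len(sys.argv) > 1 else "nvidia-cudnn-cu11"
with open("poetry.lock", "rb") as f:
    lock = tomllib.load(f)
for pkg in lock.get("package", []):
    deps = pkg.get("dependencies", {})
    if target in deps:
        print(f'{pkg["name"]} {pkg["version"]} -> {target} ({deps[target]})')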