Partial metadata gathering from legacy sources based on markers/constraints
See original GitHub issue

- Poetry version: 1.2.0
- Python version: 3.10
- OS version and name: Manjaro (Linux)
- I am on the latest stable Poetry version, installed using a recommended method.
- I have searched the issues of this repo and believe that this is not a duplicate.
- I have consulted the FAQ and blog for any relevant entries or release notes.
- If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option) and have included the output below.
Issue
My dependency configuration looks like this:
```toml
[tool.poetry.dependencies]
python = "~3.10"
numpy = { version = "^1.23.2", source = "pypi" }
torch = { version = "1.12.1", source = "torch", python = "~3.10", markers = "python_version ~= '3.10' and sys_platform == 'linux'" }
torchvision = { version = "*", source = "torch", python = "~3.10" }

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true
```
…yet `poetry lock` attempts to download wheels for older Python versions and other platforms as well, for example:

```
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp38-cp38-win_amd64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp39-cp39-linux_x86_64.whl
```
I tried this with and without the marker definition, and with and without the `python=` constraint inside the package definitions for torch and torchvision, but the result was always the same.
This seems related to https://github.com/python-poetry/poetry/pull/4958, but that PR was merged, so as far as I understand this should already work as expected.

That is correct, though it is worth noting that Poetry has not grown PEP 658 support yet as there are no real-world implementors. PyPI gaining support would likely be the impetus for Poetry to support this as we would have a widely used and battle-tested version to validate Poetry against.
That being said, nothing stops you from creating fixtures based on the current PyPI fixtures (as a sort of speculation based on the spec as to what the ‘real’ implementation will look like) and implementing/testing PEP 658 in Poetry ‘ahead of time’ 😄
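To make that suggestion concrete, here is a minimal sketch of what such a speculative fixture could look like. It is based purely on the PEP 691 and PEP 658 spec text, not on any live index; the file name, URL, and hashes below are placeholders.

```python
# Speculative PEP 691-style index fixture for a single file, based only on
# the spec text. All values are placeholders for illustration.
TORCH_INDEX_FIXTURE = {
    "meta": {"api-version": "1.0"},
    "name": "torch",
    "files": [
        {
            "filename": "torch-1.12.1+cu116-cp310-cp310-linux_x86_64.whl",
            "url": "https://example.invalid/whl/torch-1.12.1+cu116-cp310-cp310-linux_x86_64.whl",
            "hashes": {"sha256": "0" * 64},
            # PEP 658: the presence of this key advertises that the file's
            # METADATA is served separately at "<url>.metadata", so a
            # resolver can read dependencies without downloading the wheel.
            "dist-info-metadata": {"sha256": "0" * 64},
        }
    ],
}
```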
Also yes, Poetry is directly being referred to in that paragraph, though I think PDM has similar design-driven requirements for metadata.
Poetry will download every distfile from a legacy package source in order to gather metadata, even if that package will not necessarily be installed due to markers. This is related to the nature of the solver and of the PEP 508 API: essentially, Poetry will not currently exclude metadata (like hashes) based on markers, as packages are gathered before markers are even considered, and releases in the `poetry.lock` file are meant to be complete and represent the release as stored in the index. I'm going to ping @dimbleby and @radoering to get their opinion on whether pushing markers to the repository stage is viable; however, I don't think it is trivial.
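To make the cost concrete, here is a rough illustration (not Poetry's actual implementation) of what gathering metadata from a legacy PEP 503 index boils down to: the dependency list lives in the METADATA file inside the wheel, so the whole distfile has to be fetched and unzipped just to read it.

```python
import io
import zipfile

import requests

# Rough illustration only, not Poetry's code: on a legacy index the
# METADATA file is only available inside the wheel archive itself.
url = (
    "https://download.pytorch.org/whl/cu116/"
    "torch-1.12.1%2Bcu116-cp38-cp38-win_amd64.whl"
)
wheel_bytes = requests.get(url).content  # a download in the gigabyte range for torch
wheel = zipfile.ZipFile(io.BytesIO(wheel_bytes))
metadata_name = next(
    n for n in wheel.namelist() if n.endswith(".dist-info/METADATA")
)
print(wheel.read(metadata_name).decode())  # Requires-Dist lines, etc.
```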
Looking closely at your constraints, you are trying to do what #4956 proposes to implement. If/when that is finished, it will be a better path for you in the long term. It's also worth noting that PEP 658 and PEP 691, when implemented together by third-party indexes, will solve this by removing the need to download files to gather metadata/hashes, letting us be as performant with third-party sources as we are with PyPI.
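As a hedged sketch of why those two PEPs together remove the downloads: PEP 691 lets a client request the file list as JSON, and PEP 658 exposes each file's METADATA at a predictable sibling URL, so only a few kilobytes need to be fetched per file instead of the full wheel. The index URL below is a placeholder, not a real endpoint.

```python
import requests

# Sketch of PEP 691 + PEP 658 metadata retrieval, per the specs.
# "example-index.invalid" is a placeholder; no real index is implied.
resp = requests.get(
    "https://example-index.invalid/simple/torch/",
    headers={"Accept": "application/vnd.pypi.simple.v1+json"},  # PEP 691
)
for f in resp.json()["files"]:
    if f.get("dist-info-metadata"):  # PEP 658 signal for this file
        # METADATA is served at "<file URL>.metadata" and is only a few
        # kilobytes, versus downloading the whole distfile.
        metadata = requests.get(f["url"] + ".metadata").text
        print(f["filename"], len(metadata))
```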