
Backend-dependent PyTorch versions

See original GitHub issue
  • Poetry version: 1.2.0

  • Python version: 3.10.6

  • OS version and name: Arch Linux

  • I have searched the issues of this repo and believe that this is not a duplicate.

  • I have consulted the FAQ and blog for any relevant entries or release notes.

Issue

Hi!

I do deep learning and am currently trying to switch to Poetry for better dependency management 😄 I immediately ran into a problem when trying to install PyTorch:

  1. PyTorch versions are backend-dependent: the latest PyTorch release ships builds for, say, 20 different CUDA versions, and constraining everyone who clones the project to the same CUDA version makes no sense. As far as I can tell, Poetry cannot currently handle this (see the sketch just after this list).
  2. PyTorch is not on PyPI; more on that below.
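
To make point 1 concrete, here is a rough illustration with plain pip (the version number and the cu116/cpu indexes are assumed examples, not taken from the issue): the same torch release is published once per backend, distinguished only by a local version tag and hosted on a backend-specific index.

```sh
# Illustration only: one torch release, one wheel per backend, each on its own index.
pip install "torch==1.12.1+cu116" --extra-index-url https://download.pytorch.org/whl/cu116
pip install "torch==1.12.1+cpu"   --extra-index-url https://download.pytorch.org/whl/cpu
```

A pyproject.toml that pins any one of these local versions implicitly pins the backend for every contributor, which is the core of the complaint above.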

In order to solve the first problem (even if official support from Poetry would be appreciated), I went with the well-known light-the-torch, which automatically installs the right PyTorch build for the detected backend. The problem with this is that torch is not added to pyproject.toml afterwards, so on a subsequent poetry install XXX the torch package is very likely to be replaced by the torch from PyPI.
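
For reference, the light-the-torch workflow described above looks roughly like this (command names as documented in the light-the-torch README; the comments are a sketch of the reported behaviour, not of anything Poetry does):

```sh
pip install light-the-torch
ltt install torch   # picks the wheel matching the detected CUDA/CPU backend
# pyproject.toml and poetry.lock are not updated, so a later `poetry add` /
# `poetry install` can resolve torch again and overwrite this installation.
```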

Also, specifying the PyTorch wheel URL in the .toml is not a solution either, since that URL also depends on the local backend. In a PyTorch project, the common factor is the PyTorch package version, not the backend it runs on; the backend-specific builds merely let the project run on a variety of configurations.
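
For illustration, this is roughly what the URL-in-pyproject.toml approach looks like (the wheel URL is a made-up but representative example): the file name itself encodes the backend (cu116), the CPython version (cp310) and the platform, which is exactly why it cannot be shared across machines.

```toml
[tool.poetry.dependencies]
# Hypothetical backend-specific wheel URL; cu116, cp310 and linux_x86_64 are all baked in.
torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-linux_x86_64.whl" }
```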

This means that Poetry is not currently compatible with PyTorch. I don't think declaring PyTorch a special case is a good idea: this release design simply exposes the need to compile a package for each particular software stack, so it is really a general problem that should be solved, IMO.

I’ll be happy to discuss below how Poetry must adapt to this design!

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 19 (10 by maintainers)

Top GitHub Comments

1 reaction
neersighted commented, Oct 2, 2022

In the same spirit, what if I want to install a custom built PyTorch version?

You can do this today with URL dependencies and markers (but, as markers do not include any facility to discriminate based on ML API, this doesn’t solve anything you couldn’t do already with the pytorch indexes).
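
For readers following along, a minimal sketch of the "URL dependencies and markers" approach mentioned above could look like the following (the wheel URLs are illustrative). Markers can discriminate on platform or Python version, but, as the comment notes, there is no marker for the CUDA/ROCm backend.

```toml
[tool.poetry.dependencies]
# Multiple-constraints dependency: one URL per environment marker.
torch = [
    { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-linux_x86_64.whl", markers = "sys_platform == 'linux'" },
    { url = "https://download.pytorch.org/whl/cpu/torch-1.12.1%2Bcpu-cp310-cp310-win_amd64.whl", markers = "sys_platform == 'win32'" },
]
```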

0 reactions
neersighted commented, Oct 2, 2022

If the wheel wasn’t installed by Poetry, it may be missing a PEP 610 marker, aka direct_url.json. That is to say, Poetry will only consider it the same torch version and not reinstall it if the marker exists and matches the URL that Poetry was configured with.
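
If it helps anyone checking their own environment, here is a small standard-library sketch (not part of Poetry) to see whether an installed torch carries that PEP 610 record:

```python
from importlib import metadata

# Look for the PEP 610 direct_url.json next to torch's dist-info metadata.
try:
    record = metadata.distribution("torch").read_text("direct_url.json")
except metadata.PackageNotFoundError:
    print("torch is not installed in this environment")
else:
    if record is None:
        print("no direct_url.json: torch was installed from an index, not a direct URL")
    else:
        print("direct_url.json contents:", record)
```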

This is getting fairly off topic and turning into more of a support discussion (and I think the original issue was more of a question than anything actionable anyway), so I’m migrating this to Discussions.

Read more comments on GitHub >

