Can't install e.g. new packaging and CUDA version of torch simultaneously #2683
I have figured out a workaround here. I can install

This leads to

which allows for any
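The code blocks in the comment above did not survive extraction. Purely as a hedged sketch of what such a workaround might look like (the wheel URL and file names here are hypothetical illustrations, not the commenter's actual files), it could resemble:

```
# requirements.in — hypothetical: point torch at a direct wheel URL
# instead of relying on the extra index
torch @ https://download.pytorch.org/whl/cu118/torch-2.1.2%2Bcu118-cp311-cp311-linux_x86_64.whl
python-can

# overrides.txt — hypothetical: pin torch so the resolver does not
# substitute the plain PyPI build (passed via uv's --override flag)
torch==2.1.2+cu118
```

This is only a reconstruction under stated assumptions; the actual workaround contents are not recoverable from the page.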
Yeah, just confirming that your summary is correct, and this is somewhat unsolved beyond the workaround you described in your second post. In the next version of uv, you shouldn't need the override file, since #2624 will correctly respect the local version specifier from the URL. So, you'll still need to use the direct wheel URLs, but you won't need the override.
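For context, the "local version specifier" mentioned here is the `+cu118` suffix defined by PEP 440. The `packaging` library (the same package the issue is trying to upgrade) parses it like so:

```python
from packaging.version import Version

v = Version("2.1.2+cu118")

print(v.base_version)  # "2.1.2" — the public release portion
print(v.local)         # "cu118" — the local version segment
# Per PEP 440, a local version sorts above the same bare release:
print(v > Version("2.1.2"))  # True
```

This is why respecting the local segment matters: `torch==2.1.2+cu118` and `torch==2.1.2` are distinct, ordered versions, not interchangeable builds.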
The next version of uv will include an opt-in flag to allow this: #2815
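In later uv releases this opt-in surfaced as the `--index-strategy` option. A sketch of how it could be used here (assuming a recent uv; the default strategy pins each package to the first index that provides it):

```shell
# Let the resolver consider candidates from every configured index,
# so packaging can come from PyPI while torch comes from the CUDA index.
uv pip compile requirements.in \
    --extra-index-url https://download.pytorch.org/whl/cu118 \
    --index-strategy unsafe-best-match \
    -o requirements.txt
```

As the strategy name suggests, this reopens the dependency-confusion surface that the default behavior guards against, which is why it is opt-in.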
This is related to the "Packages that exist on multiple indexes" section in the README, to this closed ticket, and to the idea of pinning packages to repositories.
The pip compatibility README says "in most cases, swapping out `pip install` for `uv pip install` should 'just work'" unless you "stray from common pip workflows"; I opened this as a new ticket because it's a specific example of not being able to get `uv` to work at all with a `requirements.in` that works with pip, and my impression is that installing a CUDA version of torch is a reasonably common workflow.

The issue I am facing is that `torch` specifies a bunch of packages in its CUDA `extra-index-url`, and you can also install vanilla `torch` from pip. So there's no way, as far as I can see, to install a new version of any of the packages here while also installing `torch==2.1.2+cu118`. Or if there is, please let me know!

Newest `uv` release:

Install python-can in a venv. It depends on `packaging>=23.1`:
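The commands for this step were not captured in the page; a sketch of the likely reproduction (assuming uv is installed and on `PATH`):

```shell
# Create a virtual environment and install python-can with uv.
# python-can declares packaging>=23.1 as a dependency.
uv venv
uv pip install python-can
```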
Install the CUDA version of torch, which requires this extra-index-url:
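The concrete command is likewise missing; for the CUDA 11.8 build mentioned in the issue it would plausibly be:

```shell
# Install the CUDA 11.8 build of torch from the PyTorch wheel index.
uv pip install torch==2.1.2+cu118 \
    --extra-index-url https://download.pytorch.org/whl/cu118
```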
Now try to compile both of these in a requirements file:
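A sketch of the failing compile step, assuming a `requirements.in` that combines the two requirements above:

```shell
# requirements.in listing both packages (written inline for clarity):
printf 'python-can\ntorch==2.1.2+cu118\n' > requirements.in

# Compile with the PyTorch extra index; this is where resolution fails,
# because each package is pinned to a single index.
uv pip compile requirements.in \
    --extra-index-url https://download.pytorch.org/whl/cu118 \
    -o requirements.txt
```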
It doesn't work with the `extra-index-url`, because the PyTorch index provides only `packaging==22.0`:
It doesn't work in the other index order either, because now we can't find our CUDA torch version: