Support custom venv locations for uv sync #5229

Closed
andriygm opened this issue Jul 19, 2024 · 26 comments · Fixed by #6834
@andriygm

uv pip install supports the VIRTUAL_ENV environment variable, allowing package installation into venvs not located in the same directory as the project. uv sync would also benefit from being able to install packages into external venvs (perhaps as a flag?)

Since uv sync also creates venvs if they don't exist, it should do so in the custom location.
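For illustration, here's the existing behavior next to the requested one (the second command is aspirational; uv sync has no such support at the time of writing, and the package name is arbitrary):

# works today: uv pip install targets the environment named by VIRTUAL_ENV
VIRTUAL_ENV=/opt/venv uv pip install httpx

# requested: uv sync installing into (and, if missing, creating) an external venv
VIRTUAL_ENV=/opt/venv uv sync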

@andriygm andriygm changed the title Add support for custom venv locations for uv sync Support custom venv locations for uv sync Jul 19, 2024
@zanieb
Member

zanieb commented Jul 19, 2024

Related astral-sh/rye#1211

@zanieb zanieb added projects Related to project management capabilities needs-decision Undecided if this should be done labels Jul 19, 2024
@nazq

nazq commented Jul 20, 2024

Related astral-sh/rye#1211

+1 and thanks for linking the related rye enhancement

@charliermarsh charliermarsh added the preview Experimental behavior label Aug 5, 2024
@zanieb
Member

zanieb commented Aug 5, 2024

@charliermarsh I don't think this should be in scope for the first round of stabilizations.

@zanieb zanieb removed the preview Experimental behavior label Aug 20, 2024
@tiangolo
Contributor

Here's a use case where it would be useful: I was trying to migrate https://github.com/fastapi/full-stack-fastapi-template/ to uv.

I mount the local development directory inside the Docker container as a volume: https://github.com/fastapi/full-stack-fastapi-template/blob/master/docker-compose.override.yml#L58

This allows fast iteration on the code, as it only takes a reload of the server instead of a re-build of the Docker image to try a change.

If I have a local .venv directory, it will end up mounted as well. But inside the container, the file links are not resolved.

I tried using UV_SYSTEM_PYTHON and VIRTUAL_ENV to move it to some root directory (so that it wouldn't be mounted), but uv would still try to use the venv in .venv. And if I remove it from inside the container, the in-container uv would create a new venv in .venv that would also show up in the local (mounted directory).


Somewhat related, I would like to be able to use Python directly, not only through uv run. For example, in that project, I currently use a base Docker image that comes with its own entrypoint and command. But those wouldn't be able to run the programs without activating the uv venv in some way (or using the system Python).

@charliermarsh
Member

Makes a ton of sense, thanks @tiangolo. I think this is pretty high-priority now that the release is out.

Somewhat related, I would like to be able to use Python directly, not only through uv run.

Here, you're referring to the installed Python, like outside of the virtualenv, is that right? (Since you can always source .venv/bin/activate without going through uv run.)

@tiangolo
Contributor

Makes a ton of sense, thanks @tiangolo. I think this is pretty high-priority now that the release is out.

🚀 🎉

Here, you're referring to the installed Python, like outside of the virtualenv, is that right? (Since you can always source .venv/bin/activate without going through uv run.)

Yep, this part was a misunderstanding and mistake from my side. 😅

I was misinterpreting the docs. And then I was trying something that wouldn't work.

But this worked:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.10
# Install uv
COPY --from=ghcr.io/astral-sh/uv:0.3.0 /uv /bin/uv

WORKDIR /app/

ENV VIRTUAL_ENV=/app/.venv
# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

# Maybe copy lock file in case it doesn't exist in the repo
COPY ./pyproject.toml ./uv.lock* /app/

RUN uv sync

RUN . $VIRTUAL_ENV/bin/activate

Note: it seems the build steps with RUN are run through sh, not bash, so the source command is not available, only the . syntax (. .venv/bin/activate).

Full Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.10
# Install uv
COPY --from=ghcr.io/astral-sh/uv:0.3.0 /uv /bin/uv

WORKDIR /app/

ENV VIRTUAL_ENV=/app/.venv
# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

# Maybe copy lock file in case it doesn't exist in the repo
COPY ./pyproject.toml ./uv.lock* /app/

# Allow installing dev dependencies to run tests
ARG INSTALL_DEV=false

RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then uv sync ; else uv sync --no-dev ; fi"

RUN . $VIRTUAL_ENV/bin/activate

ENV PYTHONPATH=/app

COPY ./scripts/ /app/

COPY ./alembic.ini /app/

COPY ./prestart.sh /app/

COPY ./tests-start.sh /app/

COPY ./app /app/app

@lukewiwa

lukewiwa commented Aug 21, 2024

To add to this: being able to configure the .venv location would also help when using Docker bind mounts during the build phase (https://docs.docker.com/build/building/best-practices/#add-or-copy). Currently this is impossible with uv, since the bind mount is read-only and uv will try to write to the mount and error out.

e.g.

FROM python:3.12

COPY --from=ghcr.io/astral-sh/uv:0.3.0 /uv /usr/local/bin/uv

# Assuming a python project in the directory `project_name` with a `pyproject.toml` file
ENV UV_SYSTEM_PYTHON=true VIRTUAL_ENV=/opt/.venv
RUN --mount=type=bind,source=project_name,target=/project_name \
  uv sync

Which will result in the following error

error: failed to create directory `/project_name/.venv`
Caused by: Read-only file system (os error 30)

@zanieb
Member

zanieb commented Aug 21, 2024

@tiangolo just a heads up that providing a python shim is tracked in #6265

Note we also do cover some of this in our Docker integration guide but clearly there's room for improvement in the Docker integration story.

@tiangolo
Contributor

Thanks @zanieb! I think what I needed would be covered in some way by activating the environment. I would like to be able to use UV_SYSTEM_PYTHON=true, but at least putting the virtual environment in another location would work.


I personally don't like shims; I can't use which python with shims, and that's the main reason I ended up not using other tools that used/needed them. 😅


About Docker, yep! Actually great docs! That's what I was basing my work on. Maybe the detail of RUN . $VIRTUAL_ENV/bin/activate could be added, don't know. 🤔

But anyway, great job on the docs already. 👏

@sbidoul

sbidoul commented Aug 21, 2024

Related, re sync in dockerfile: #4028 (comment)

@adiberk

adiberk commented Aug 22, 2024

@tiangolo I'm actually having a similar issue, except we install into the system Python in the Dockerfile - so I have made a similar request to allow disabling the forced install into a virtual env. Here is the link.
However, doing RUN . $VIRTUAL_ENV/bin/activate doesn't seem to help in my case.
When I run the docker-compose command I get an error /usr/local/bin/python: No module named ..., which would indicate the env isn't activated.

(Side note - absolutely love fastapi)

Happy it is working for you!

Also love the project team!

@zanieb
Member

zanieb commented Aug 22, 2024

@adiberk — yeah that won't work. See our documentation for a recommendation on how to activate the environment properly. tl;dr:

# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"

If you're having problems with that we're happy to help.

@adiberk

adiberk commented Aug 22, 2024

@zanieb Thank you! I figured out a way in the end
If I do the uv sync and venv activation inside a different workdir like /opt,
and set the path to the venv that is in /opt, it works exactly as it should. (I can then go to the /app workdir, copy all the code, and run everything successfully.)

I assume my issue is that the virtual env is getting messed up by the way we mount our volume as well as copy our folders in the Dockerfile (one or the other)

@zanieb
Member

zanieb commented Aug 22, 2024

Does adding the .venv to your project's .dockerignore fix that too?

@adiberk

adiberk commented Aug 25, 2024

Does adding the .venv to your project's .dockerignore fix that too?

It didn’t in my case.

@hauntsaninja
Contributor

Similar to #1495 (comment), you can symlink to solve this.

uv venv /app/.venv
ln -sf /app/.venv .venv
uv sync

@jklaiho

jklaiho commented Aug 26, 2024

Just to chime in with our use case: currently, we use pip-tools in our container images, where we have VIRTUAL_ENV=/opt/venv and PATH=/opt/venv/bin:$PATH. We have an intricate multi-stage Dockerfile that populates /opt/venv in an early build stage using pip-sync on a compiled requirements file, and it's COPY'd wholesale to the actual runtime stage with no build-stage dependencies present. It would be nice to run uv sync instead of pip-sync in the build stage to achieve the same thing, migrate from a compiled requirements file to uv.lock, and change nothing else about how our system currently works.
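A rough sketch of that setup, for illustration (Python versions, file names, and stage names are placeholders, not our actual Dockerfile):

# build stage: populate /opt/venv from the compiled requirements file
FROM python:3.12 AS build
ENV VIRTUAL_ENV=/opt/venv PATH="/opt/venv/bin:$PATH"
RUN python -m venv /opt/venv && pip install pip-tools
COPY requirements.txt .
RUN pip-sync requirements.txt

# runtime stage: copy the populated venv wholesale, with no build-stage dependencies present
FROM python:3.12-slim
ENV VIRTUAL_ENV=/opt/venv PATH="/opt/venv/bin:$PATH"
COPY --from=build /opt/venv /opt/venv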

@zanieb zanieb self-assigned this Aug 26, 2024
@zanieb
Member

zanieb commented Aug 26, 2024

Does adding the .venv to your project's .dockerignore fix that too?

It didn’t in my case.

Can you elaborate here? It still mounted .venv into the container?

@adiberk

adiberk commented Aug 27, 2024

I can retest using your suggestion, however I recall in my original tests (before these comments) I had actually done that (with dockerignore) and it didn’t help. Post these comments, I simply tested by removing the .venv folder from my local and tested the docker setup suggested and still had issues.

From what I can tell, this could be because of how we copy the data into Docker and how the volumes are mounted. So this might be specific to my project.

The only thing that worked for me was running uv sync with my lockfile in a different workdir, and then putting that env on my PATH.

@zanieb
Member

zanieb commented Aug 27, 2024

@adiberk I explored this, and .dockerignore only applies to the build, so you need to include another volume clause if you're doing a bind mount of your project at runtime, e.g. --volume .:/app --volume /app/.venv
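Spelled out as a docker run invocation (the image name is a placeholder), the second, anonymous volume masks the host's .venv inside the container while the project itself stays bind-mounted:

docker run --rm --volume .:/app --volume /app/.venv my-app-image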

See the new documentation for details.

If you share a complete minimal example, I'm happy to help.

@edmorley
Contributor

edmorley commented Aug 27, 2024

Another use-case for having a custom venv location outside of the project directory is CNBs (Cloud Native Buildpacks).

With CNBs, the layer/caching model is different from Dockerfile's, so standard Dockerfile best practices don't apply. For example, with CNBs:

  • buildpacks aren't run as root
  • layers are created as directories under /layers/<buildpack_name>/<layer_name> which are separate from each other, and also separate from the app source code
  • layers can be marked with various properties that control whether they are cached or made available in the final run-image or not

These properties allow for several benefits such as base image "rebase", finer grained cache re-use/invalidation, smaller run-images (via excluding build time only deps), the ability to support multiple languages/ecosystems (Docker multi-stage builds get messy fast when trying to build a multi-language app, since it's hard to re-use official Docker images etc).

However, they do mean that the various components have to be installed into separate directory trees (rather than relying on an overlay filesystem to keep files contributed by different stages of the build separate).

For example, when using pip, a possible layout that a Python buildpack might use could be:

  • /layers/<buildpack_name>/python:
    • A layer containing the custom relocated Python installation (compiled in advance and downloaded from S3)
  • /layers/<buildpack_name>/pip:
    • A layer containing pip, as a user site-packages install (using PYTHONUSERBASE to set the location)
    • This layer can then be marked as launch=false to exclude it from the run image
  • /layers/<buildpack_name>/venv:
    • A layer containing the application dependencies in a PEP-405 compliant venv
    • PIP_PYTHON points at this layer, so pip install will install into it rather than the global Python installation
  • /workspace (the location of which is end-user customisable, and often changed to eg /app)
    • The layer containing the app source code

Each layer can then have its own cache invalidation logic. Any layers that are unchanged don't have to be rebuilt or even pushed to the remote. More at:
https://buildpacks.io/docs/for-buildpack-authors/concepts/
https://github.com/buildpacks/spec/blob/main/buildpack.md

Use of a venv for the app dependencies is needed due to:

  1. A system install not allowing keeping the dependencies in a separate layer for separate cache invalidation/re-use
  2. --user installs being problematic for relocated Python (xref Support --user flag from pip #2077 (comment))
  3. PYTHONPATH tricks being problematic due to stdlib shadowing (paths on PYTHONPATH take precedence over the stdlib, which can cause hard-to-debug issues from e.g. outdated backport packages in the app's transitive dependency tree)

As such, when we add support for uv in the future, we'll need a way to force the venv to be created outside the project directory and in its own layer.

I'm mostly indifferent as to whether we have to create the venv ourselves or whether uv does that for us (so long as we can control the exact path uv creates; a generated venv path name derived from the project path/metadata like Poetry does would be more hassle). If we manually create the venv ourselves, I also don't mind too much how we'd tell uv to use that venv - for pip we'll soon be using PIP_PYTHON (since pip doesn't check VIRTUAL_ENV), and for the upcoming Poetry support we instead set VIRTUAL_ENV.
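For reference, a minimal sketch of the pip-based flow described above (the layer path is illustrative, and we'd point PIP_PYTHON at the interpreter inside the layer venv):

python -m venv /layers/my_buildpack/venv
export PIP_PYTHON=/layers/my_buildpack/venv/bin/python
# installs into the layer venv rather than the project directory or global site-packages
pip install -r requirements.txt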

@adiberk

adiberk commented Aug 27, 2024

@zanieb The updated guide is pretty thorough and helpful! I was actually able to get it working using a watch config, which I guess is fine, though it seems inconsistent at times and may not work as well as gunicorn's hot-reload setup!

I will say that I did try the new setup without the watch config, mounting .:/app, but the app failed, seemingly due to losing my env.

Regardless - really appreciate your help in this!

@zanieb
Member

zanieb commented Aug 27, 2024

@edmorley thank you for your thorough response. I take that feedback pretty seriously, and it makes me think we'll need to expose some way to allow custom virtual environment locations. The difficulty will be encouraging users who would be better off not customizing the location to stay on the happy path.

@adiberk I'm glad it helped! What's inconsistent about the watch setup? What happened with the mount? Feel free to open a targeted issue if something from the documentation didn't work out and we can chat over there.

@hynek
Contributor

hynek commented Aug 28, 2024

For those who care about fast multi-stage builds, I've written down my own workflow here: https://hynek.me/articles/docker-uv/

tl;dr the lack of this feature in my context is awkward but easy to work around.

zanieb added a commit that referenced this issue Sep 3, 2024
…ONMENT` (#6834)

Allows configuration of the (currently hard-coded) path to the virtual
environment in projects using the `UV_PROJECT_ENVIRONMENT` environment
variable.

If empty, we'll ignore it. If a relative path, it will be resolved
relative to the workspace root. If an absolute path, we'll use that.
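For illustration, in a Dockerfile along the lines of those earlier in this thread, that looks roughly like this (paths and image tags are placeholders):

FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app

# place the project environment outside the (possibly bind-mounted) project directory
ENV UV_PROJECT_ENVIRONMENT=/opt/venv
ENV PATH="/opt/venv/bin:$PATH"

COPY pyproject.toml uv.lock /app/
RUN uv sync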

This feature targets use in Docker images and CI. The variable is
intended to be set once in an isolated system and used for all uv
operations.

We do not expose a CLI option or configuration file setting — we may
pursue those later but I see them as lower priority. I think a
system-level environment variable addresses the most pressing use-cases
here.

This doesn't special-case the system environment, which means that you
can use this to write to the system Python environment. I would
generally strongly recommend against doing so. The insightful comment
from @edmorley at
#5229 (comment)
provides some context on why. More generally, `uv sync` will remove
packages from the environment by default. This means that if the system
environment contains any packages relevant to the operation of the
system (that are not dependencies of your project), `uv sync` will break
it. I'd only use this in Docker or CI, if anywhere. Virtual environments
have lots of benefits, and it's only [one line to "activate"
them](https://docs.astral.sh/uv/guides/integration/docker/#using-the-environment).
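That one line, for reference, as shown in the linked guide and earlier in this thread:

ENV PATH="/app/.venv/bin:$PATH"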

If you are considering using this feature to use Docker bind mounts for
developing in containers, I would highly recommend reading our [Docker
container development
documentation](https://docs.astral.sh/uv/guides/integration/docker/#developing-in-a-container)
first. If the solutions there do not work for you, please open an issue
describing your use-case and why.

We do not read `VIRTUAL_ENV` and do not have plans to at this time.
Reading `VIRTUAL_ENV` is high-risk, because users can easily leave an
environment active and use the uv project interface today. Reading
`VIRTUAL_ENV` would be a breaking change. Additionally, uv is
intentionally moving away from the concept of "active environments" and
I don't think syncing to an "active" environment is the right behavior
while managing projects. I plan to add a warning if `VIRTUAL_ENV` is
set, to avoid confusion in this area (see
#6864).

This does not directly enable centrally managed virtual environments. If
you set `UV_PROJECT_ENVIRONMENT` to an absolute path and use it across
multiple projects, they will clobber each other's environments. However,
you could use this with something like `direnv` to achieve "centrally
managed" environments. I intend to build a prototype of this eventually.
See #1495 for more details on this use-case.
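As a purely hypothetical sketch of that direnv idea (untested; the path scheme is made up for illustration):

# .envrc: give each project its own environment under a shared cache directory
export UV_PROJECT_ENVIRONMENT="$HOME/.cache/uv-envs/$(basename "$PWD")"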

Lots of discussion about this feature in:

- astral-sh/rye#371
- astral-sh/rye#1222
- astral-sh/rye#1211
- #5229
- #6669
- #6612

Follow-ups:

- #6835 
- #6864
- Document this in the project concept documentation (can probably
re-use some of this post)

Closes #6669
Closes #5229
Closes #6612
@tiangolo
Contributor

I came back just to say: thanks for writing this guide! https://docs.astral.sh/uv/guides/integration/docker/#configuring-watch-with-docker-compose 🎉

I didn't know about Docker Compose's watch, that solves my problems even better than what I had (bind mounts). I learned something, thanks! 🤓 🚀

@zanieb
Member

zanieb commented Sep 22, 2024

Great to hear! Thank you!
