
buildx bake file doesn't support new ulimit directives #2241

Closed
ViliusS opened this issue Feb 5, 2024 · 11 comments · Fixed by #2242

Comments

@ViliusS

ViliusS commented Feb 5, 2024

Description

Not sure if this is a bug or just a feature oversight.

compose-spec/compose-spec#433 implemented support for ulimit directives in the Docker Compose file specification; however, if I try to use the new directives in a Buildx bake file, it produces an error.

services:
  mydefaultbuild:
    build:
      ulimits:
        nofile:
          soft: 65536
          hard: 65536
$ docker buildx bake
[+] Building 0.0s (1/1) FINISHED                                                                         docker:default
 => [internal] load local bake definitions                                                                         0.0s
 => => reading docker-compose.yaml 139B / 139B                                                                     0.0s
ERROR: validating : services.mydefaultbuild.build Additional property ulimits is not allowed
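
For context, the equivalent target in bake's HCL format would look something like the sketch below. Hedged: the `ulimits` attribute and its `name=soft:hard` value format are taken from the fix proposed later in this thread, and were not supported by bake at the time of this report.

```hcl
# Hypothetical bake HCL equivalent of the compose snippet above.
# The "ulimits" attribute was not supported by bake when this was reported.
target "mydefaultbuild" {
  context = "."
  ulimits = ["nofile=65536:65536"]
}
```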
@crazy-max
Member

This is not yet supported; we need to think of a proper way to implement this with bake, see #790 (comment). It might also make more sense to have this attribute in the Dockerfile.

@crazy-max
Member

crazy-max commented Feb 5, 2024

ERROR: validating : services.mydefaultbuild.build Additional property ulimits is not allowed

Oh, actually this error comes from the compose-file validation in bake; that should be solved by #2205, but ulimits will still not be handled by the current bake implementation.

@ViliusS
Author

ViliusS commented Feb 5, 2024

Thank you for fast response.

I don't think ulimit belongs in the Dockerfile. I see it as an environment-specific directive that is better suited to a compose or HCL file, so that it works properly with Compose profiles, etc. You can check docker/compose#11167 for the initial reasoning behind ulimit support in the compose file.

For now I'm using YAML file with docker compose build, but it would be great to have a proper bake support.

@crazy-max
Member

crazy-max commented Feb 8, 2024

Opened #2242 if you want to try it:

$ docker buildx bake --set *.output=/tmp https://github.com/crazy-max/buildx.git#bake-ulimits-shmsize
$ /tmp/buildx bake

@thaJeztah
Member

LOL, saw this ticket and thought it was not yet supported by BuildKit, so went looking at my epic (moby/moby#40379), but found that this was implemented for BuildKit at some point.

It might also make more sense to have this attr in the dockerfile.

I don't think ulimit belongs to Dockerfile.

Also on the fence about whether this would make sense in the Dockerfile itself 🤔 I guess the "upside" would be allowing a specific stage to declare that it requires specific limits, but OTOH that would imply the Dockerfile could escalate / gain additional resources (likely not something desirable), and it likely wouldn't be possible anyway if it tried to gain higher limits than are available (although in that case it could potentially give a more specific error that the Dockerfile requires certain limits).

@ViliusS
Author

ViliusS commented Feb 9, 2024

Opened #2242 if you want to try it:

$ docker buildx bake --set *.output=/tmp https://github.com/crazy-max/buildx.git#bake-ulimits-shmsize
$ /tmp/buildx bake

Hi, I can confirm that setting nofile in docker-compose.yaml and running through your buildx executable works!

One caveat though! It looks like neither passing --ulimit to the docker build command nor using it in a compose file works on Docker Engine >= 25.0 for me anymore; it only works on version 24 and lower. I didn't have time to fully verify this yet, but it could be due to these changes in the engine.

EDIT: by "no longer works for me on 25.0" I meant that I cannot verify whether it works or not, because my use case specifically requires infinity or very large nofile numbers to fail the build. As I understand it, 25.0 now caps this limit at 524288, and no matter what I could not increase my test sample configuration past that limit.

@crazy-max
Member

crazy-max commented Feb 9, 2024

One caveat though! It looks like neither passing --ulimit to the docker build command nor using it in a compose file works on Docker Engine >= 25.0 for me anymore; it only works on version 24 and lower. I didn't have time to fully verify this yet, but it could be due to these changes in the engine.

EDIT: by "no longer works for me on 25.0" I meant that I cannot verify whether it works or not, because my use case specifically requires infinity or very large nofile numbers to fail the build. As I understand it, 25.0 now caps this limit at 524288, and no matter what I could not increase my test sample configuration past that limit.

Interesting, can you try with a container builder?

$ docker buildx create --name test
$ /tmp/buildx --builder test bake

@crazy-max
Member

crazy-max commented Feb 9, 2024

I tried on Docker 25 with a simple Dockerfile and seems to work for me:

FROM busybox
RUN ulimit -n
$ docker version
Client:
 Cloud integration: v1.0.35+desktop.10
 Version:           25.0.3
 API version:       1.44
 Go version:        go1.21.6
 Git commit:        4debf41
 Built:             Tue Feb  6 21:13:00 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Desktop
 Engine:
  Version:          25.0.3
  API version:      1.44 (minimum version 1.24)
  Go version:       go1.21.6
  Git commit:       f417435
  Built:            Tue Feb  6 21:14:25 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
$ docker buildx build --ulimit "nofile=1024:1024" .
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 64B done
#1 DONE 0.1s

#2 [internal] load metadata for docker.io/library/busybox:latest
#2 DONE 0.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/2] FROM docker.io/library/busybox:latest
#4 DONE 0.0s

#5 [2/2] RUN ulimit -n
#5 0.307 1024
#5 DONE 0.3s

#6 exporting to image
#6 exporting layers 0.1s done
#6 writing image sha256:04088e084b1c323bae0eb07d74d3b6fb4114cae6ebbb17eceffb4070f9d98d33 done
#6 DONE 0.1s
$ docker buildx build .
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 64B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/busybox:latest
#2 DONE 0.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [1/2] FROM docker.io/library/busybox:latest
#4 CACHED

#5 [2/2] RUN ulimit -n
#5 0.271 1048576
#5 DONE 0.3s

#6 exporting to image
#6 exporting layers 0.1s done
#6 writing image sha256:f9791330b007ecafdcd84dc0c7a1afb37878fe9c0fa18e703af871ff9630dd9c done
#6 DONE 0.1s

@ViliusS
Author

ViliusS commented Feb 9, 2024

OK, I've been able to verify this on 25.0 too. It works 👍 I set LimitNOFILE=infinity in /usr/lib/systemd/system/docker.service, and this allowed setting ulimit through the bake script past 524288.
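
A note for anyone else applying this workaround: instead of editing the packaged unit file under /usr/lib directly (which a package upgrade can overwrite), the same override can be applied as a systemd drop-in. A sketch; the drop-in file name is an arbitrary choice:

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Service]
LimitNOFILE=infinity
```

followed by `systemctl daemon-reload` and `systemctl restart docker`.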

Given that the ulimit parameter is now capped by default, it loses some of its usefulness (at least for our use case, where we specifically want to cap it).

I think at least the ulimit parameter documentation needs to be updated everywhere, as it now silently accepts very large values that do not take effect:

$ docker buildx build --ulimit "nofile=10240000:10240000" --progress=plain --no-cache .
#0 building with "default" instance using docker driver

#8 [4/7] RUN ulimit -a
#8 0.146 time(seconds)        unlimited
#8 0.146 file(blocks)         unlimited
#8 0.146 data(kbytes)         unlimited
#8 0.146 stack(kbytes)        8192
#8 0.146 coredump(blocks)     unlimited
#8 0.146 memory(kbytes)       unlimited
#8 0.146 locked memory(kbytes) 8192
#8 0.146 process              unlimited
#8 0.146 nofiles              1024
#8 0.146 vmemory(kbytes)      unlimited
#8 0.146 locks                unlimited
#8 0.146 rtprio               0
#8 DONE 0.2s

@polarathene

Given that ulimit parameter is now capped by default it looses some usage point (at least for our use case where we specifically want to cap it).

It isn't capped by Docker anymore; it just inherits the default from the system, which will typically be 1024:524288. You can provide a drop-in unit override config, like you discovered, to raise the hard limit higher when that's necessary. That can be done for the particular service, like Docker, or system-wide (provided a service doesn't override it, which was previously the case with Docker).


I think at least ulimit parameter documentation needs to be updated everywhere as now it silently allows very large values, but they do not work

Looks like that shouldn't silently happen anymore and you'd get a failure. I had a quick glance over the docs, but they don't appear to mention any caveats or advice about raising the hard limit beyond the system-wide default.

Out of curiosity, what were you building that needed more than 524288 file descriptors? Or was the issue with the soft limit being 1024? (some software that requires a higher limit doesn't request it internally even though it should, so you need to raise it manually if 1024 is not enough for a process)
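
The soft/hard distinction discussed here can be demonstrated outside Docker: an unprivileged process may raise its own soft limit up to, but not past, the hard limit. A minimal sketch using Python's stdlib `resource` module (Linux/Unix only):

```python
import resource

# Read the current soft and hard limits for open file descriptors (nofile).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit;
# raising the hard limit itself requires privileges (CAP_SYS_RESOURCE).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

new_soft, new_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
assert new_soft == new_hard == hard
```

This is the same mechanism behind the 1024:524288 default mentioned above: software that needs more than 1024 descriptors is expected to raise its own soft limit, and only the 524288 hard ceiling requires a systemd/engine-level override.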

@ViliusS
Author

ViliusS commented Mar 12, 2024

Given that ulimit parameter is now capped by default it looses some usage point (at least for our use case where we specifically want to cap it).

It isn't capped by Docker anymore; it just inherits the default from the system, which will typically be 1024:524288. You can provide a drop-in unit override config, like you discovered, to raise the hard limit higher when that's necessary. That can be done for the particular service, like Docker, or system-wide (provided a service doesn't override it, which was previously the case with Docker).

It's probably a matter of what the system default was before the Docker changes. At least for me, on Red Hat Enterprise Linux, the root user had nofile set to infinity. And since Docker Engine runs as root by default, from my perspective nofile is now capped.

I think at least ulimit parameter documentation needs to be updated everywhere as now it silently allows very large values, but they do not work

Looks like that shouldn't silently happen anymore and you'd get a failure. I had a quick glance over the docs, but they don't appear to mention any caveats or advice about raising the hard limit beyond the system-wide default.

Out of curiosity, what were you building that needed more than 524288 file descriptors? Or was the issue with the soft limit being 1024? (some software that requires a higher limit doesn't request it internally even though it should, so you need to raise it manually if 1024 is not enough for a process)

I'm building a Docker image for an ancient database (SQL Anywhere from 1999). I'm not sure how they built their install and other CLI utilities, but I found that they enumerate all possible file descriptors in the system every time I execute them in a Dockerfile. In my environment that's infinity, and it takes ~2 minutes to complete. Capping nofile at build time in the Docker Compose or HCL file solved this. As mentioned, this is no longer needed for my use case, since it is now capped in the Docker Engine itself. Other cases may still be valid, though.

Anyway, as you can see from my previous comment, if one actually needs to increase nofile past the engine limit, it will just silently not work.
