Optimize local base image support: more efficient layer processing #1913

Closed
3 tasks done
TadCordle opened this issue Aug 16, 2019 · 4 comments · Fixed by #2144
TadCordle commented Aug 16, 2019

Follow-up to #1906.

Old ideas that we no longer plan to pursue:

  • When extracting the tar, process each entry in memory without writing it to disk (conflicts with the caching mechanism); see the sketch after this list
  • Instead of computing the layer digest first and then pushing to a registry, push base image layers while compressing on the fly (Add ExtractTarStep and SaveDockerStep #1906 (comment)) (conflicts with the caching mechanism)
  • Untar docker save output directly instead of writing it to disk first (i.e. docker save | tar -xf -) (doesn't seem to save time, and hurts code readability)
  • Use BEST_SPEED while gzipping instead of the default compression level (faster compression, but slower pushes since blobs are larger)
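For context, the first idea amounts to consuming the base image tarball as a stream and handling each entry in memory, which is exactly what conflicts with the cache (nothing ends up on disk for later reuse). A rough standalone sketch, assuming Apache Commons Compress and a hypothetical processEntry callback (not actual Jib code):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

class InMemoryTarProcessing {

  /** Iterates over tar entries as a stream; nothing is extracted to disk. */
  static void processTar(Path tarPath) throws IOException {
    try (InputStream fileIn = new BufferedInputStream(Files.newInputStream(tarPath));
        TarArchiveInputStream tarIn = new TarArchiveInputStream(fileIn)) {
      TarArchiveEntry entry;
      while ((entry = tarIn.getNextTarEntry()) != null) {
        if (entry.isDirectory()) {
          continue;
        }
        // Entry contents are read straight from tarIn (e.g. hashed or recompressed)
        // instead of being written to a temporary directory first.
        processEntry(entry.getName(), tarIn);
      }
    }
  }

  // Hypothetical callback standing in for whatever per-entry work is needed.
  static void processEntry(String name, InputStream contents) throws IOException {
    byte[] buffer = new byte[8192];
    while (contents.read(buffer) != -1) {
      // consume the entry's bytes
    }
  }
}
```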
TadCordle commented Nov 11, 2019

For the record, I tried "Untar docker save output directly instead of writing to disk first", and it doesn't seem to save much time, if any (my initial experiment appeared to save time, but I can't reproduce it, even after clearing Jib's cache). It also makes the code harder to read, since DockerClient, TarExtractor, and LocalBaseImageSteps need to be more tightly coupled, so I think we can skip this one.
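For anyone curious, the piped variant boils down to feeding the docker save stdout straight into the tar extractor instead of materializing the tarball first. A minimal standalone sketch (not the actual change; no path-traversal checks):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

class PipedDockerSave {

  /** Runs `docker save` and untars its stdout directly, with no intermediate tarball on disk. */
  static void saveAndExtract(String imageReference, Path destination)
      throws IOException, InterruptedException {
    Process docker = new ProcessBuilder("docker", "save", imageReference).start();
    try (TarArchiveInputStream tarIn = new TarArchiveInputStream(docker.getInputStream())) {
      TarArchiveEntry entry;
      while ((entry = tarIn.getNextTarEntry()) != null) {
        Path target = destination.resolve(entry.getName());
        if (entry.isDirectory()) {
          Files.createDirectories(target);
        } else {
          Files.createDirectories(target.getParent());
          // Copies just the current entry's bytes out of the shared tar stream.
          Files.copy(tarIn, target);
        }
      }
    }
    if (docker.waitFor() != 0) {
      throw new IOException("docker save failed");
    }
  }
}
```

The coupling problem is visible even in this sketch: whoever owns the Process also has to own the extraction loop, which is why DockerClient, TarExtractor, and LocalBaseImageSteps end up intertwined.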

@TadCordle

I also just tried using BEST_SPEED for layer compression. The results are about what I expected: compression itself runs faster, so tasks like jibDockerBuild and jibBuildTar are slightly faster overall, but the faster compression produces larger blobs, which makes pushing to a registry slower. Given that, it doesn't seem worth it, since it would actually slow down pushes to registries, especially for users with weaker connections.
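For concreteness, the experiment boils down to changing the deflater level used when gzipping layer blobs. A minimal sketch using the JDK's GZIPOutputStream (not the actual Jib code):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.Deflater;
import java.util.zip.GZIPOutputStream;

class FastGzip {

  /**
   * Wraps an output stream in a gzip stream that trades compression ratio for speed.
   * GZIPOutputStream uses Deflater.DEFAULT_COMPRESSION unless the level is overridden.
   */
  static GZIPOutputStream bestSpeedGzip(OutputStream out) throws IOException {
    return new GZIPOutputStream(out) {
      {
        // 'def' is the protected Deflater field inherited from DeflaterOutputStream.
        def.setLevel(Deflater.BEST_SPEED);
      }
    };
  }
}
```

BEST_SPEED compresses less thoroughly, so the resulting blobs are larger, and pushing those extra bytes is where the registry slowdown comes from.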

@loosebazooka

Should we make it configurable?

TadCordle commented Nov 12, 2019

I don't think it's worth adding an extra configuration option for it. The speed difference is pretty small anyway, and it only applies when the base image hasn't been cached yet.
