
Add option to enable garbage collector #68

Merged
merged 9 commits into from
Aug 9, 2022

Conversation

ddelange
Contributor

@ddelange ddelange commented Aug 5, 2022

Additive version of #62

kuzaxak and others added 7 commits September 18, 2021 12:33
The CronJob should have the same env variables and volumes to run correctly.
The `nindent` function inserts a leading newline before the actual content, letting you use it at any level of the YAML without breaking the visual structure.
In the case of S3 we can use an IAM role to access the bucket, in which case we do not need to define anything in secrets. At the same time, due to the missing parent-level field, the chart will fail with an error.
We don't have unlimited storage, and we would like to run the built-in garbage collector to keep storage usage low.
…into garbage-collector

Includes moving additions to deployment.yaml to _helpers.tpl ref twuni/docker-registry.helm@v1.13.0...v2.1.0

* 'main' of https://github.com/twuni/docker-registry.helm: (22 commits)
  Updated README with initContainers value
  Add initContainer support
  🏁 v2.1.0 Release
  Added support for autoscaling using hpa
  🏁 v2.0.1 Release
  🌐 make protocol selection work with istio
  fix(templates): Add checksum on secret.yaml file
  🏁 v2.0.0 Release
  Fix twuni#19 (new kubernetes API version)
  🏁 v1.16.0 Release
  Conditionally create service account and add to deployment
  🏁 v1.15.0 Release
  Support deployment to a namespace
  Updated typo to enable(d)
  🏁 v1.14.0 Release
  Enable metrics via Prometheus Operator
  Support additional env variables
  🏁 v1.13.2 Release
  Add support for S3 bucket to prefix all data
  🏁 v1.13.1 Release
  ...
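The `nindent` behavior mentioned in the commit messages above can be sketched with a minimal Helm template fragment (the `extraEnv` value name here is hypothetical, not taken from this chart):

```yaml
# Hypothetical Helm template fragment: nindent first emits a newline,
# then indents every line of its input by the given number of spaces,
# so the call can sit flush against the parent key without breaking
# the YAML's visual structure.
env:
  {{- toYaml .Values.extraEnv | nindent 2 }}
```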
Member

@canterberry canterberry left a comment


Thanks for pushing this through, @ddelange !

Overall looks great! All of my feedback is quite minor. I've noted anything non-blocking where relevant.

I invite anyone else to review as well, in case there's something we missed.

I cloned locally and ran helm template ... with a basic configuration as a smoke test (with and without --set garbageCollect.enabled=true). 👌

@ddelange
Contributor Author

ddelange commented Aug 8, 2022

@canterberry thanks for the review, I've pushed the changes also to #62

Member

@canterberry canterberry left a comment


Second review pass complete, @ddelange. One more tiny bit of feedback to address and then we're good to go! (documentation for persistence.deleteEnabled in the README)

I cloned the branch locally and did a dry run of installing the chart to a local cluster:

$ helm upgrade --install --dry-run --namespace meta-namespace --set namespace=target-namespace --set garbageCollect.enabled=true registry --set serviceAccount.create=true --set priorityClassName=high .

Output looked fine to me and k8s found no issue with the resulting manifests.

The default value is documented in the README as `nil`, and where referenced, a falsey value is adequate.
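As a sketch of why a falsey value is adequate where `persistence.deleteEnabled` is referenced, a template guard like the following (a hypothetical fragment; the actual chart templates may differ) simply renders nothing when the value is `nil`:

```yaml
# Hypothetical deployment/_helpers.tpl fragment: with a nil default,
# the `if` guard is falsey and the env var is simply not rendered.
{{- if .Values.persistence.deleteEnabled }}
- name: REGISTRY_STORAGE_DELETE_ENABLED
  value: "true"
{{- end }}
```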

Member

@canterberry canterberry left a comment


Thanks for the link, @ddelange , showing where persistence.deleteEnabled is referenced. LGTM. I'll merge as soon as GitHub's checks are green.

@canterberry canterberry merged commit 171632a into twuni:main Aug 9, 2022
@canterberry
Member

Thanks for the persistence, @ddelange , and @kuzaxak for getting this started! This has now been shipped in v2.2.0.

@ddelange
Contributor Author

ddelange commented Aug 9, 2022

Awesome, thanks to you too for taking the time to review and for writing your comments the way you do. Very much appreciated; I'm taking it with me in my knapsack!

@Raboo

Raboo commented Aug 15, 2022

Will this garbage collector delete docker buildx cache?

@canterberry
Member

Will this garbage collector delete docker buildx cache?

AFAIK, docker build and docker buildx caches are only on the local machine where the image(s) are built or pulled. The registry itself wouldn't have a cache, per se. There's more information on garbage collection in the official docs -- to my understanding, it's just deleting blobs that aren't being referenced anymore.

@Raboo

Raboo commented Aug 16, 2022

@canterberry buildx can cache to a registry using --cache-to (https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to). This is especially useful in automated pipelines, where you don't have any local cache.

But I guess that the only way for me to find out is to try it, since it doesn't say anything about the cache in the garbage collection documentation.

@ddelange
Contributor Author

This feature can be perfectly combined with type=registry caching. It was our initial reason for pushing the PR :) Let me explain:

When your builder pushes cache to the ref=bla tag for this image (e.g. type=registry,ref=buildcache), it overwrites the previous buildcache image tag that was already there. That previous image becomes untagged (it can now only be fetched/referenced with @sha notation), and that's exactly the type of image the garbage collector will delete from the storage backend using the deleteUntagged option.

So with this feature enabled, your registry won't explode in size anymore when your workflows regularly overwrite existing tags.
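As an illustrative sketch of combining the two (the value names follow this PR's discussion, but the exact schema may differ in your chart version), enabling this in values.yaml could look like:

```yaml
# Hypothetical values.yaml fragment: untagged blobs left behind by
# overwritten tags (e.g. a reused buildcache ref pushed with
# --cache-to type=registry) get removed on the CronJob schedule.
garbageCollect:
  enabled: true
  deleteUntagged: true   # remove images only reachable via @sha digests
  schedule: "0 1 * * *"  # run nightly at 01:00
```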
