Implement non-caching, per-kustomization GC-client/statusPoller for cross-cluster kubeconfigs #135
Fixes #127
First shot at this after familiarizing myself with the clientcmd and restclient types/methods.
Extensive testing with CAPI is still pending.
Clients are re-created from the KubeConfig SecretRef for each reconciliation of a particular Kustomization.
(Kubeconfigs such as those used for CAPA-managed EKS clusters are regularly refreshed behind the scenes.)
I chose not to create a cache for these clients since they only survive a single reconciliation.
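For reviewers, here is a rough sketch of what the per-reconciliation construction looks like. This is illustrative only, not the exact code in this PR: the helper name `newRemoteClient`, the secret data key `"value"`, and the receiver fields are assumptions.

```go
// Sketch: build a fresh client and status poller from the kubeconfig stored
// in the Kustomization's KubeConfig SecretRef. Nothing here outlives a single
// reconciliation, so rotated kubeconfigs are picked up on the next run.
// Helper and key names are hypothetical, not necessarily those in this PR.
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
)

func (r *KustomizationReconciler) newRemoteClient(ctx context.Context, secretName types.NamespacedName) (client.Client, *polling.StatusPoller, error) {
	var secret corev1.Secret
	if err := r.Get(ctx, secretName, &secret); err != nil {
		return nil, nil, fmt.Errorf("unable to read KubeConfig secret: %w", err)
	}
	kubeConfig, ok := secret.Data["value"] // key name is an assumption
	if !ok {
		return nil, nil, fmt.Errorf("KubeConfig secret %s/%s has no 'value' key", secretName.Namespace, secretName.Name)
	}
	restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeConfig)
	if err != nil {
		return nil, nil, err
	}
	restMapper, err := apiutil.NewDynamicRESTMapper(restConfig)
	if err != nil {
		return nil, nil, err
	}
	// Non-caching client: every reconciliation talks to the remote cluster
	// directly instead of going through an informer cache.
	remoteClient, err := client.New(restConfig, client.Options{Mapper: restMapper})
	if err != nil {
		return nil, nil, err
	}
	statusPoller := polling.NewStatusPoller(remoteClient, restMapper)
	return remoteClient, statusPoller, nil
}
```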
We could instead maintain a map of NamespacedNames to restClients in the KustomizationReconciler.
This might be worthwhile and fairly simple -- let me know what you think.
Currently, when no KubeConfig is specified, we return the Reconciler's general client and statusPoller.
I expect to change this for security reasons in the future Impersonation patches: health checking
could become a form of cross-tenant information disclosure, and it might be possible to trick the garbage collector
into deleting resources you would not otherwise have access to.
I wonder how we can e2e test this?