feat(listens) support multiple addresses and IPv6 #986
Conversation
Was making it an opt-in setting (bind IPv6 only if explicitly asked for) an option? I'm not sure if that's a good direction to diverge from the defaults Kong uses.
I share the same concern. Do we really want to flip this default for all users? Will that work on all machines, e.g. what if I have IPv6 disabled? Is that a no-op for me then? Would it be possible to make this a proper chart value, accepting a list, with the default having a single entry:
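Something along these lines (a sketch of the proposal only; the key names are illustrative, not an existing chart value):

```yaml
# Hypothetical chart value: a list of bind addresses, defaulting to
# IPv4-only so current behavior is unchanged. Users could append "[::]"
# to opt in to IPv6.
proxy:
  listen:
    addresses:
      - "0.0.0.0"
```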
Requesting changes just to block until we discuss making it configurable. Feel free to override if that's resolved.
tl;dr
I'd like to avoid requiring a toggle to support something that should arguably work out of the box if it's available in the environment. Unfortunately, absent ongoing user surveys, we don't have a good way to float new features, so turning it on and seeing if it causes issues for someone is the less-than-ideal way to find out whether we need a toggle. Even with one, I'd say it should default to IPv6 enabled, since it's only relevant if the wider environment supports it. If you have a dual-stack cluster but have somehow broken your networking such that only IPv4 actually works, you should probably disable IPv6 at the environment level rather than expecting applications not to try using IPv6.

The controller lacking support for multiple listens de facto still requires changing defaults for v6-only clusters, but ideally we address that in the future.

I don't want to formally expose this. It is extra template logic and extra config burden (especially if we do it "properly" and let you set it per Service; I'd probably ignore that and expose a single setting/require you to use the same setting for every Service). If we can live without it, we save one more setting that <1% of installs will actually use.
Yeah. The only problem case is if you're using a single-stack listen for only the protocol your cluster doesn't support. More detail below, but from Kong's perspective its environment does support either address family: it can bind to IPv6 even if those container-internal addresses aren't reachable from outside, and having a link-local IPv6 address doesn't interfere with IPv4 traffic.

For many dual-stack clusters we actually won't use IPv6 at the cluster level anyway. Kubernetes has an apparently hard-coded default Service configuration that defaults to single-stack based on the first IP range you configure. If you configure IPv4 addresses first, you'll actually only use IPv4, at least for Services, unless we do expose the family/stack Service settings. If you configure IPv6 first, you were presumably in a broken state similar to IPv6-only clusters, since the Service would effectively be IPv6-only and clients would only try to talk v6 and fail because there was no corresponding container listen. I assume these were probably uncommon in practice, given the apparent absence of complaints about us not working on dual-stack clusters.

Since we're using Service Endpoints for discovery, you'll only get the address family the Service supports even if the Pod has both families assigned (AFAICT this holds for both normal and headless Services equally). Most installs will still use IPv4 only even on dual-stack clusters.

Testing discovery with a manually-configured dual-stack admin Service does indicate that it's a bit boneheaded about this, though. IIRC the code is just listing every endpoint and assuming every address it finds is a Pod (which is true in single-stack), and the updates then fire twice. In a single proxy replica environment:
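For reference, the single-stack default mentioned above is controlled by the Service's `ipFamilyPolicy` and `ipFamilies` fields; a manually-configured dual-stack admin Service for testing would look roughly like this (name, selector, and port are illustrative, not taken from the chart):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-admin            # illustrative name
spec:
  # Default is SingleStack, chosen from the first configured cluster IP range.
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: kong                 # illustrative selector
  ports:
    - name: kong-admin-tls
      port: 8444
      targetPort: 8444
```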
On an IPv4-only KIND instance, I'm using the (new) default dual-stack admin bind. The controller uses the IPv4 cluster IP because there's no v6 IP available and successfully talks to the admin API:
Going into more detail, TIL that containers do not inherit their host's network capabilities, which I guess makes sense. On the kubelet:
versus in the proxy container:
Can't see a whole lot further in the container since we lack most of the standard Linux network tools (and I can't figure it out from
In that sense it probably does make sense for upstream kong.conf.default to default to IPv4 only, since it may well end up being installed into a v4-only environment and would fail without overrides. In our case, however, we can rely on the container always supporting both and don't need to be as conservative.
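As a sketch of what the dual-stack default amounts to, Kong accepts a comma-separated list of listens, so the rendered proxy listen is roughly the following (the exact ports and flags are illustrative; the chart's `env` map keys become `KONG_`-prefixed environment variables):

```yaml
# Illustrative only: an equivalent manual env override for a dual-stack
# proxy listen. This renders as KONG_PROXY_LISTEN inside the container.
env:
  proxy_listen: "0.0.0.0:8000, [::]:8000, 0.0.0.0:8443 ssl, [::]:8443 ssl"
```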
I was under the impression that exposing this would be relatively simple, hence my proposal. Basically, instead of "hardcoding" it in templates, use a default that people will be able to change. Opt-in or opt-out (preferably the latter), but not hardcoded until it breaks things.
Given the investigation and results posted in this PR I'm fine with moving forward with this.
Rebased to fix conflicts and added a release as well, since someone wanted the other thing that got merged.
Thanks for making IPv6 easier to use. I'm hitting an issue at work on an IPv4-only cluster. It seems that the "container environment" does not set any IPv6 address in our Pods' network namespaces. AFAIK, the network setup (lo included) is handled by the CNI, which, in our case, is explicitly told to not set up IPv6. @rainest do you have a quote from a specification that says "the container environment always supports dual stack"? If that's not the case, I'd consider this default configuration a bug.
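For clusters like this, and assuming the new list-valued setting from this PR, an IPv4-only override would presumably look something like the following (the setting names are my reading of the PR description, not verified against the chart):

```yaml
# Illustrative override to restore an IPv4-only bind on clusters whose
# CNI does not configure IPv6 addresses in the Pod network namespace.
proxy:
  addresses:
    - "0.0.0.0"
admin:
  addresses:
    - "0.0.0.0"
```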
What this PR does / why we need it:
Configure default IPv6 listens for Kong listeners by default. The controller defaults are unchanged as the controller settings do not support multiple listen addresses. Note that this diverges from upstream: kong.conf.default does not bind to IPv6.
Replace the hidden .address service setting with a new hidden .addresses setting. The new setting is a list whereas the old setting was a single string.
Honor address overrides for the admin listen. It previously forced you to the default 0.0.0.0 address or 127.0.0.1 when disabled.
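A sketch of the settings change described above, using the admin Service as the example (structure is illustrative; the setting remains hidden/undocumented):

```yaml
# Before: hidden single-string setting
# admin:
#   address: "0.0.0.0"

# After: hidden list setting, defaulting to both address families
admin:
  addresses:
    - "0.0.0.0"
    - "[::]"
```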
Which issue this PR fixes
Fix #847.
Relevant to Kong/kubernetes-ingress-controller#5138.
Special notes for your reviewer:
See listen_tests.txt for example settings using values.yaml.txt.
Umbrella chart shenanigans mean we can't really check this in `ingress` easily. I stuffed the modified `kong` chart into `ingress.tar.gz`. It requires a nightly to actually work because of kic#5138:
The old `address` field was never documented and AFAIK there shouldn't be much reason to use it (containers in Kubernetes shouldn't have reason to listen on anything other than all addresses or localhost only). The change to `addresses` is technically breaking, but I'm considering it not so because it was effectively a developer-only setting.

Checklist

- Based on the `main` branch.