RKE2: deploying an Nginx ingress controller that creates a Network Load Balancer on AWS automatically adds a new master node with a failing rke2-canal pod and a name ending in .internal (the EC2 internal hostname) #4361
RKE2 comes with ingress-nginx already; are you disabling that before deploying an additional copy? Have you considered configuring the existing ingress controller via a HelmChartConfig, instead of deploying a completely separate chart from scratch? See: https://docs.rke2.io/helm#customizing-packaged-components-with-helmchartconfig
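As a sketch of what that HelmChartConfig could look like — the annotation values are taken from the Helm command in the question below, and you should verify the chart name and supported values against the linked RKE2 docs:

```yaml
# Sketch: override the packaged rke2-ingress-nginx chart instead of
# installing a second ingress controller. Place this file in
# /var/lib/rancher/rke2/server/manifests/ on a server node.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        type: LoadBalancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      config:
        use-forwarded-headers: "true"
```

RKE2 merges these values into the bundled chart on startup, so the built-in controller picks up the NLB annotations without a second deployment.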
There's nothing in the RKE2 codebase that would cause it to create a duplicate node; rke2 is either crashing and restarting with an altered configuration (a different hostname), or something else you've deployed is creating a duplicate Node resource. It would help if you could attach the rke2-server logs from journald, as well as the various logs from …
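For reference, the rke2-server journald logs and the cluster state at the time of failure can be collected with standard invocations like these (run on the master node; adjust the time window to cover the incident):

```shell
# rke2-server logs from journald
journalctl -u rke2-server --no-pager --since "2 hours ago" > rke2-server.log

# Node and pod state, to show the duplicate Node object and stuck pods
kubectl get nodes -o wide > nodes.txt
kubectl get pods -A -o wide > pods.txt
```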
-
Environmental Info:
RKE2 Version: v1.25.10+rke2r1
Node(s) CPU architecture, OS, and Version:
Linux master 5.14.0-284.11.1.el9_2.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
The workers run the same OS and kernel.
Cluster Configuration:
1 master node and 2 worker nodes; RKE2 runs on EC2 instances on AWS.
Describe the bug:
I'm trying to deploy the nginx ingress controller as a LoadBalancer service, which needs to create a Network Load Balancer on AWS. I'm using the following Helm command:
helm install nlb-ingress ingress-nginx/ingress-nginx --version 4.2.3 \
  --set controller.service.type="LoadBalancer" \
  --set controller.service.annotations."service.beta.kubernetes.io/aws-load-balancer-ssl-cert"="**********************************************************" \
  --set controller.service.annotations."service.beta.kubernetes.io/aws-load-balancer-backend-protocol"="tcp" \
  --set controller.service.annotations."service.beta.kubernetes.io/aws-load-balancer-ssl-ports"="443" \
  --set-string controller.config.use-forwarded-headers="true" \
  --set-string controller.ingressClass="nlb-ingress" \
  --set-string controller.ingressClassResource.name="nlb-ingress" \
  --set-string controller.service.annotations."service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled"="true" \
  --set controller.service.annotations."service.beta.kubernetes.io/aws-load-balancer-type"="nlb"
The main problem: after installing the Helm chart, the RKE2 server running on the master creates a new master node named after the EC2 internal DNS name (something like ip-10-78-18.... .internal). Many pods then get stuck in Terminating, and the rke2-canal pod scheduled on this new master node goes into CrashLoopBackOff.
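One pattern worth ruling out (an assumption on my part, not something confirmed by the logs in this report): if the node's hostname as seen by rke2 changes between restarts — for example, a cloud-provider integration starts reporting the EC2 private DNS name instead of the hostname the node originally registered with — the kubelet registers a second Node object under the new name. Pinning the node name in the rke2 config keeps registration consistent across restarts:

```yaml
# /etc/rancher/rke2/config.yaml (sketch; node-name is a standard rke2 option)
# Pin the Kubernetes node name to the one the node first registered with,
# so a changed system hostname cannot create a duplicate Node object.
node-name: master
```

Note that changing node-name on an already-registered node has the same duplicate-node effect, so set it to the existing name, not a new one.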
I used the command below while installing, and the installed version is 1.25.
Steps To Reproduce:
curl -sfL https://get.rke2.io | sh -
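To make the install reproducible at the version reported above rather than whatever is latest at install time, the get.rke2.io script accepts an INSTALL_RKE2_VERSION environment variable:

```shell
# Pin the install to the version from this report
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="v1.25.10+rke2r1" sh -
```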
Expected behavior:
The expected behavior is that the NLB is created without problems against the existing master node, no new node is created, and rke2-canal keeps working properly.
Actual behavior:
A new master node is created, the old one goes into NotReady state, and the rke2-canal-vkdmj pod on the new master node (control plane) is in CrashLoopBackOff.
Additional context / logs:
These are the events for the rke2-canal pod on the master node newly created by rke2:
Events:
Type Reason Age From Message
Warning BackOff 3m40s (x327 over 68m) kubelet Back-off restarting failed container
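The BackOff event alone doesn't show why the container keeps failing; the per-container logs of the canal pod are more informative. The rke2-canal pod runs calico-node and kube-flannel containers, so something along these lines (pod name taken from the report above) would capture the actual crash reason:

```shell
# Why is the pod restarting? Check container statuses and last-state reasons.
kubectl -n kube-system describe pod rke2-canal-vkdmj

# Logs from the previous (crashed) instance of each container
kubectl -n kube-system logs rke2-canal-vkdmj -c calico-node --previous
kubectl -n kube-system logs rke2-canal-vkdmj -c kube-flannel --previous
```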