rke2 worker node not getting IP #4782
-
When I look at the logs for the three kube-system pods (kube-proxy, rke2-canal, rke2-ingress-nginx-controller), I get the same error:

There are no events when I describe the pods.
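For anyone following along, these are the kinds of commands that surface those logs and events (the pod name suffixes are placeholders; substitute the actual names from the first command):

```bash
# List the kube-system pods and the nodes they are scheduled on
kubectl -n kube-system get pods -o wide

# Logs from each pod; the rke2-canal pod normally runs two containers
# (kube-flannel and calico-node), so name the container explicitly
kubectl -n kube-system logs kube-proxy-xxxxx
kubectl -n kube-system logs rke2-canal-xxxxx -c kube-flannel
kubectl -n kube-system logs rke2-canal-xxxxx -c calico-node
kubectl -n kube-system logs rke2-ingress-nginx-controller-xxxxx

# The Events section at the bottom of the describe output
kubectl -n kube-system describe pod rke2-canal-xxxxx
```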
-
You didn't open this as an issue, so we don't have all the info that we would normally ask for via the issue template. So I have to ask: what versions did you upgrade from, and to? What kind of backup did you restore from? Did you restore the entire OS, or just restore an etcd snapshot to the datastore? What versions of RKE2 are all of the nodes currently running?
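If it helps, something along these lines will gather that information quickly (the snapshot path below is the RKE2 default and may differ if you customized the data directory):

```bash
# On each node: the RKE2 version currently installed
rke2 --version

# From any node with a kubeconfig: the kubelet version each node reports
kubectl get nodes -o wide

# On a server node: etcd snapshots present in the default snapshot directory
ls -l /var/lib/rancher/rke2/server/db/snapshots/
```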
-
Hi Brandon, thank you!
-
I am running RKE2 in an HA environment: three control-plane nodes and three worker nodes. I needed to patch the six servers, so I decided to run a manual RKE2 upgrade following this link: https://docs.rke2.io/upgrade/manual_upgrade
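For context, the procedure I followed boils down to roughly this: re-run the installer pinned to a target version, then restart the service, one node at a time (the version string below is just an example):

```bash
# On each server (control-plane) node, one at a time
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=v1.26.9+rke2r1 sh -
systemctl restart rke2-server

# On each worker (agent) node, one at a time
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent INSTALL_RKE2_VERSION=v1.26.9+rke2r1 sh -
systemctl restart rke2-agent
```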
The upgrade seemed to go fine with no errors. After everything was back up, my cluster was not accessible from outside the cluster via kubectl or Rancher. I was getting the error below from the API, so I decided to restore from backup. After the restore, the cluster came back up (and the Kubernetes version stayed on the latest version). Now, though, one of my worker nodes will not get an IP and so will not connect. Would you please point me in the right direction to troubleshoot? I also have the worker node cordoned off so no pods will be assigned to it.
Output from the worker node:

Error from the API before restoring from backup:
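In case it helps to point at something concrete, this is what I have been checking on the affected worker so far (the node name is a placeholder):

```bash
# From a working control-plane node: does the worker register, and with what IP?
kubectl get nodes -o wide
kubectl describe node <worker-node-name>

# On the affected worker: agent logs from startup onward
journalctl -u rke2-agent --no-pager --since "1 hour ago"

# On the affected worker: confirm the host interfaces actually have addresses
ip addr show
```

Once the node registers cleanly I plan to `kubectl uncordon` it.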