
Refresh CIDRs should follow up with adding iptables else traffic will be lost #1423

Closed
nithu0115 opened this issue Apr 9, 2021 · 0 comments
What happened: We are adding more secondary CIDRs to our VPC. The AWS VPC CNI plugin adds the correct ip rules/routes for the new CIDRs, but it does not add the corresponding iptables rules, so traffic is dropped.
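A quick way to confirm the missing rules on the affected worker node (a sketch; the AWS-SNAT-CHAIN-* chain names are the ones the VPC CNI normally installs, adjust for your version):

# Run on the worker node hosting the destination Pod:
ip rule list                                      # policy-routing rules for the new CIDR are present
sudo iptables -t nat -S | grep AWS-SNAT-CHAIN     # SNAT-exclusion rules per VPC CIDR
# If the CIDR refresh skipped the iptables update, the newly added secondary
# CIDR does not appear in these SNAT-exclusion rules, so Pod-to-Pod traffic
# destined to that CIDR gets SNATed to the node's primary IP instead of being
# delivered directly, and the connection times out.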

Attach logs

source-ip-app-67bb5fbf99-qszv6   1/1     Running       0          12m     192.168.63.221    ip-192-168-48-37.us-west-2.compute.internal    <none>           <none> === eth2 

root@source-ip-app-67bb5fbf99-qszv6:/# curl 198.168.54.184
curl: (7) Failed to connect to 198.168.54.184 port 80: Connection timed out

nginx-6799fc88d8-8crh8           1/1     Running       0          5m3s    198.168.54.184    ip-198-168-8-134.us-west-2.compute.internal    <none>           <none> === eth2 

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
    link/ether 16:54:a0:0a:94:4f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 198.168.54.184/32 scope global eth0
       valid_lft forever preferred_lft forever

What you expected to happen: Traffic from Pod source-ip-app-67bb5fbf99-qszv6 to Pod nginx-6799fc88d8-8crh8 should be routable.

How to reproduce it (as minimally and precisely as possible):

  1. Create an EKS cluster in a VPC with CIDR 192.168.0.0/16, with 2 worker nodes in the 192.168.0.0/16 CIDR.
  2. Create some nginx Pods on the 2 worker nodes created above. Make sure some Pods land on secondary network interfaces.
  3. After some time, add a secondary CIDR 100.10.0.0/16 and launch a worker node in the 100.10.0.0/16 CIDR.
  4. Scale up the nginx deployment above and ensure the new Pods land on the newly created worker node, on its secondary ENIs.
  5. kubectl exec into a Pod created above that runs on a secondary ENI and curl a Pod IP running on a secondary ENI of the worker node created in step 1.
  6. The curl request will time out (a possible workaround is sketched below).
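A possible workaround until the plugin handles this on a CIDR refresh (a sketch; the Pod name placeholder and the chain/rule shape below are assumptions, verify against the node first):

# Option 1: restart the CNI Pod on the affected node so it re-detects the
# VPC CIDRs and rebuilds its iptables rules (aws-node is the CNI DaemonSet):
kubectl -n kube-system get pods -o wide | grep aws-node        # find the Pod on that node
kubectl -n kube-system delete pod <aws-node-pod-on-that-node>  # hypothetical Pod name

# Option 2: add a SNAT exclusion for the new CIDR by hand (the rule shape is
# an assumption about the CNI's AWS-SNAT-CHAIN-0 layout; inspect
# `iptables -t nat -S` on the node before and after). Returning early from the
# SNAT chain skips SNAT for traffic destined to the new secondary CIDR:
sudo iptables -t nat -I AWS-SNAT-CHAIN-0 1 -d 100.10.0.0/16 -j RETURN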

Anything else we need to know?: NO

Environment:

  • Kubernetes version (use kubectl version): 1.18
  • CNI Version - 1.7.9
  • OS (e.g: cat /etc/os-release): NAME="Amazon Linux
  • Kernel (e.g. uname -a): Linux 4.14.209-160.339.amzn2.x86_64 #1 SMP Wed Dec 16 22:44:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux