
add custom networking e2e test suite #1445

Merged 4 commits on May 4, 2021

Conversation

@abhipth (Contributor) commented Apr 27, 2021

What type of PR is this?
Custom networking e2e test suite.

Which issue does this PR fix:
Adds e2e tests that verify the following:

  • Associates a new CIDR range with the VPC.
  • Creates a subnet in each AZ.
  • Creates an ENIConfig for each AZ.
  • Verifies that the IPs of Pods deployed after enabling custom networking belong to the new CIDR range.
  • Verifies that traffic between Pods is allowed/restricted by the new security group from the ENIConfig.
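The last two checks come down to CIDR membership, which Go's standard library handles directly. A minimal sketch, where `ipInCIDR` is a hypothetical helper and not the suite's actual code (the real test lives under `test/e2e/custom-networking/`):

```go
package main

import (
	"fmt"
	"net"
)

// ipInCIDR reports whether ip falls inside the given CIDR range.
// Hypothetical helper mirroring the suite's check that Pod IPs
// belong to the newly associated VPC CIDR.
func ipInCIDR(ip, cidr string) (bool, error) {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false, fmt.Errorf("invalid IP %q", ip)
	}
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	return network.Contains(parsed), nil
}

func main() {
	ok, _ := ipInCIDR("10.10.5.185", "10.10.0.0/16")
	fmt.Println(ok) // prints true: this Pod IP is inside the custom networking CIDR
	ok, _ = ipInCIDR("192.168.1.1", "10.10.0.0/16")
	fmt.Println(ok) // prints false: outside the range
}
```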

What does this PR do / Why do we need it:
Adds e2e test suite for custom networking.

Testing done on this change:

Running Suite: CNI Custom Networking e2e Test Suite
===================================================
Random Seed: 1620128219
Will run 3 of 3 specs

STEP: creating test namespace
STEP: getting the cluster VPC Config
STEP: creating ec2 key-pair for the new node group
STEP: creating security group to be used by custom networking
STEP: authorizing egress and ingress on security group for single port
STEP: associating cidr range to the VPC
STEP: creating the subnet in us-west-2a
STEP: associating the route table with the newly created subnet
STEP: creating the ENIConfig with az name
STEP: creating the subnet in us-west-2b
STEP: associating the route table with the newly created subnet
STEP: creating the ENIConfig with az name
STEP: creating the subnet in us-west-2c
STEP: associating the route table with the newly created subnet
STEP: creating the ENIConfig with az name
STEP: enabling custom networking on aws-node DaemonSet
STEP: getting the aws-node daemon set in namesapce kube-system
STEP: setting the environment variables on the ds to map[AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG:true ENI_CONFIG_LABEL_DEF:failure-domain.beta.kubernetes.io/zone WARM_ENI_TARGET:0]
STEP: updating the daemon set with new environment variable
STEP: creating a new self managed node group
Custom Networking Test when creating deployment targeted using ENIConfig when connecting to reachable port 
  should connect
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:119
STEP: verifying pod's IP 10.10.5.185 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.149 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.199 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.28 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.114 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.86 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.124 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.98 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.45 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.107 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.170 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.93 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.207 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.133 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.85 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.38 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.141 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.216 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.118 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.158 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.146 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.79 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.54 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.46 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.54 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.12 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.124 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.5.26 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.218 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080
STEP: verifying pod's IP 10.10.0.181 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod succeeds on port 8080

• [SLOW TEST:218.443 seconds]
Custom Networking Test
/Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:32
  when creating deployment targeted using ENIConfig
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:44
    when connecting to reachable port
    /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:112
      should connect
      /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:119
------------------------------
Custom Networking Test when creating deployment targeted using ENIConfig when connecting to unreachable port 
  should fail to connect
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:129
STEP: verifying pod's IP 10.10.0.88 address belong to the CIDR range 10.10.0.0/16
STEP: verifying connection to pod fails on port 8081

• [SLOW TEST:13.175 seconds]
Custom Networking Test
/Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:32
  when creating deployment targeted using ENIConfig
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:44
    when connecting to unreachable port
    /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:122
      should fail to connect
      /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:129
------------------------------
Custom Networking Test when creating deployment on nodes that don't have ENIConfig 
  deployment should not become ready
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:152
STEP: deleting all existing ENIConfigs
STEP: getting the list of nodes created
STEP: terminating all the nodes
STEP: waiting for the node to be removed
STEP: waiting for all nodes to become ready
STEP: verifying deployment should not succeed
STEP: creating the deleted ENIConfigs

• [SLOW TEST:345.272 seconds]
Custom Networking Test
/Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:32
  when creating deployment on nodes that don't have ENIConfig
  /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:133
    deployment should not become ready
    /Users/abhipth/go/src/github.com/aws/amazon-vpc-cni-k8s/test/e2e/custom-networking/custom_networking_test.go:152
------------------------------
STEP: deleting test namespace
STEP: waiting for some time to allow CNI to delete ENI for IP being cooled down
STEP: deleting the self managed node group
STEP: deleting the key pair
STEP: deleting the subnet subnet-0760a50620434791c
STEP: deleting the subnet subnet-0d7b8c0bf1b5ed235
STEP: deleting the subnet subnet-0e1a4e33fc280028a
STEP: disassociating the CIDR range to the VPC
STEP: disabling custom networking on aws-node DaemonSet
STEP: getting the aws-node daemon set in namesapce kube-system
STEP: setting the environment variables on the ds to map[AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG:{} ENI_CONFIG_LABEL_DEF:{} WARM_ENI_TARGET:{}]
STEP: updating the daemon set with new environment variable
STEP: deleting ENIConfig
STEP: deleting ENIConfig
STEP: deleting ENIConfig

Ran 3 of 3 Specs in 1121.809 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Ginkgo ran 1 suite in 18m49.809391191s
Test Suite Passed

Automation added to e2e:
Yes

Will this break upgrades or downgrades? Has updating a running cluster been tested?:
NA

Does this change require updates to the CNI daemonset config files to work?:
NA

Does this PR introduce any user-facing change?:
NA

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@jayanthvn (Contributor) left a comment

LGTM :)

}

By("disassociating the CIDR range to the VPC")
err = f.CloudServices.EC2().DisAssociateVPCCIDRBlock(cidrBlockAssociationID)
Contributor:
Won't we delete the VPC?

Contributor Author (@abhipth):
For all the e2e tests, I was thinking of having a central entry point that creates the cluster, runs all the e2e suites, and then deletes all cluster resources after execution.

Contributor:
Sure, that would work too :)
