Kubernetes nginx-ingress controller cannot create load balancer #183
Comments
That's very odd.
I created two completely new EKS clusters this week.
I just created a new cluster and saw the same error, but in the end it did create the load balancer without any changes to the IAM policies.
Hmmm. I don't really understand why this happens. The EKS service should be using a service-linked role named AWSServiceRoleForElasticLoadBalancing, and this includes this permission:
I think there were a couple of issues around the service-linked role before. My issue seems to match #87 and #103 (and also this StackOverflow issue): the problem seems to be caused by the ingress controller trying to create the very first load balancer in that specific AWS account. As stated in the other linked issues, it might be the case that the problem will not occur if there is already another load balancer active in the AWS account. In my case there is no other load balancer, which might trigger the issue. In order to fix the issue, I see two possible paths. I wanted to see what you think about them:
Option 2 would look something like this:
Then in the consumer of the EKS module, do this:
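A sketch of what option 2 could look like, assuming the module exposes its cluster IAM role name as an output (the output name `cluster_iam_role_name` and all resource names here are illustrative assumptions, not taken from the module's documentation):

```hcl
# Hypothetical consumer-side fix (names are illustrative, not from the module).
# The module would export the cluster IAM role, and the consumer would attach
# the missing permission to it from outside the module.
resource "aws_iam_role_policy" "elb_service_linked_role" {
  name = "elb-service-linked-role"
  role = module.eks.cluster_iam_role_name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "iam:CreateServiceLinkedRole"
      Resource = "arn:aws:iam::*:role/aws-service-role/*"
    }]
  })
}
```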
I think this is correct. I asked in the AWS Slack org and someone said the same. I think option 1 is better and cleaner than option 2, but even better, and more sensible, would be for AWS to add this to THEIR policy. I'll ask them.
Thanks for researching this @max-rocket-internet!
Also, one more data point: after I added the permission and the ingress was able to create the NLB, I completely destroyed the EKS cluster. At that point, no load balancer remained in that AWS account. When I then created the new EKS cluster, I ran into the same permission issue as before. So if it helps to temporarily create a first ELB to permanently get rid of this problem, it might require using a load balancer outside EKS (as mentioned by Chris Hein above). I haven't tested this, though.
@jens-totemic: The adding/removing of the load balancer is a red herring, as that will only result in the provisioning of the required service-linked role for continued ELB operation. This happens regardless of the presence of the service-linked role. The root cause is that the EKS cluster policy (AmazonEKSClusterPolicy) is actually missing 3 permissions:
Not sure how these permissions are being used in the specific setup exposing the error; I'll try a simple test to reproduce via a CLI call. For reference, could you provide the ingress controller config you are using? As @max-rocket-internet noted, the real solution is getting AWS to correct the cluster policy, or at least to note why those three documented permissions were omitted. I'll follow up in the AWS #kubernetes channel later to see what additional info they can provide.
@mmcaya I'm setting up the ingress using the standard configuration files provided by Kubernetes, like this:
Thanks for investigating this further! |
The same happened to me. In my case, tagging was the reason. As the EKS documentation states, EKS adds some tags to resources like the VPC or subnets: https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html As you might expect, having tags managed by Terraform and by EKS itself at the same time can end up in some unpredictable situations. In my case I first ran this module, and it created EKS, which tagged the subnets and VPC. I made some mistakes in configuration, so I iterated a bit and ran terraform a couple of times. Terraform obviously removed the EKS-managed tags. Then I stumbled upon this issue: hashicorp/terraform#6632, where I found a solution that worked for me. Basically, I've added this lifecycle rule to my VPC and subnets, to prevent Terraform from removing the EKS-managed tags:
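A rough sketch of such a lifecycle rule, in the Terraform 0.11-era syntax where `ignore_changes` entries are strings and a prefix match works (the subnet attributes are placeholders):

```hcl
# Sketch: stop terraform from stripping tags that EKS adds out of band.
# "tags.kubernetes.io/" acts as a prefix match in the old string syntax,
# covering all tags EKS applies under that namespace.
resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"

  lifecycle {
    ignore_changes = ["tags.kubernetes.io/"]
  }
}
```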
What is interesting is that you don't have to provide the full tag name: a prefix is enough. This is really important in this case.
@pangorgo maybe the example helps you? See this line: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/eks_test_fixture/main.tf#L126
Yeah, the example is legit as long as you manage the VPC and EKS in the same directory (root module).
I'm gonna close this now. Feel free to reopen if it's still an issue. |
I just want to leave a note here because I'm trying to build a Terraform config that creates an EKS cluster in a new VPC with new subnets, installs helm/tiller, and then installs some packages, all in one script. I was pulling my hair out about this until I read @pangorgo say
which made me check the tags on my subnets. Only then did I find out that I had never checked whether the tags were applied right. I was doing
turns out there was a reason why this was done that way. EDIT: figured I should post the correct resource for completeness
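For reference, a sketch of the kind of subnet tagging the EKS network requirements doc linked above describes (the cluster name `my-cluster` and the subnet attributes are placeholders):

```hcl
# Sketch: the "kubernetes.io/cluster/<name>" tag lets EKS and the in-cluster
# cloud provider discover the subnet; the value "shared" allows the subnet
# to be used by more than one cluster.
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"

  tags = {
    "kubernetes.io/cluster/my-cluster" = "shared"
  }
}
```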
The AmazonEKSClusterPolicy IAM policy doesn't contain all the permissions necessary to create the ELB service-linked role required during load balancer creation on AWS by a K8S Service. terraform-aws-modules#900 terraform-aws-modules#183 (comment)
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm submitting a...
What is the current behavior?
No ELB or NLB load balancer is created by EKS when using nginx ingress controller
If this is a bug, how to reproduce? Please include a code sample if relevant.
When defining an nginx ingress controller (using defaults that use the classic ELB load balancer) in Kubernetes and watching the output in the Kubernetes dashboard, the following error message is shown:
What's the expected behavior?
A load balancer should be created
Are you able to fix this problem and submit a PR? Link here if you have already.
This problem is probably related to #103. The problem is fixed by adding this role policy:
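Judging from the commits that reference this issue, the missing permission is the one needed to create the ELB service-linked role. A hedged sketch of such a role policy (the role reference `aws_iam_role.cluster` is a placeholder, not necessarily the name used in this module):

```hcl
# Sketch: allow the EKS cluster role to create the ELB service-linked role
# (AWSServiceRoleForElasticLoadBalancing) the first time a Service of type
# LoadBalancer is provisioned in the account.
data "aws_iam_policy_document" "elb_service_linked_role" {
  statement {
    effect    = "Allow"
    actions   = ["iam:CreateServiceLinkedRole"]
    resources = ["arn:aws:iam::*:role/aws-service-role/*"]
  }
}

resource "aws_iam_role_policy" "elb_service_linked_role" {
  name   = "elb-service-linked-role"
  role   = aws_iam_role.cluster.name # placeholder for the cluster role
  policy = data.aws_iam_policy_document.elb_service_linked_role.json
}
```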
Environment details
Any other relevant info