Terraform Plan Fails for v8.0.0 #675

Closed · 1 of 4 tasks

ageekymonk opened this issue Jan 10, 2020 · 12 comments · Fixed by #722

Comments

@ageekymonk

ageekymonk commented Jan 10, 2020

I have issues

Running terraform plan against v8.0.0 fails with the following error.

Error: Invalid count argument

  on ../../../terraform-aws-eks/cluster.tf line 42, in resource "aws_security_group" "cluster":
  42:   count       = var.cluster_security_group_id == "" && var.create_eks ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

make: *** [plan] Error 1
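
For context: the failing count compares var.cluster_security_group_id with "", and in the configuration below that variable is fed from aws_security_group.cluster.id, a value only known after apply, so Terraform cannot resolve the conditional at plan time. A minimal sketch of the two-step workaround the error message itself suggests, assuming the externally managed group is named aws_security_group.cluster in the root module:

# Step 1: create only the security group the module's count depends on.
terraform apply -target=aws_security_group.cluster

# Step 2: apply the rest of the configuration as usual.
terraform apply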

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

If this is a bug, how to reproduce? Please include a code sample if relevant.

module "eks_cluster" {
  source                          = "git::https://dev.azure.com/tfnsw-cdst/cds-infra/_git/terraform-aws-eks?ref=v8.0.0"
  cluster_name                    = var.name_prefix
  subnets                         = tolist(data.aws_subnet_ids.tier2.ids)
  vpc_id                          = var.vpc
  cluster_security_group_id       = aws_security_group.cluster.id
  cluster_endpoint_public_access  = false
  cluster_endpoint_private_access = true
  write_kubeconfig                = false
  manage_aws_auth                 = true

  worker_groups = [
    {
      instance_type        = var.instance_type
      asg_min_size         = 3
      asg_desired_capacity = 3
      asg_max_size         = 5
      autoscaling_enabled  = true
      tags = [{
        key                 = "owner"
        value               = var.team_name
        propagate_at_launch = true
      }]
    }
  ]

}

What's the expected behavior?

terraform plan should succeed.

Are you able to fix this problem and submit a PR? Link here if you have already.

No

Environment details

  • Affected module version: v8.0.0
  • OS: Mac
  • Terraform version: Terraform v0.12.19

Any other relevant info

@barryib
Member

barryib commented Jan 10, 2020

Thanks @ageekymonk for raising this issue. Do you have more output from Terraform? The snippet you provided only tells us where the so-called bug is, not what the problem is.

@ageekymonk
Author

ageekymonk commented Jan 10, 2020

@barryib I've updated the comment to provide the full log.

@barryib
Member

barryib commented Jan 10, 2020

Did you see this in the changelog?

> Breaking: Change logic of security group whitelisting. Will always whitelist the worker security group on the control plane security group, whether you provide one or create a new one. See Important notes below for upgrade notes (by @ryanooi)
>
> For the security group whitelisting change: after upgrading, you have to remove the cluster_create_security_group and worker_create_security_group variables. If you whitelisted the worker security group before, you will have to delete it (and apply again) or import it:

terraform import module.eks.aws_security_group_rule.cluster_https_worker_ingress <CONTROL_PLANE_SECURITY_GROUP_ID>_ingress_tcp_443_443_<WORKER_SECURITY_GROUP_ID>
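
For illustration, with hypothetical security group IDs the command would look like this (the IDs below are made up):

# Hypothetical IDs, for illustration only:
terraform import module.eks.aws_security_group_rule.cluster_https_worker_ingress \
  sg-0123456789abcdef0_ingress_tcp_443_443_sg-0fedcba9876543210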

For more details, see #631.

@ageekymonk
Author

ageekymonk commented Jan 10, 2020

Thanks for the response @barryib.

I am creating a new cluster with security groups created separately. With this change, I would not be able to create a cluster and assign the security group at the same time. I would have to do a two-step process: first create the security group by applying with -target, then do the cluster creation. It would be better if it could all be done at once.

I reckon most people would like to create it with just terraform apply rather than in two steps.
My 2 cents.

@barryib
Member

barryib commented Jan 10, 2020

This is quite annoying, but I think the apply should work.

Maybe @ryanooi could help us with this.

@ryanooi
Contributor

ryanooi commented Jan 16, 2020

Sorry, I will take a look.

@ryanooi
Contributor

ryanooi commented Jan 16, 2020

> Thanks for the response @barryib.
>
> I am creating a new cluster with security groups created separately. With this change, I would not be able to create a cluster and assign the security group at the same time. I would have to do a two-step process: first create the security group by applying with -target, then do the cluster creation. It would be better if it could all be done at once.
>
> I reckon most people would like to create it with just terraform apply rather than in two steps.
> My 2 cents.

Hi @ageekymonk, I'm able to reproduce your issue. To understand more: why would you like to create the security group separately?

@TarekAS
Contributor

TarekAS commented Jan 23, 2020

I'm also facing a related issue.

We use the same security group for multiple (identical) clusters. Having to create cluster_https_worker_ingress for every cluster would cause a duplicate-rule error. Even if duplicates were ignored, removing one of the clusters would cause the rule to be removed, which is unwanted behavior.

In addition, our worker security group uses inline ingress rules, which are incompatible with standalone aws_security_group_rule resources.

Hence, I would like the option to manage our SGs completely separately.
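
For background on the inline-rules conflict: in the AWS provider, a security group with inline ingress blocks owns its entire rule set, so standalone rules attached to the same group get stripped on the next apply. A minimal sketch, with hypothetical names and CIDRs:

# Inline rules: Terraform treats this block as the group's complete rule set.
resource "aws_security_group" "workers" {
  name_prefix = "workers-"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}

# A standalone rule targeting the same group, like the one the module
# creates, is removed again whenever the inline group is re-applied.
resource "aws_security_group_rule" "extra" {
  type              = "ingress"
  from_port         = 8080
  to_port           = 8080
  protocol          = "tcp"
  cidr_blocks       = ["10.1.0.0/16"]
  security_group_id = aws_security_group.workers.id
}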

@alexconlin

I am facing the same issue. @ryanooi, the reason I want to create the security group separately is that I want to allow access to the cluster API endpoint (which is private) from the IP address of the server where I'm running Terraform. If the cluster isn't created with that access allowed, the apply hangs forever, because it uses the response from the API endpoint to determine whether the cluster has been created.
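
A sketch of that pattern, assuming a hypothetical variable runner_cidr that holds the Terraform runner's address:

resource "aws_security_group" "cluster" {
  name_prefix = "eks-cluster-"
  vpc_id      = var.vpc
}

# Let the machine running terraform reach the private API endpoint, so
# apply doesn't hang waiting for the endpoint to respond.
resource "aws_security_group_rule" "api_from_runner" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = [var.runner_cidr] # hypothetical variable, e.g. "203.0.113.7/32"
  security_group_id = aws_security_group.cluster.id
}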

alexconlin added a commit to mergermarket/terraform-aws-eks that referenced this issue Jan 29, 2020
@ryanooi
Contributor

ryanooi commented Jan 30, 2020

I guess I made the wrong assumption, and it's different for everyone here. I also use the same worker node security group for all the clusters. The difference is that I create the security group in another workspace along with all the basic network stuff (I call it the network layer) and refer to it using remote state.
The initial method was to have a Boolean flag variable to indicate whether the module manages the security group or not. If it doesn't, the module "assumes" the user will also pass in the security group ID. There is no condition check to make sure the user passes in both variables, but of course it solves the first issue (and crashes later on).

Let me revert the diff and add back the Boolean flag variable.
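
A sketch of that approach (cluster_create_security_group matches the pre-v8 variable name quoted above; the resource body is illustrative, not the module's actual code):

variable "cluster_create_security_group" {
  description = "Whether this module should create the cluster security group"
  type        = bool
  default     = true
}

resource "aws_security_group" "cluster" {
  # The condition depends only on plain input variables, which are known
  # at plan time, so the "Invalid count argument" error cannot occur.
  count = var.cluster_create_security_group && var.create_eks ? 1 : 0
  # ...
}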

@ryanooi
Contributor

ryanooi commented Jan 30, 2020

Sorry all. To not block you guys, I'll revert all my changes.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 27, 2022