
terraform destroy errors when using worker_group_launch_templates #841

Closed
1 of 4 tasks
sr-n opened this issue Apr 21, 2020 · 1 comment
Comments

@sr-n
Contributor

sr-n commented Apr 21, 2020

I have issues

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

When using worker group launch templates with managed IAM resources, an error partway through terraform destroy can leave the state inconsistent: aws_iam_instance_profile.workers_launch_template has already been destroyed, but other resources have not. Rerunning terraform destroy then fails with:

Error: Error in function call

  on .terraform/modules/eks/terraform-aws-eks-11.0.0/aws_auth.tf line 8, in locals:
   8:         coalescelist(
    |----------------
    | aws_iam_instance_profile.workers_launch_template is empty tuple
    | data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile is empty tuple

Call to function "coalescelist" failed: no non-null arguments.
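The error follows from how coalescelist behaves: it returns the first of its list arguments that is non-empty, and raises exactly this error when every argument is empty or null. A minimal sketch of the failing pattern in the module's aws_auth.tf (attribute names here are illustrative, not the module's exact code):

```hcl
# After a partial destroy, both splat expressions below evaluate to
# empty tuples, so coalescelist has no non-empty argument and errors.
locals {
  auth_launch_template_worker_roles = coalescelist(
    aws_iam_instance_profile.workers_launch_template.*.role,
    data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name,
  )
}
```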

If this is a bug, how to reproduce? Please include a code sample if relevant.

The bug arises when an error interrupts terraform destroy, leaving the state inconsistent: aws_iam_instance_profile.workers_launch_template has been deleted, but local.auth_launch_template_worker_roles still depends on its existence. Rerunning terraform destroy then fails with the error above, preventing teardown even after the original error has been fixed. This should happen in any cluster using worker_group_launch_templates with manage_worker_iam_resources = true, but the specific configuration I was using was

data "aws_availability_zones" "available" {}

resource "random_id" "cluster_suffix" {
  byte_length = 2
}

locals {
  cluster_name = join("-", ["cluster", random_id.cluster_suffix.hex])
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.6.0"

  name                 = "${local.cluster_name}-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  public_subnet_tags   = { "kubernetes.io/role/elb" = "1" }
  enable_dns_hostnames = true
}

module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = local.cluster_name
  subnets      = module.vpc.public_subnets
  vpc_id       = module.vpc.vpc_id

  wait_for_cluster_cmd = "for i in `seq 1 60`; do curl --insecure --silent $ENDPOINT/healthz >/dev/null && exit 0 || true; sleep 5; done; echo TIMEOUT && exit 1"

  worker_groups_launch_template = [
    {
      name                    = "${local.cluster_name}-workers"
      override_instance_types = ["c5.4xlarge"]
      iam_role_id             = aws_iam_role.node.id
      spot_instance_pools     = 1
      asg_max_size            = 1
      asg_desired_capacity    = 1
      kubelet_extra_args      = "--node-labels=kubernetes.io/lifecycle=spot"
      public_ip               = true
    },
  ]
}

What's the expected behavior?

This should not prevent the module from destroying the remaining resources when terraform destroy is rerun.

Are you able to fix this problem and submit a PR? Link here if you have already.

Yes. The fix is one line: the same guard already exists for worker_groups, but is missing for worker_groups_launch_templates.

#842
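The worker_groups code path avoids this failure by passing a final, always-non-empty fallback argument to coalescelist. A sketch of the same guard applied to the launch-template locals, assuming the module's existing [""] fallback convention (this is an illustration of the approach, not a verbatim copy of the PR):

```hcl
# Appending [""] guarantees coalescelist always has at least one
# non-empty argument, so destroy can proceed even when both splat
# expressions evaluate to empty tuples after a partial teardown.
locals {
  auth_launch_template_worker_roles = coalescelist(
    aws_iam_instance_profile.workers_launch_template.*.role,
    data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name,
    [""],
  )
}
```

The empty-string sentinel is later ignored when the roles are rendered into the aws-auth ConfigMap, so it changes nothing at apply time and only unblocks evaluation during destroy.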

Environment details

  • Affected module version: v11.0.0
  • OS: macOS
  • Terraform version: v0.12.24

Any other relevant info

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 26, 2022