
Node taints for Node Groups #962

Closed
1 of 4 tasks
mattlawnz opened this issue Jul 30, 2020 · 4 comments

Comments

@mattlawnz

I have issues

I'm submitting a...

  • bug report
  • feature request
  • support request - read the FAQ first!
  • kudos, thank you, warm fuzzy

What is the current behavior?

I need to be able to set taints on newly created node groups. These node groups will be highly variable in terms of sizing, so the taints are required on creation.

If this is a bug, how to reproduce? Please include a code sample if relevant.

I thought of adding kubelet_extra_args to the node group definition, but it didn't work; perhaps that's not the right way to do it?

    az3-c = {
      desired_capacity = 1
      max_capacity     = 2
      min_capacity     = 1
      subnets          = ["subnet-004514be54bcd7eb7"]
      kubelet_extra_args = "--node-labels=function=data_only --register-with-taints=function=data_only:NoSchedule"
      instance_type = "m5a.4xlarge"
      additional_tags = {
        Name = module.eks_dev_label.id
      }
    }

What's the expected behavior?

I was hoping for the node taint to be applied.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: "12.1.0"
  • OS: Mac
  • Terraform version: Terraform v0.12.26

Any other relevant info

@max-rocket-internet
Copy link
Contributor

Maybe it helps you to see what I use?

  worker_groups_launch_template = [
    {
      name                    = "prometheus-spot-1"
      override_instance_types = ["m5.2xlarge", "r5.xlarge", "r5ad.xlarge"]
      spot_instance_pools     = 3
      asg_max_size            = 5
      additional_userdata     = module.bastion_vpc1.user_data_users
      kubelet_extra_args = join(" ", [
        "--node-labels=cluster_name=xxxxxxx,kubernetes.io/lifecycle=spot,worker_group=prometheus-spot-1,app=prometheus",
        "--register-with-taints=app=prometheus:NoSchedule"
      ])
      enabled_metrics = local.enabled_metrics
      subnets         = aws_subnet.private_extra.*.id
      tags = [
        {
          "key"                 = "k8s.io/cluster-autoscaler/enabled"
          "propagate_at_launch" = "false"
          "value"               = "true"
        },
        {
          "key"                 = "k8s.io/cluster-autoscaler/xxxxx"
          "propagate_at_launch" = "false"
          "value"               = "true"
        }
      ]
    }
  ]

@dpiddockcmp
Contributor

Hi @mattlawnz, it looks like you're using the Managed Node Groups. These do not currently support node tainting. There is a request over on the AWS team's roadmap for this feature: aws/containers-roadmap#864
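As a stopgap (my own suggestion, not something this module does for you), taints can be applied to managed nodes after they register, selecting them by the eks.amazonaws.com/nodegroup label that EKS sets. Note the taint is not present at registration time, so pods may be scheduled onto the node before it takes effect:

    # Hypothetical workaround: taint already-registered managed nodes by their
    # node group label; the node group name "az3-c" is from the example above.
    kubectl taint nodes -l eks.amazonaws.com/nodegroup=az3-c function=data_only:NoSchedule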

If you really need to work with node taints then I suggest you use the traditional worker groups similar to Max's example above.
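For reference, a sketch of the original az3-c settings moved into a worker group entry, assembled from the two snippets above (untested; key names assumed from this module's worker group schema):

    worker_groups_launch_template = [
      {
        name                 = "az3-c"
        instance_type        = "m5a.4xlarge"
        asg_min_size         = 1
        asg_desired_capacity = 1
        asg_max_size         = 2
        subnets              = ["subnet-004514be54bcd7eb7"]
        # Worker groups pass this straight through to the kubelet, so the
        # taint is present when the node registers.
        kubelet_extra_args   = "--node-labels=function=data_only --register-with-taints=function=data_only:NoSchedule"
      }
    ]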

@mattlawnz
Author

Thanks all for the examples & explanation. I have this working as a worker group.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 25, 2022