feat: Add EKS Fargate support #866

Closed
wants to merge 31 commits

Conversation

itssimon
Contributor

@itssimon itssimon commented May 6, 2020

PR o'clock

Description

  1. Add ingress rule to workers and cluster security groups to allow Fargate pods to communicate with EC2 nodes
  2. Allow EKS Fargate profiles to be created via the input eks_fargate_profiles = [...]. This also creates an IAM role for pod execution and extends the aws-auth ConfigMap. Sketches of both changes follow below.
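
Two hedged sketches of what these changes could look like in use (resource names, ports and the exact profile attributes are assumptions for illustration, not the final schema):

# (1) Ingress rule letting Fargate pods (which use the cluster security group)
#     reach the EC2 worker nodes; the security group references are illustrative.
resource "aws_security_group_rule" "workers_ingress_fargate" {
  description              = "Allow Fargate pods to communicate with worker nodes"
  type                     = "ingress"
  protocol                 = "-1"
  from_port                = 0
  to_port                  = 0
  security_group_id        = aws_security_group.workers.id
  source_security_group_id = aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
}

# (2) Declaring Fargate profiles through the new input.
module "eks" {
  source = "./terraform-aws-eks" # path is illustrative

  cluster_name    = "example"
  cluster_version = "1.16"
  subnets         = ["subnet-abcde012", "subnet-bcde012a"]
  vpc_id          = "vpc-abcde012"

  eks_fargate_profiles = [
    {
      namespace = "default"
      labels    = {}
    },
    {
      namespace = "monitoring"
      labels    = { app = "prometheus" }
    }
  ]
}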

Checklist

@itssimon itssimon changed the title Add EKS Fargate support feat: Add EKS Fargate support May 6, 2020
@barryib
Member

barryib commented May 6, 2020

@itssimon Thanks for working on this. But since this module is getting more complex, as discussed in #635 and #774, we decided to split functionality into different submodules. We started recently with managed node groups.

So my point here is that this feature should be added as a submodule.

Can you please work in that direction?

@barryib barryib linked an issue May 6, 2020 that may be closed by this pull request
@itssimon
Contributor Author

Done @barryib

@itssimon
Contributor Author

itssimon commented May 19, 2020

Is there anything else required for this to be approved and merged?

Resolved review threads: modules/fargate/fargate.tf (×4), variables.tf, modules/fargate/variables.tf, aws_auth.tf
@barryib
Member

barryib commented May 19, 2020

Is there anything else required for this to be approved and merged?

Sorry for the late reply. I was busy these last few days. Reviewed.

Thanks again for working on this.

Resolved review thread: variables.tf
@itssimon itssimon requested a review from barryib May 20, 2020 07:31
@itssimon
Contributor Author

Thanks @barryib. I addressed all your review notes.

Resolved review threads: fargate.tf (×2), variables.tf
@dalgibbard

Hey all,
Apologies if this muddies the waters, but I've been keen to test this, with a view to implementing it in our deployments :)

Can you confirm that this also allows for Fargate-only cluster support?
For example, I'm running with the following config:

data "aws_eks_cluster" "cluster" {
  name = module.eks_cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks_cluster.cluster_id
}

resource "aws_kms_key" "eks_cluster_key" {
  description = "KMS Key for use by EKS Fargate"
  tags        = var.tags
  key_usage   = "ENCRYPT_DECRYPT"
}

resource "aws_kms_alias" "eks_cluster_key_alias" {
  name = "alias/${var.eks_cluster_name}_eks_key"
  target_key_id = aws_kms_key.eks_cluster_key.id
}

resource "aws_kms_key" "eks_log_group_key" {
  description = "KMS Key for use for encrypting EKS logs"
  tags        = var.tags
  key_usage   = "ENCRYPT_DECRYPT"
  policy = jsonencode(
    {
      Version = "2012-10-17",
      Statement = [{
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${var.aws_account_id}:root"
        }
        Action = [
          "kms:*"
          ]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Principal = {
          Service = "logs.${var.aws_region}.amazonaws.com"
        }
        Action = [ 
          "kms:Encrypt*",
          "kms:Decrypt*",
          "kms:ReEncrypt*",
          "kms:GenerateDataKey*",
          "kms:Describe*"
        ]
        Resource = "*"
      }]
  })
}

resource "aws_kms_alias" "eks_log_group_alias" {
  name = "alias/${var.eks_cluster_name}_eks_log_group_key"
  target_key_id = aws_kms_key.eks_log_group_key.id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks_cluster" {
    // TODO: point this to the main repo, once PR-866 is merged.
    //  https://github.com/terraform-aws-modules/terraform-aws-eks/pull/866
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=fargate"
  cluster_name    = var.eks_cluster_name
  cluster_version = var.eks_cluster_version
  subnets         = var.subnet_ids_list
  vpc_id          = var.vpc_id
  tags            = var.tags
  eks_fargate_profiles = [
      {
          namespace = "namespace1",
          labels = {}
      },
      {
          namespace = "namespace2",
          labels = {}
      }
  ]

  worker_groups = []
  cluster_enabled_log_types = var.cluster_enabled_log_types
  manage_aws_auth = true
  map_roles = var.map_roles
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access = false
  create_eks = true
  cluster_encryption_config = [
      {
          provider_key_arn = aws_kms_key.eks_cluster_key.arn
          resources = ["secrets"]
      }
  ]
  cluster_log_retention_in_days = var.cluster_log_retention_in_days
  cluster_log_kms_key_id = aws_kms_key.eks_log_group_key.id
}

With this, it creates an EKS cluster (with security groups, IAM roles, etc.), but not the aws-auth ConfigMap or any of the Fargate profiles and policies. Am I missing something?

@itssimon
Contributor Author

itssimon commented May 26, 2020

I think I addressed all review notes.
Also removed the SG rules from this PR and created a separate PR for them: #892.

@@ -0,0 +1,10 @@
module "fargate" {
Contributor

This needs a dependency on the aws-auth configmap. Otherwise spinning up fresh clusters with fargate enabled may fail due to the race condition of who creates the configmap first

Contributor Author

Could you provide some guidance on how to best achieve that?

Contributor

node_groups.tf in the root of the module has one example. But honestly, I think the rejected work in #867 is a more "Terraform native" way of implementing it; it makes the module easier to use stand-alone as well. My mistake: I didn't think depending on module variables had been implemented yet when I moved managed node groups to a submodule.

Contributor Author

@itssimon itssimon Jun 4, 2020

Adding this dependency creates a cycle. Are you sure the fargate module would cause the aws-auth ConfigMap to be created?

Error: Cycle: data.null_data_source.fargate, module.fargate.var.cluster_name, module.fargate.aws_iam_role.eks_fargate_pod, module.fargate.output.aws_auth_roles, local.configmap_roles, kubernetes_config_map.aws_auth

Contributor

Yes, AWS automatically creates the aws-auth configmap when you create a fargate profile on a cluster that does not have the configmap. This is a race condition that caused some pain for early adopters of the managed node groups in this module.

The problem with the cycle is using the cluster_name from the null_resource to create the IAM role. This could be avoided by using a dependency variable instead of the null_resource. Then only the aws_eks_fargate_profile needs to block on the configmap.
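
A minimal sketch of that dependency-variable approach (resource and variable names are assumptions for illustration, not the module's actual code; count/create_eks handling and the other submodule variables are assumed to exist as discussed above). The parent passes a value derived from the ConfigMap into the submodule, and only the Fargate profile references it, so the IAM role can still be created first and fed into aws-auth:

# Parent module (sketch):
module "fargate" {
  source = "./modules/fargate"

  cluster_name     = aws_eks_cluster.this.name
  subnet_ids       = var.subnets
  fargate_profiles = var.fargate_profiles

  # Any value derived from the ConfigMap works; it only creates an ordering edge.
  aws_auth_dependency = kubernetes_config_map.aws_auth.id
}

# modules/fargate (sketch): only the profile waits on the ConfigMap.
# var.fargate_profiles is assumed to be a map of profile name => { namespace = ... }.
variable "aws_auth_dependency" {
  description = "Any value derived from the aws-auth ConfigMap, used only for ordering"
  type        = any
  default     = null
}

resource "null_resource" "wait_for_aws_auth" {
  triggers = {
    aws_auth = jsonencode(var.aws_auth_dependency)
  }
}

resource "aws_eks_fargate_profile" "this" {
  for_each = var.fargate_profiles

  cluster_name           = var.cluster_name
  fargate_profile_name   = each.key
  pod_execution_role_arn = aws_iam_role.eks_fargate_pod.arn
  subnet_ids             = var.subnet_ids

  selector {
    namespace = each.value.namespace
  }

  # Only this resource blocks on the ConfigMap, which avoids the cycle above.
  depends_on = [null_resource.wait_for_aws_auth]
}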

Resolved review threads: modules/fargate/fargate.tf (×4), modules/fargate/outputs.tf, modules/fargate/variables.tf (×2)

resource "aws_iam_role" "eks_fargate_pod" {
  count = var.create_eks ? 1 : 0
  name  = format("%s-fargate", var.cluster_name)
Contributor

The cluster and worker roles are created using name_prefix by default. This allows a cluster with the same name to be created in different regions, as IAM is global. I have no idea if anybody actually does that. There is also the complexity around name_prefix vs name and var.workers_role_name. Do we want to replicate that here? @barryib
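
For reference, a sketch of what the name_prefix variant could look like here (illustrative only, not a decision; Terraform appends a random suffix to the prefix):

resource "aws_iam_role" "eks_fargate_pod" {
  count = var.create_eks ? 1 : 0

  # name_prefix instead of name, so the same cluster name can be reused in another
  # region without an IAM name collision (IAM is global).
  name_prefix = format("%s-fargate", var.cluster_name)

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
    }]
  })
}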

Contributor Author

This is the only open topic to resolve now. How would you like to handle the naming here, @barryib?

Contributor

@dpiddockcmp dpiddockcmp left a comment

Please add in a README.md for the module. See modules/node_groups

}
}

resource "aws_iam_role" "eks_fargate_pod" {
Contributor

Can you add in the iam_path variable from the parent module too?

variable "create_fargate_pod_execution_role" {
description = "Controls if the EKS Fargate pod execution IAM role should be created."
type = bool
default = false
Contributor

Could creation of the IAM role be based on the length of fargate_profiles? That way this variable could default to true.

And if we're giving the option of not creating the Fargate role, we should provide the ability to pass in an externally created role ARN.
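
A hedged sketch of that suggestion (the variable names beyond create_fargate_pod_execution_role are assumptions, not the PR's final interface): create the role only when Fargate profiles are actually defined, and fall back to an externally supplied ARN otherwise.

variable "fargate_pod_execution_role_arn" {
  description = "ARN of an externally managed pod execution role, used when this module does not create one"
  type        = string
  default     = null
}

locals {
  # With the length() check, create_fargate_pod_execution_role can default to true
  # without creating an unused role on clusters that define no Fargate profiles.
  create_role = var.create_fargate_pod_execution_role && length(var.fargate_profiles) > 0

  pod_execution_role_arn = local.create_role ? join("", aws_iam_role.eks_fargate_pod.*.arn) : var.fargate_pod_execution_role_arn
}

resource "aws_iam_role" "eks_fargate_pod" {
  count = local.create_role ? 1 : 0

  name = format("%s-fargate", var.cluster_name)

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
    }]
  })
}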

@stale

stale bot commented Sep 4, 2020

This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
To track this PR (even if closed), please open a corresponding issue if one does not already exist.

@stale stale bot added the stale label Sep 4, 2020
@dpiddockcmp
Contributor

dpiddockcmp commented Sep 4, 2020

A vote was kind of taken in #774, where a majority supported going all in on TF 0.13's module-level for_each support instead of looping inside the module. But 0.13 isn't exactly working well with this module at the moment.

This raises design questions for this PR, as the Fargate submodule contains an IAM role.
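
For context, a rough sketch of that TF 0.13 direction (illustrative names only, not what this PR implements): the caller loops over the submodule itself, which is exactly what raises the question of where a shared pod execution IAM role should live.

# Requires Terraform >= 0.13 for module-level for_each.
module "fargate_profile" {
  source   = "./modules/fargate"
  for_each = var.fargate_profiles

  cluster_name         = module.eks.cluster_id
  fargate_profile_name = each.key
  subnet_ids           = var.subnets
  namespace            = each.value.namespace
  labels               = each.value.labels

  # A shared pod execution role would have to be created outside the module
  # (or by one designated instance) and passed in, e.g.:
  # pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
}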

@barryib barryib removed the stale label Sep 22, 2020
@barryib
Member

barryib commented Oct 6, 2020

Thanks @itssimon for working on this. We have a terraform-aws-modules working session this Friday. We'll discuss the direction we want to take with this feature and get back to you soon.

barryib pushed a commit to Polyconseil/terraform-aws-eks that referenced this pull request Oct 25, 2020
barryib pushed a commit to Polyconseil/terraform-aws-eks that referenced this pull request Oct 26, 2020
@barryib barryib mentioned this pull request Oct 26, 2020
@barryib
Member

barryib commented Oct 26, 2020

I'm closing this PR in favor of #1067. That PR takes all the great work of @itssimon and reworks it to incorporate the requested changes.

@barryib barryib closed this Oct 26, 2020
barryib pushed a commit to Polyconseil/terraform-aws-eks that referenced this pull request Nov 2, 2020
barryib pushed a commit to Polyconseil/terraform-aws-eks that referenced this pull request Nov 4, 2020
@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 16, 2022

Successfully merging this pull request may close these issues.

[feature] Fargate for EKS