
Add support for configuring access_config to support aws_eks_access_policy_association #2881

Closed
sidewinder12s opened this issue Jan 19, 2024 · 20 comments

@sidewinder12s

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • No 🛑: please wait to file a request until the functionality is available in the AWS provider
  • Yes ✅: please list the AWS provider version which introduced this functionality

Yes: v5.33.0

Is your request related to a problem? Please describe.

Cannot use the new aws_eks_access_policy_association resources without modifying the cluster config to support them.

Describe the solution you'd like.

Add support for the access_config block on the cluster resource.
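
For reference, the corresponding provider-level block (added in v5.33.0) looks roughly like this; everything outside access_config is a placeholder:

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }

  access_config {
    # one of CONFIG_MAP, API_AND_CONFIG_MAP, API
    authentication_mode                         = "API_AND_CONFIG_MAP"
    bootstrap_cluster_creator_admin_permissions = true
  }
}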

Describe alternatives you've considered.

It can't be configured any other way.

Additional context

The PR adding support to the provider just landed: hashicorp/terraform-provider-aws#35037
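
In the meantime, the raw provider resources can be used directly; a minimal sketch (the cluster, role, and policy references are placeholders):

resource "aws_eks_access_entry" "example" {
  cluster_name  = aws_eks_cluster.example.name
  principal_arn = aws_iam_role.example.arn
}

resource "aws_eks_access_policy_association" "example" {
  cluster_name  = aws_eks_cluster.example.name
  principal_arn = aws_iam_role.example.arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type       = "namespace"
    namespaces = ["default"]
  }
}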

@bryantbiggs
Member

We'll have v20 shortly (should be by Monday)

@sidewinder12s
Author

We'll have v20 shortly (should be by Monday)

Figured! Just didn't see an issue for this bit of change.

@bryantbiggs
Member

I started adding this, but there were some issues that have been patched and will go out in 5.34.0 - so we'll be waiting a bit longer for that release - apologies

@Bharathkumarraju

Bharathkumarraju commented Jan 22, 2024

@bryantbiggs @sidewinder12s thanks. We currently manage EKS cluster access using the aws-auth ConfigMap, like below, and need to migrate to the new cluster access management API. Can we migrate an existing cluster using the Terraform module? All apps would go down, right?

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.16.0"

  cluster_name                          = format("%s-${local.region_prefix}-${local.env}-eks", local.cluster_prefix)
  cluster_endpoint_public_access        = false
  cluster_additional_security_group_ids = [data.terraform_remote_state.network-sg.outputs.eks_cluster_sg_id_sin]
  cluster_version                       = local.cluster_version
  cloudwatch_log_group_kms_key_id       = data.terraform_remote_state.kms-cloudwatch-logs-sin.outputs.kms_key_cloudwatch_logs_sin.arn
  iam_role_additional_policies = {
    # cloudwatch = aws_iam_policy.node_cloudwatch_access.arn
  }

  vpc_id     = data.terraform_remote_state.network.outputs.main_vpc_id_sin
  subnet_ids = data.terraform_remote_state.network.outputs.main_vpc_private_subnets_sin

  # irsa
  enable_irsa = true

  # Fargate profiles use the cluster primary security group so these are not utilized
  create_cluster_security_group = false
  create_node_security_group    = false
  fargate_profile_defaults = {
    iam_role_additional_policies = {
      cloudwatch = aws_iam_policy.node_cloudwatch_access.arn
    }
  }
  fargate_profiles = {
    ab-system = { #fargate default profile - start
      name = "system-${local.env}"
      selectors = [
        {
          namespace = "kube-system"
        },
        {
          namespace = "aws-lb-controller"
        },
        {
          namespace = "external-secrets"
        }
      ]
    },
    test = {
      name = "test-${local.env}"
      selectors = [
        {
          namespace = "test-${local.env}"
        },
        {
          namespace = "amq-test-${local.env}"
        }
      ]
    },
    observability = {
      name = "monitoring"
      selectors = [
        {
          namespace = "monitoring"
        },
        {
          namespace = "fargate-container-insights"
        }
      ]
    }
  }

  manage_aws_auth_configmap = true
  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::${local.account_id}:role/svc-eks-cluster-admin-${local.env}"
      username = "svc-eks-cluster-admin-${local.env}-sin"
      groups   = ["system:masters"]
    },
    {
      rolearn  = "arn:aws:iam::${local.account_id}:role/svc-eks-cluster-readonly-${local.env}"
      username = "svc-eks-cluster-readonly-${local.env}-sin"
      groups   = ["cluster-readonly"]
    },
    {
      rolearn  = "arn:aws:iam::${local.account_id}:role/terraform-provisioner"
      username = "terraform-provisioner"
      groups   = ["system:masters"]
    },
    {
      rolearn  = "arn:aws:iam::${local.account_id}:role/svc-ec2-github-runner"
      username = "svc-ec2-github-runner"
      groups   = ["system:masters"]
    },
    {
      rolearn  = "arn:aws:iam::${local.account_id}:role/svc-github-actions-uat"
      username = "svc-github-actions-uat"
      groups   = ["system:masters"]
    }
  ]
  # External encryption key
  create_kms_key = false
  cluster_encryption_config = {
    resources        = ["secrets"]
    provider_key_arn = data.terraform_remote_state.secrets-kms-keys.outputs.kms_key_secrets_mgr_sin.arn
  }
  tags = local.tags
}

@sidewinder12s
Author

The module does not yet support access_config. You'll likely be able to migrate to it without downtime once it's supported, but that will depend on exactly how it's implemented.

@Bharathkumarraju

The module does not yet support access_config. You'll likely be able to migrate to it without downtime once it's supported, but that will depend on exactly how it's implemented.

Thanks @sidewinder12s for the reply. Looking forward to the module supporting access_config; hopefully it's coming soon?

@sidewinder12s
Author

Yes, Bryant has indicated he is trying to get this support into the next major release once the provider bugs have been cleared, along with a ton of other backlogged requests.

bryantbiggs added this to the v20.0.0 milestone Jan 26, 2024
@AlmirKadric

AlmirKadric commented Jan 31, 2024

Until this feature is released, here is a temporary workaround using bash and the AWS CLI. If you want API_AND_CONFIG_MAP you only need the first block; if you need API you will need both blocks, since you can only change from CONFIG_MAP to API_AND_CONFIG_MAP and from API_AND_CONFIG_MAP to API.

# TODO: replace this with the "access_config" block once EKS module supports it
resource "null_resource" "eks-set-access-auth" {
    triggers = {
        endpoint = module.eks.cluster_endpoint
    }

    provisioner "local-exec" {
        command = <<EOT
            set -o nounset
            set -o errexit

            if [ "$(aws --region aws --region ${local.aws_region} eks describe-cluster --name ${module.eks.cluster_name} --output text --query cluster.accessConfig.authenticationMode)" = "API" ]; then
                echo "Cluster Config already updated (authenticationMode=API)"
                exit 0
            fi

            AUTH_MODE=API_AND_CONFIG_MAP
            if [ "$(aws --region aws --region ${local.aws_region} eks describe-cluster --name ${module.eks.cluster_name} --output text --query cluster.accessConfig.authenticationMode)" != "$${AUTH_MODE}" ]; then
                UPDATE_JSON=$(aws --region ${local.aws_region} eks update-cluster-config --name ${module.eks.cluster_name} --access-config "authenticationMode=$${AUTH_MODE}")
                UPDATE_ID=$(echo $${UPDATE_JSON} | grep '"id": "' | sed -E 's/.*"id": "([a-z0-9-]*)".*/\1/')
                echo "Waiting for update: $${UPDATE_ID}"
                while ! (aws --region ${local.aws_region} eks describe-update --name ${module.eks.cluster_name} --update-id $${UPDATE_ID} | grep -q '"status": "Successful"'); do
                    sleep 10
                done
                echo "Updated Cluster Config authenticationMode=$${AUTH_MODE}"
            fi

            AUTH_MODE=API
            if [ "$(aws --region aws --region ${local.aws_region} eks describe-cluster --name ${module.eks.cluster_name} --output text --query cluster.accessConfig.authenticationMode)" != "$${AUTH_MODE}" ]; then
                UPDATE_JSON=$(aws --region ${local.aws_region} eks update-cluster-config --name ${module.eks.cluster_name} --access-config "authenticationMode=$${AUTH_MODE}")
                UPDATE_ID=$(echo $${UPDATE_JSON} | grep '"id": "' | sed -E 's/.*"id": "([a-z0-9-]*)".*/\1/')
                echo "Waiting for update: $${UPDATE_ID}"
                while ! (aws --region ${local.aws_region} eks describe-update --name ${module.eks.cluster_name} --update-id $${UPDATE_ID} | grep -q '"status": "Successful"'); do
                    sleep 10
                done
                echo "Updated Cluster Config authenticationMode=$${AUTH_MODE}"
            fi
        EOT
        interpreter = ["bash", "-c"]
    }

    depends_on = [
        module.eks.cluster_arn,
        module.eks.fargate_profiles,
        module.eks.eks_managed_node_groups,
        module.eks.self_managed_node_groups,
    ]
}

Hope this helps!

@bryantbiggs
Member

bryantbiggs commented Feb 2, 2024

added in #2858
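
at the module level this surfaces as an authentication mode setting, roughly:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # one of "API" or "API_AND_CONFIG_MAP"
  authentication_mode = "API_AND_CONFIG_MAP"

  # ...
}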

@sidewinder12s
Author

added in #2825

I think the correct issue is #2858

@bryantbiggs
Member

ah yes, you are correct - its been a long week 😅

@sidewinder12s
Author

Is there any example of how you are supposed to specify access_entries with the module? The v20 PR mentions removing the module's complete example because there are other examples, but none of them show how to use it, and it's an any-typed variable, which is super ambiguous.

@bryantbiggs
Member

bryantbiggs commented Feb 2, 2024

ah shoot - yes, I have some. Let me add those in, apologies

and to be clear - you don't have to do anything for nodegroups or Fargate profiles. EKS will manage the access entries for EKS managed nodegroups and Fargate profiles, and the module will handle the entry for self-managed nodegroups
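
Roughly along these lines (a sketch; the role ARN and namespace are placeholders):

access_entries = {
  example = {
    principal_arn = "arn:aws:iam::123456789012:role/example"

    policy_associations = {
      example = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
        access_scope = {
          type       = "namespace"
          namespaces = ["default"]
        }
      }
    }
  }
}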

@sidewinder12s
Author

We create our IAM role outside of both the MNG and cluster modules, so when I was trying to grant that role a node access entry, I had a lot of trouble figuring out whether I can pass it through the cluster module, since all the lookup and try functions appear to produce invalid configs.

@bryantbiggs
Member

We create our IAM role outside of both the MNG and cluster modules, so when I was trying to grant that role a node access entry, I had a lot of trouble figuring out whether I can pass it through the cluster module, since all the lookup and try functions appear to produce invalid configs.

You would just pass the IAM role used by the nodegroup as usual - EKS managed nodegroups will automatically create the access entry when the authentication mode is API or API_AND_CONFIG_MAP. If the role is already there and you are migrating from the aws-auth ConfigMap, EKS will again migrate it automatically for you

@sidewinder12s
Author

Am I correct in assuming you can't pass a node access entry through the current logic? (Say I was not using managed node groups at all)

@bryantbiggs
Member

If you are using a self-managed nodegroup, or say a role used by nodes created by Karpenter - those are two areas where you would create an access entry for nodes. If you are using the self-managed nodegroup sub-module or the Karpenter sub-module, you would either let them create the IAM role or provide an existing/external role, and the modules will create the access entry for you. If you are not using any of that, you can pass it in via the generic access_entries argument, but I need to investigate #2896 further
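
Something like this for that last case (a sketch, assuming an externally created role):

access_entries = {
  karpenter_node = {
    principal_arn = aws_iam_role.karpenter_node.arn
    # node entries must not set kubernetes_groups or policy associations
    type = "EC2_LINUX"
  }
}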

@sidewinder12s
Author

sidewinder12s commented Feb 5, 2024

I commented on the PR linked to that issue; I think it's correct, at least for non-STANDARD access entry types.

Separately, I appear to be running into another bug in the EKS API, where it thinks I'm setting kubernetes_groups on my EC2_LINUX type access entry and throws an error:

The specified kubernetesGroups is invalid: setting kubernetesGroups is not allowed when the type is "EC2_LINUX".

At least looking at CloudTrail, it didn't look like Terraform even passed that parameter, so I don't think it's a Terraform or module issue.


github-actions bot commented Mar 7, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Mar 7, 2024