
eks: After upgrading to 2.80, imported cluster resources fail to deploy #25835

Closed
okoskine opened this issue Jun 2, 2023 · 18 comments · Fixed by #25908
Assignees
Labels
@aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service); closed-for-staleness (This issue was automatically closed because it hadn't received any attention in a while.); guidance (Question that needs advice or information.); response-requested (Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.)

Comments

@okoskine

okoskine commented Jun 2, 2023

Describe the bug

After upgrading CDK to 2.80, the EKS cluster creation role can no longer be assumed by separate handlers in stacks that import the EKS cluster, as explained in #25674.

The suggested fix of importing the whole kubectlProvider results in a "Modifying service token is not allowed." error when trying to deploy the stack.

Expected Behavior

Kubernetes resources in the importing stack should still deploy in 2.80+ after adjusting the cluster importing as mentioned in #25674

Current Behavior

The cdk8s chart in the importing stack fails to deploy after the upgrade.

StackNameRedacted: creating CloudFormation changeset...
2:20:09 PM | UPDATE_FAILED | Custom::AWSCDK-EKS-KubernetesResource | KubeClusterCommonK8sChartA8489958
Modifying service token is not allowed.

❌ StackNameRedacted failed: Error: The stack named StackNameRedacted failed to deploy: UPDATE_ROLLBACK_COMPLETE: Modifying service token is not allowed.
at FullCloudFormationDeployment.monitorDeployment (/workdir/node_modules/aws-cdk/lib/index.js:397:10236)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.deployStack2 [as deployStack] (/workdir/node_modules/aws-cdk/lib/index.js:400:149977)
at async /workdir/node_modules/aws-cdk/lib/index.js:400:135508

❌ Deployment failed: Error: The stack named StackNameRedacted failed to deploy: UPDATE_ROLLBACK_COMPLETE: Modifying service token is not allowed.
at FullCloudFormationDeployment.monitorDeployment (/workdir/node_modules/aws-cdk/lib/index.js:397:10236)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.deployStack2 [as deployStack] (/workdir/node_modules/aws-cdk/lib/index.js:400:149977)
at async /workdir/node_modules/aws-cdk/lib/index.js:400:135508

The stack named StackNameRedacted failed to deploy: UPDATE_ROLLBACK_COMPLETE: Modifying service token is not allowed.

Reproduction Steps

Export (app A, stack A)

const kubectlProvider = cluster.stack.node
  .findChild('@aws-cdk--aws-eks.KubectlProvider') as eks.KubectlProvider

new CfnOutput(scope, 'KubectlProviderRole', {
  exportName: 'KubectlRoleArn',
  value: kubectlProvider.roleArn,
});

new CfnOutput(scope, 'KubectlProviderHandlerRole', {
  exportName: 'KubectlHandlerRoleArn',
  value: kubectlProvider.handlerRole.roleArn,
});

const kubectlHandler = kubectlProvider.node.findChild('Handler') as lambda.IFunction;

new CfnOutput(scope, 'KubectlProviderHandler', {
  exportName: 'KubectlHandlerArn',
  value: kubectlHandler.functionArn,
});

// tried also
// new CfnOutput(scope, 'KubectlProviderHandler', {
//   exportName: 'KubectlHandlerArn',
//   value: kubectlProvider.serviceToken,
// });

Import (app B, stack B)

  const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(
    scope,
    'KubectlProvider',
    {
      functionArn: Fn.importValue('KubectlHandlerArn'),
      handlerRole: iam.Role.fromRoleArn(
        scope,
        'HandlerRole',
        Fn.importValue('KubectlHandlerRoleArn')
      ),
      kubectlRoleArn: Fn.importValue('KubectlRoleArn'),
    }
  );

  const openIdConnectProvider = iam.OpenIdConnectProvider.fromOpenIdConnectProviderArn(
    scope,
    'KubeOidcProvider',
    Fn.importValue('KubernetesOidcProviderArn')
  );

  const kubectlSecurityGroupId = Fn.importValue('KubernetesControlPlaneSGId');
  return eks.Cluster.fromClusterAttributes(scope, 'KubeCluster', {
    clusterName,
    kubectlPrivateSubnetIds: Fn.split(',', Fn.importValue('KubernetesPrivateSubnetIds')),
    kubectlSecurityGroupId: kubectlSecurityGroupId,
    clusterSecurityGroupId: kubectlSecurityGroupId,
    vpc,
    openIdConnectProvider,
    kubectlLayer: new KubectlV25Layer(scope, 'kubectl'),
    kubectlProvider: kubectlProvider,
  });

Then trying to deploy the (already existing) stack B fails.

Possible Solution

No response

Additional Information/Context

Both stacks have been in use for a long time, and the only change to the cluster importing was replacing kubectlRoleArn with kubectlProvider. They are part of bigger CDK apps, and this problem affects multiple stacks in multiple importing apps.

CDK CLI Version

2.81.0 (build bd920f2)

Framework Version

No response

Node.js Version

v18.14.2

OS

Linux

Language

Typescript

Language Version

TypeScript (3.9.10)

Other information

Came across the following earlier issue with similar symptoms from back when eks was experimental in CDK: #6129.

@okoskine okoskine added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Jun 2, 2023
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Jun 2, 2023
@peterwoodworth
Contributor

Could you post the cdk diff of the stack that fails to deploy, please? Full stack examples that I can copy and paste would be helpful too. I'm not sure why the service token would be changing, which looks like what's causing the failure.

@peterwoodworth peterwoodworth added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. p1 and removed needs-triage This issue or PR still needs to be triaged. labels Jun 2, 2023
@okoskine
Author

okoskine commented Jun 3, 2023

Here's a cdk diff of the stack:

Stack StackB
Resources
[-] AWS::CloudFormation::Stack StackBKubeClusterB9F6EB70-KubectlProvider.NestedStack/StackBKubeClusterB9F6EB70-KubectlProvider.NestedStackResource StackBKubeClusterB9F6EB70KubectlProviderNestedStackStackBKubeClusterB9F6EB70KubectlProviderNestedStackResourceFC006381 destroy
[~] Custom::AWSCDK-EKS-KubernetesResource KubeCluster/CommonK8sChart/Resource KubeClusterCommonK8sChartA8489958 
 ├─ [~] RoleArn
 │   └─ [~] .Fn::ImportValue:
 │       ├─ [-] KubernetesMastersRoleArn
 │       └─ [+] KubectlRoleArn
 └─ [~] ServiceToken
     ├─ [-] Removed: .Fn::GetAtt
     └─ [+] Added: .Fn::ImportValue

KubernetesMastersRoleArn was the name of the old output from the main EKS stack for the kubectl role (cluster.kubectlRole.roleArn).

In more detail from synthesized output the ServiceToken used to be:

      ServiceToken:
        Fn::GetAtt:
          - StackBKubeClusterB9F6EB70KubectlProviderNestedStackStackBKubeClusterB9F6EB70KubectlProviderNestedStackResourceFC006381
          - Outputs.StackBStackBKubeClusterB9F6EB70KubectlProviderframeworkonEvent2980E169Arn

And now is:

      ServiceToken:
        Fn::ImportValue: KubectlHandlerArn

To me this seems to be because ImportedKubectlProvider uses the handler ARN as the service token (this.serviceToken = props.functionArn;).
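A toy model of why the deployment fails (this is not CloudFormation's actual implementation, and the logical names below are made up): CloudFormation refuses any update that changes a custom resource's ServiceToken, and switching from the nested provider's Fn::GetAtt to an Fn::ImportValue of the handler ARN is exactly such a change:

```typescript
// Toy comparison of a custom resource's ServiceToken between two synthesized
// templates. CloudFormation rejects an update whenever this token changes.
type ServiceToken = { 'Fn::GetAtt': string[] } | { 'Fn::ImportValue': string };

function serviceTokenChanged(oldToken: ServiceToken, newToken: ServiceToken): boolean {
  return JSON.stringify(oldToken) !== JSON.stringify(newToken);
}

// 2.79 template: the token referenced the nested kubectl provider's framework onEvent output.
const before: ServiceToken = {
  'Fn::GetAtt': ['KubectlProviderNestedStackResource', 'Outputs.frameworkonEventArn'],
};

// 2.80 import: the token references the exported kubectl handler ARN directly.
const after: ServiceToken = { 'Fn::ImportValue': 'KubectlHandlerArn' };

console.log(serviceTokenChanged(before, after)); // true -> "Modifying service token is not allowed."
```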

If you need any more information please ask. I can also try to reproduce this minimally but it might take some time.

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Jun 3, 2023
@peterwoodworth
Contributor

What are you doing in stack B to create resources? I'm interested in reproducing this and finding a workaround, but I'm not sure what exactly you're doing or what changed in stack B; the snippet posted only imports resources.

@peterwoodworth
Contributor

The service token change may have a workaround, or there could be an alternate workaround available; I'm not sure how exactly to reach this state, though.

@peterwoodworth peterwoodworth added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Jun 6, 2023
@okoskine
Author

okoskine commented Jun 6, 2023

Ok, I reproduced this with simple stacks. Steps:

  • deploy A with 2.79.0
  • deploy B (2.79.0 variant) with 2.79.0
  • deploy A with 2.80.0
  • deploy B (2.80.0 variant) with 2.80.0 fails with error:
1:47:25 PM | UPDATE_FAILED        | Custom::AWSCDK-EKS-KubernetesResource | ImportedClustermanifestmanifestD43ADCFB
Modifying service token is not allowed.
 ❌  KubetestStackB failed: Error: The stack named KubetestStackB failed to deploy: UPDATE_ROLLBACK_COMPLETE: Modifying service token is not allowed.
    at FullCloudFormationDeployment.monitorDeployment (/home/onni/import1/node_modules/aws-cdk/lib/index.js:397:10236)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.deployStack2 [as deployStack] (/home/onni/import1/node_modules/aws-cdk/lib/index.js:400:149585)
    at async /home/onni/import1/node_modules/aws-cdk/lib/index.js:400:135508
 ❌ Deployment failed: Error: The stack named KubetestStackB failed to deploy: UPDATE_ROLLBACK_COMPLETE: Modifying service token is not allowed.
    at FullCloudFormationDeployment.monitorDeployment (/home/onni/import1/node_modules/aws-cdk/lib/index.js:397:10236)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.deployStack2 [as deployStack] (/home/onni/import1/node_modules/aws-cdk/lib/index.js:400:149585)
    at async /home/onni/import1/node_modules/aws-cdk/lib/index.js:400:135508

Stacks:
A

export class KubetestStackA extends Stack {
  output(name: string, value: string): void {
    new CfnOutput(this, name, { exportName: name, value: value });
  }
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const cluster = new Cluster(this, 'TempCluster', {
      version: KubernetesVersion.V1_25,
      kubectlLayer: new KubectlV25Layer(this, 'kubectl'),
      clusterName: "TempCluster",
    });

    this.output('TempOidcProvider', cluster.openIdConnectProvider.openIdConnectProviderArn)
    this.output('TempSg', cluster.clusterSecurityGroupId)
    this.output('TempKubectlRole', cluster.kubectlRole?.roleArn as string)
    this.output('TempKubectlLambdaRole', cluster.kubectlLambdaRole?.roleArn as string)

    const kubectlProvider = this.node.findChild('@aws-cdk--aws-eks.KubectlProvider') as KubectlProvider;
    const kubectlHandler = kubectlProvider.node.findChild('Handler') as IFunction;
    this.output('TempKubectlHandler', kubectlHandler.functionArn)
  }
}

B 2.79.0 variant

export class KubetestStackB extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = Vpc.fromLookup(this, 'Vpc', { vpcName: 'KubetestStackA/TempCluster/DefaultVpc' });
    const openIdConnectProvider = OpenIdConnectProvider.fromOpenIdConnectProviderArn(this,'KubeOidcProvider', Fn.importValue('TempOidcProvider'));
    const sgId = Fn.importValue('TempSg');

    const cluster = Cluster.fromClusterAttributes(this, 'ImportedCluster', {
      clusterName: 'TempCluster',
      clusterSecurityGroupId: sgId,
      vpc,
      openIdConnectProvider,
      kubectlLayer: new KubectlV25Layer(this, 'kubectl'),
      kubectlRoleArn: Fn.importValue('TempKubectlRole'),
    });

    cluster.addManifest("manifest", {
      apiVersion: 'v1',
      kind: 'Namespace',
      metadata: {
        name: 'temp-namespace',
      },
    })
  }
}

B 2.80.0 variant

export class KubetestStackB extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = Vpc.fromLookup(this, 'Vpc', { vpcName: 'KubetestStackA/TempCluster/DefaultVpc' });
    const openIdConnectProvider = OpenIdConnectProvider.fromOpenIdConnectProviderArn(this,'KubeOidcProvider', Fn.importValue('TempOidcProvider'));
    const sgId = Fn.importValue('TempSg');

    const kubectlProvider = KubectlProvider.fromKubectlProviderAttributes(
        this,
        'KubectlProvider',
        {
          functionArn: Fn.importValue('TempKubectlHandler'),
          handlerRole: Role.fromRoleArn(
              this,
              'HandlerRole',
              Fn.importValue('TempKubectlLambdaRole')
          ),
          kubectlRoleArn: Fn.importValue('TempKubectlRole'),
        }
    );

    const cluster = Cluster.fromClusterAttributes(this, 'ImportedCluster', {
      clusterName: 'TempCluster',
      clusterSecurityGroupId: sgId,
      vpc,
      openIdConnectProvider,
      kubectlLayer: new KubectlV25Layer(this, 'kubectl'),
      kubectlProvider: kubectlProvider,
    });

    cluster.addManifest("manifest", {
      apiVersion: 'v1',
      kind: 'Namespace',
      metadata: {
        name: 'temp-namespace',
        labels: {
          'test-label': 'test-value'
        }
      },
    })
  }
}

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Jun 6, 2023
@pahud
Contributor

pahud commented Jun 8, 2023

To me this seems to be because ImportedKubectlProvider uses the handler arn as the service token (this.serviceToken = props.functionArn;)

@okoskine Yes I think you are right. This might be the root cause. I am investigating this too. Will update here if I find anything and create a PR if necessary.

@pahud pahud self-assigned this Jun 8, 2023
@pahud
Contributor

pahud commented Jun 8, 2023

This is my testing code below:

    const vpc = getOrCreateVpc(this);
    const kubectlLayer = new KubectlLayer(this, 'KubectlLayer');
    const cluster = new eks.Cluster(this, 'EksCluster', {
      vpc,
      version: eks.KubernetesVersion.V1_26,
      kubectlLayer,
      defaultCapacity: 0,
    });

    // EKS service role
    new CfnOutput(this, 'ClusterRole', { value: cluster.role.roleArn });
    // EKS cluster creation role
    new CfnOutput(this, 'ClusterAdminRole', { value: cluster.adminRole.roleArn });
    // Kubectl Role
    new CfnOutput(this, 'KubectlRole', { value: cluster.kubectlRole!.roleArn });
    // Kubectl Lambda Role
    new CfnOutput(this, 'KubectlLambdaRole', { value: cluster.kubectlLambdaRole!.roleArn });

    // import this cluster
    const kubectlProvider = cluster.stack.node.tryFindChild('@aws-cdk--aws-eks.KubectlProvider') as eks.KubectlProvider
    const kubectlHandler = kubectlProvider.node.tryFindChild('Handler') as lambda.IFunction;

    // import the kubectl provider
    const importedKubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', {
      functionArn: kubectlHandler.functionArn,
      kubectlRoleArn: cluster.kubectlRole!.roleArn,
      handlerRole: kubectlProvider.handlerRole,
    });

    const importedCluster = eks.Cluster.fromClusterAttributes(this, 'ImportedCluster', {
      clusterName: cluster.clusterName,
      vpc,
      kubectlLayer,
      kubectlRoleArn: cluster.kubectlRole?.roleArn,
      kubectlProvider: importedKubectlProvider,
    });

    importedCluster.addManifest("manifest", {
      apiVersion: 'v1',
      kind: 'Namespace',
      metadata: {
        name: 'temp-namespace',
        labels: {
          'test-label': 'test-value'
        }
      },
    });

The custom resource just can't complete

9:05:43 AM | CREATE_IN_PROGRESS | Custom::AWSCDK-EKS-KubernetesResource | ImportedCluster/ma...t/Resource/Default

I checked the CloudWatch logs for the handler function and noticed the ServiceToken is pointing to the provider handler function rather than the ProviderframeworkonEvent handler, which caused this error.

{
    "RequestType": "Create",
    "ServiceToken": "arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:eks-test7-awscdkawseksKubectlProvi-Handler886CB40B-rzkXmaxGZMvF",
    "ResponseURL": "...",
    "StackId": "arn:aws:cloudformation:us-east-1:XXXXXXXXXXXX:stack/eks-test7/5eff0170-05f9-11ee-b6bf-0a84830639f1",
    "RequestId": "...",
    "LogicalResourceId": "ImportedClustermanifestmanifestD43ADCFB",
    "ResourceType": "Custom::AWSCDK-EKS-KubernetesResource",
    "ResourceProperties": {
        "ServiceToken": "arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:eks-test7-awscdkawseksKubectlProvi-Handler886CB40B-rzkXmaxGZMvF",
        "PruneLabel": "aws.cdk.eks/prune-c86af691495e8df1f111a4cae8b20f7138e0ab7d23",
        "ClusterName": "EksClusterFAB68BDB-03457c6509d54e34aa1241d08c7e4925",
        "Manifest": "[{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"name\":\"temp-namespace\",\"labels\":{\"aws.cdk.eks/prune-c86af691495e8df1f111a4cae8b20f7138e0ab7d23\":\"\",\"test-label\":\"test-value\"}}}]",
        "RoleArn": "arn:aws:iam::XXXXXXXXXXXX:role/eks-test7-EksClusterCreationRole75AABE42-9MKBLO7RS2JR"
    }
}

I will try to submit a PR for that.

@mburket

mburket commented Jun 9, 2023

Hi, I ran into the same "Modifying service token is not allowed" error for a Helm chart installation using the imported cluster.

@okoskine
Author

@pahud The documentation change in the PR does not help in this case.

Note that we would have no problem importing the cluster in a new 2.80 stack. The problem is how to keep an imported cluster resource working in an existing stack when it was already created before 2.80 and did not have the full kubectlProvider imported.

Currently I don't know of a better workaround than downgrading to 2.79.

@dancmeyers

Just commenting that we are in the same situation as @okoskine. All our existing stacks fail because of the reported "Modifying service token is not allowed." error. I did wonder if a possible solution would be to change the ID of the imported cluster, thus generating an all-new set of resources instead of attempting to change the existing set. Unfortunately, in that case CDK thinks it has to create all-new resources on the 'new' cluster, but they already exist, so you get the error

Error from server (AlreadyExists): error when creating "/tmp/manifest.yaml": ${RESOURCE_TYPE} "${RESOURCE_ID}" already exists

where ${RESOURCE_TYPE} and ${RESOURCE_ID} are replaced by their actual values depending on what you're trying to deploy (deployment "app", or whatever).

It feels like the solution here might be something along the lines of searching for existing resources, and allowing an update on 'create' if they already exist. But then you run the risk of resources overwriting each other, so maybe you need a label to allow that to happen. But even if we did have that already, create-before-delete would mean CloudFormation would try to tidy up the 'old' resources and delete the ones it had just updated...

@mergify mergify bot closed this as completed in #25908 Jun 12, 2023
mergify bot pushed a commit that referenced this issue Jun 12, 2023
Imported EKS clusters have an invalid service token and can't deploy new k8s manifests or Helm charts. This PR fixes that issue.

- [x] Update README and doc string. `functionArn` should be the custom resource provider's service token rather than the kubectl provider lambda arn. No breaking change in this PR.
- [x] Add a new integration test to ensure the imported cluster can always create manifests and helm charts.

Closes #25835

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*

@dancmeyers

@okoskine Hope you don't mind me doing the duplication to flag this up as still a problem :)

@iliapolo
Contributor

@dancmeyers you are correct, re-opening. I am working on adding the correct workaround for this. Stay tuned.

@iliapolo iliapolo reopened this Jun 13, 2023
@dancmeyers

Thank you :) You've probably seen this already, but I'm copying my comment from there over for visibility:

Is there any way to lock this down with tags? I was trying to work out how to do that. My idea was to edit the kubectlRole policy to allow assumption if the principal doing the assuming had a specific tag, and then set that tag on the roles the lambdas in other stacks run as. The problem is I've mostly used tags for identification before, not access control, so I wasn't sure of the logic in IAM there, let alone translating it into CFn and/or CDK. Especially as there's role assumption to execute the imported stack lambda anyway, so the principal is an assumed-role, not a role (I think, from what I was seeing in the error messages?), and I'm not sure how to link back to tags on the originating role.

@iliapolo
Contributor

iliapolo commented Jun 13, 2023

I'm not sure that tags are the way to go, especially from a security perspective. An immediate workaround is to add the role of the existing kubectl provider (in the stack doing the import) to the trust policy of the cluster's creation role. I.e., in the stack that creates the cluster:

cluster.adminRole.assumeRolePolicy?.addStatements(<TBD>)

Then you just need to redeploy the cluster stack and everything should work. Note that since the problem only shows up for existing resources, all the information is already known. For new resources in new stacks, the approach of importing the entire kubectlProvider should work, and is recommended.

Does that make sense?

@dancmeyers

Yeah, it makes sense. I've got quite a few stacks to go through and get the roles from, but it's doable. Can't answer for @okoskine though.

@iliapolo
Contributor

iliapolo commented Jun 13, 2023

Agreed it's a bit tedious, but it is also the solution that requires the fewest deployments (only one, for the stack that creates the cluster). Another option is to override the kubectlLambdaRole property of the imported cluster. This would require many deployments (one per stack that imports the cluster), but would leave the cluster stack untouched.

I've updated the issue to include the workaround for this scenario. @okoskine let us know if there are further concerns around this. Thanks!

Recopying here for further visibility


For imported cluster in existing stacks to continue to work, you will need to add the role of the kubectl provider function to the trust policy of the cluster's admin role:

// do this for each stack where you import the original cluster.
this.cluster.adminRole.assumeRolePolicy?.addStatements(new iam.PolicyStatement({
  actions: ['sts:AssumeRole'],
  principals: [iam.Role.fromRoleArn(this, 'KubectlHandlerImportStackRole', 'arn-of-kubectl-provider-function-in-import-stack')]
}));

To locate the relevant ARN, find the Lambda function in the import stack that has the "onEvent handler for EKS kubectl resource provider" description and use its role ARN. Redeploy the cluster stack and everything should work, no changes to the import stack needed.
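For example (a sketch, assuming you have dumped `aws lambda list-functions` output to JSON; the sample function names and ARNs below are made up), filtering on that exact description string recovers the role:

```typescript
// Sketch: pick the kubectl provider's onEvent handler out of an
// `aws lambda list-functions` dump and return its execution role ARN.
// Field names follow the Lambda API; the sample data is hypothetical.
interface FunctionSummary {
  FunctionName: string;
  Description?: string;
  Role: string; // execution role ARN
}

function findKubectlOnEventRole(functions: FunctionSummary[]): string | undefined {
  return functions.find(
    (f) => f.Description === 'onEvent handler for EKS kubectl resource provider',
  )?.Role;
}

const dump: FunctionSummary[] = [
  {
    FunctionName: 'Unrelated',
    Description: 'something else',
    Role: 'arn:aws:iam::111111111111:role/Other',
  },
  {
    FunctionName: 'StackB-KubectlProvider-framework-onEvent',
    Description: 'onEvent handler for EKS kubectl resource provider',
    Role: 'arn:aws:iam::111111111111:role/StackB-KubectlProviderRole',
  },
];

console.log(findKubectlOnEventRole(dump)); // arn:aws:iam::111111111111:role/StackB-KubectlProviderRole
```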

Alternatively, you can do the reverse and specify the kubectlLambdaRole property when importing the cluster to point to the role of the original kubectl provider:

const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
  // ...plus the other cluster attributes you already pass (clusterName, vpc, etc.)
  kubectlRoleArn: '',
  kubectlLambdaRole: iam.Role.fromRoleArn(this, 'KubectlLambdaRole', 'arn-of-kubectl-provider-function-role-in-cluster-stack'),
});

This will make it so the role of the new provider will be the same as the original provider, and as such, is already trusted by the creation role of the cluster.
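Both workarounds amount to making the same trust check pass. A toy model (not the real IAM evaluation engine; all ARNs below are hypothetical):

```typescript
// Toy trust check: the cluster's creation/admin role only allows sts:AssumeRole
// from principals listed in its assume-role (trust) policy.
interface TrustPolicy {
  trustedRoleArns: string[];
}

function canAssume(policy: TrustPolicy, callerRoleArn: string): boolean {
  return policy.trustedRoleArns.includes(callerRoleArn);
}

const clusterStackLambdaRole = 'arn:aws:iam::111111111111:role/ClusterStack-KubectlLambdaRole';
const importStackProviderRole = 'arn:aws:iam::111111111111:role/StackB-KubectlProviderRole';

// Originally only the cluster stack's own kubectl lambda role is trusted.
const adminRoleTrust: TrustPolicy = { trustedRoleArns: [clusterStackLambdaRole] };

console.log(canAssume(adminRoleTrust, importStackProviderRole)); // false -> kubectl calls fail

// Workaround 1: add the import stack's provider role to the trust policy.
adminRoleTrust.trustedRoleArns.push(importStackProviderRole);
console.log(canAssume(adminRoleTrust, importStackProviderRole)); // true

// Workaround 2 instead makes the import stack's provider run as the
// already-trusted role, so no trust-policy change is needed.
console.log(canAssume(adminRoleTrust, clusterStackLambdaRole)); // true
```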

@iliapolo iliapolo added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. guidance Question that needs advice or information. and removed bug This issue is a bug. p1 labels Jun 13, 2023
@github-actions

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

@github-actions github-actions bot added closing-soon This issue will automatically close in 4 days unless further comments are made. closed-for-staleness This issue was automatically closed because it hadn't received any attention in a while. and removed closing-soon This issue will automatically close in 4 days unless further comments are made. labels Jun 15, 2023