
StackMapping changes getting reverted for pushed resolver functions #550

Closed
5 tasks done
lazpavel opened this issue Jun 14, 2022 · 1 comment

Comments

@lazpavel
Contributor

lazpavel commented Jun 14, 2022

Before opening, please confirm:

  • I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
  • I have searched for duplicate or closed issues.
  • I have read the guide for submitting bug reports.
  • I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
  • I have removed any sensitive information from my code snippets and submission.

How did you install the Amplify CLI?

npm

If applicable, what version of Node.js are you using?

v14.19.1

Amplify CLI Version

8.5.0

What operating system are you using?

macOS

Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.

no

Amplify Categories

api

Amplify Commands

push

Describe the bug

transform.conf.json StackMapping entries get reverted for already pushed resources.
doc link: https://docs.amplify.aws/cli/graphql/override/#place-appsync-resolvers-in-custom-named-stacks

This makes it impossible to recover when the resources were once pushed successfully, but later changes to the functions push the nested stack over the 100000 bytes limit.

As a workaround, one can completely remove the resolver function, run amplify push, add the resolver function back and map it to a new nested stack, then run amplify push again.
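A rough sketch of that workaround as a sequence of steps (the resolver and stack names below are placeholders, not values from this project):

  1. remove the affected resolver function from the project
  2. amplify push
  3. add the resolver function back and map it to a new nested stack in transform.conf.json, e.g. "MyResolverFn": "MyNewStack"
  4. amplify push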

The method that reverts the StackMapping content can be found here:

async function ensureMissingStackMappings(config: ProjectOptions) {
  const { currentCloudBackendDirectory } = config;
  let transformOutput = undefined;

  if (currentCloudBackendDirectory) {
    const missingStackMappings = {};
    transformOutput = await _buildProject(config);
    const copyOfCloudBackend = await readFromPath(currentCloudBackendDirectory);
    const stackMapping = transformOutput.stackMapping;

    if (copyOfCloudBackend && copyOfCloudBackend.build && copyOfCloudBackend.build.stacks) {
      // leave the custom stack alone. Don't split them into separate stacks
      const customStacks = Object.keys(copyOfCloudBackend.stacks || {});
      const stackNames = Object.keys(copyOfCloudBackend.build.stacks).filter(stack => !customStacks.includes(stack));

      // We walk through each of the stacks that were deployed in the most recent deployment.
      // If we find a resource that was deployed into a different stack than it should have
      // we make a note of it and include it in the missing stack mapping.
      for (const stackFileName of stackNames) {
        const stackName = stackFileName.slice(0, stackFileName.length - path.extname(stackFileName).length);
        const lastDeployedStack = JSON.parse(copyOfCloudBackend.build.stacks[stackFileName]);

        if (lastDeployedStack) {
          const resourceIdsInStack = Object.keys(lastDeployedStack.Resources);
          for (const resourceId of resourceIdsInStack) {
            if (stackMapping[resourceId] && stackName !== stackMapping[resourceId]) {
              missingStackMappings[resourceId] = stackName;
            }
          }

          const outputIdsInStack = Object.keys(lastDeployedStack.Outputs || {});
          for (const outputId of outputIdsInStack) {
            if (stackMapping[outputId] && stackName !== stackMapping[outputId]) {
              missingStackMappings[outputId] = stackName;
            }
          }
        }
      }

      // We then do the same thing with the root stack.
      const lastDeployedStack = JSON.parse(copyOfCloudBackend.build[config.rootStackFileName]);
      const resourceIdsInStack = Object.keys(lastDeployedStack.Resources);
      for (const resourceId of resourceIdsInStack) {
        if (stackMapping[resourceId] && 'root' !== stackMapping[resourceId]) {
          missingStackMappings[resourceId] = 'root';
        }
      }

      const outputIdsInStack = Object.keys(lastDeployedStack.Outputs || {});
      for (const outputId of outputIdsInStack) {
        if (stackMapping[outputId] && 'root' !== stackMapping[outputId]) {
          missingStackMappings[outputId] = 'root';
        }
      }

      // If there are missing stack mappings, we write them to disk.
      if (Object.keys(missingStackMappings).length) {
        let conf = await loadConfig(config.projectDirectory);
        conf = { ...conf, StackMapping: { ...getOrDefault(conf, 'StackMapping', {}), ...missingStackMappings } };
        await writeConfig(config.projectDirectory, conf);
      }
    }
  }

  return transformOutput;
}
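A minimal sketch with hypothetical values of why the user's edit is lost: the mappings derived from the last deployed stacks (missingStackMappings) are spread last in the merge on the final lines above, so the stack a resolver was last deployed into always overwrites the value just edited in transform.conf.json:

// Hypothetical values, mirroring the merge in ensureMissingStackMappings above
const conf = { StackMapping: { ListTestResolver: 'UpdateTestStack' } }; // user's edited transform.conf.json
const missingStackMappings = { ListTestResolver: 'CustomTestStack' };   // stack the resolver was last deployed into
const merged = { ...conf, StackMapping: { ...conf.StackMapping, ...missingStackMappings } };
console.log(merged.StackMapping.ListTestResolver); // 'CustomTestStack' -> the user's change is reverted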

Expected behavior

StackMapping definitions need to be reflected in the generated nested stack templates even after the resources have been pushed.

Reproduction steps

  1. amplify add api
  2. set the schema to
type Test @model {
  id: ID!
  name: String!
}
  3. add StackMapping to transform.conf.json
    "StackMapping": {
        "UpdateTestResolver": "CustomTestStack",
        "CreateTestResolver": "CustomTestStack",
        "ListTestResolver": "CustomTestStack",
        "GetTestResolver": "CustomTestStack"
    }
  4. run amplify push
  5. update the transform.conf.json StackMapping to
    "StackMapping": {
        "UpdateTestResolver": "CustomTestStack",
        "CreateTestResolver": "CustomTestStack",
        "ListTestResolver": "UpdateTestStack",
        "GetTestResolver": "UpdateTestStack"
    }
  6. running amplify api gql-compile or amplify push will revert the StackMapping content
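Concretely, after step 6 the StackMapping in transform.conf.json ends up back at the step-3 content, because each resolver is remapped to the stack it was last deployed into:

    "StackMapping": {
        "UpdateTestResolver": "CustomTestStack",
        "CreateTestResolver": "CustomTestStack",
        "ListTestResolver": "CustomTestStack",
        "GetTestResolver": "CustomTestStack"
    }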

GraphQL schema(s)

# Put schemas below this line

type Test @model {
  id: ID!
  name: String!
}

Log output

# Put your logs below this line


Additional information

No response

@phani-srikar
Contributor

I've tried to reproduce this using the latest 10.7.3 version of the CLI and did not run into this issue. Using amplify api gql-compile successfully generates the newly added UpdateTestStack stack and removes the Get and List resolvers from the CustomTestStack stack. However, I ran into another issue that we're tracking: moving resolvers between two CFN stacks causes a race condition during deploy, which makes the stack updates fail. Refer to this comment.
