A recommended way to split up large APIs? #593
Hi, thanks for the report. AppSync can generate a lot of resources, and the use of pipeline resolvers can indeed make it worse.
Thanks for the reply. We have been using the split-stacks plugin for years already. We are in fact hitting another limit: you cannot update more than 2,500 resources in a single operation. This is only subtly documented in the CloudFormation docs, and it was confirmed through our contacts with AWS Solutions Architects. The count includes resources in all nested stacks created by the split-stacks plugin.
On the other hand, our 800 resolvers, with a rough multiplier of 3x, translate to ~2,400 resources, so we are approaching the 2,500 limit on our own. It would be really nice if we could reuse an existing AppSync API.
In the Serverless Framework Slack channel, the suggestion was to split the API into multiple services with Serverless Framework Compose, using stack outputs as inter-service dependencies and attaching AppSync resolvers from child services to stay under the 2,500 limit. Our resolvers are really thin layers of CRUD operations; even at 800 of them, this is not a large API. We deliberately keep our graph relatively flat so that the whole product stays thin. All things considered, we are aware of the general advice to split up services when they get too large. Is it possible to add back support for reusing an existing AppSync API in v2?

EDIT: Link to the Slack thread: https://serverless-contrib.slack.com/archives/CA4QT5VU3/p1681202794224769
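For reference, a minimal sketch of the Compose setup suggested in Slack. The service names, paths, and the `apiId` parameter are hypothetical; the pattern is that one service creates the AppSync API and exports its ID as a stack output, which Compose then passes to the services that attach resolvers:

```yaml
# serverless-compose.yml (service names and paths are hypothetical)
services:
  core-api:
    path: core-api            # creates the AppSync API; exports apiId as a stack output

  orders-resolvers:
    path: orders-resolvers    # attaches a subset of resolvers to the core API
    params:
      apiId: ${core-api.apiId}   # consumed in the child service as ${param:apiId}
```

This keeps each child stack well under the per-operation resource ceiling, at the cost of coordinating deploy order through Compose.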
Our current workaround is to deploy multiple times, in stages like this:
During steps 2 and 3, since CloudFormation doesn't see all of the resolvers, it deletes the existing ones from our current stack. This multi-stage deploy inevitably creates some downtime, in contrast to our zero-downtime commitment, which is suboptimal. I honestly hope we can attach/update resolvers on an existing AppSync API, so that our CI/CD can run without downtime.
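One way to sidestep the delete-and-recreate churn during staged deploys is to attach resolvers to the existing API outside CloudFormation entirely, via the AWS CLI. A sketch, where the API ID, type/field names, data source, and code file are all placeholders:

```shell
# Attach an APPSYNC_JS unit resolver to an existing AppSync API without
# going through CloudFormation. All identifiers below are placeholders.
aws appsync create-resolver \
  --api-id <existing-api-id> \
  --type-name Query \
  --field-name getUser \
  --data-source-name UsersTable \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
  --code file://getUser.js
```

The trade-off is that resolvers managed this way drift out of the CloudFormation stack's view, so you would need your own reconciliation step in CI/CD.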
Since mid-2023, you can also alleviate CloudFormation resource pressure by using AppSync's "Merged API" feature. I talk about how to do so here, if you're interested. This works orthogonally to split-stacks, so you have another dimension to play with.
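For anyone unfamiliar with Merged APIs: each source API lives in its own stack (with its own resource budget), and AppSync merges their schemas behind one endpoint. A rough CLI sketch, with names, the role ARN, and API identifiers as placeholders:

```shell
# Create a Merged API, then associate a source API with it.
# Names, ARNs, and identifiers are placeholders.
aws appsync create-graphql-api \
  --name products-merged \
  --api-type MERGED \
  --authentication-type API_KEY \
  --merged-api-execution-role-arn arn:aws:iam::123456789012:role/AppSyncMergeRole

aws appsync associate-source-graphql-api \
  --merged-api-identifier <merged-api-id> \
  --source-api-identifier <source-api-id>
```

Each source API can then be deployed independently, so no single stack operation has to carry all 800 resolvers.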
Hey, how's it going? We upgraded to the new major version some time ago, and it had been working great until now.
In a recent move, we ported 400 Lambda resolvers to APPSYNC_JS resolvers to avoid cold starts. We subsequently hit a CloudFormation limit of 2,500 resource updates per stack operation, because pipeline resolvers mean roughly 3x the resource count of Direct Lambda resolvers.
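For context, an APPSYNC_JS resolver is just a pair of `request`/`response` handlers. The sketch below builds a DynamoDB `GetItem` request by hand so it stays plain, testable JavaScript; real resolvers would typically use the helpers from `@aws-appsync/utils`, and the field and key names here are hypothetical:

```javascript
// Simplified APPSYNC_JS-style unit resolver for a hypothetical Query.getUser
// field backed by a DynamoDB data source. The DynamoDB request is built by
// hand (attribute-value shape) instead of via util.dynamodb.toMapValues.
export function request(ctx) {
  return {
    operation: 'GetItem',
    key: { id: { S: ctx.args.id } }, // DynamoDB attribute-value shape
  };
}

export function response(ctx) {
  if (ctx.error) {
    // AppSync surfaces ctx.error to the client; return null for the field.
    return null;
  }
  return ctx.result;
}
```

Each such resolver (plus its pipeline functions, when used) is a separate CloudFormation resource, which is where the 3x multiplier comes from.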
An excerpt of the relevant error message is below:
Our AWS support ticket and a related GitHub issue suggest splitting the update into multiple batches of UpdateStack actions. Without a way to specify an existing AppSync API in v2, what do you think is the best way to resolve our issue?