fix(type-safe-api): create a new asset with PrepareApiSpecOptions props and use it in PrepareApiSpecCustomResourceProperties instead of raw properties in order to avoid large payloads that can cause the PrepareSpecCustomResource to fail #773
Conversation
…ps and used it in PrepareApiSpecCustomResourceProperties instead of raw properties in order to avoid large payloads that can cause the PrepareSpecCustomResource to fail
Hi @agdimech, here is the pull request. I don't have any more time to work on it at the moment; I will test it tonight (in ~12h in my time zone). I just pushed it if you want to take a look.
Sorry for the delay. Just wanted to check in to see whether this is ready for review, and has this been tested?
I just tried, but I now get an error when I run
Can you try downgrading your Node version to 18?
Thanks, I downgraded to node@20 and will stay on that version from now on. It now builds as expected. I followed the official documentation to use my local package and tried it with a production use case in a sandbox environment. Looking at the assets in S3, we now have a new input-config.json asset that corresponds to all the prepareSpecOptions. The previous error related to the prepare spec resource is now resolved. Previously, there was an error with the ApiPrepareSpecCustomResourceAF67685F resource:
However, I have now encountered a new error in the API. The new error on ApiCD79AAA0 is:
So, I tried to find differences in the output prepared spec and discovered that tokens weren't resolved. Indeed, looking at the input-config.json asset, I found occurrences like the following throughout the file:

```json
"getCompanies": {
  "integration": {
    "type": "AWS_PROXY",
    "httpMethod": "POST",
    "uri": "arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/${Token[TOKEN.2257]}/invocations",
    "passthroughBehavior": "WHEN_NO_MATCH"
  },
  "methodAuthorizer": {
    "authorizerId": "cognito-authorizer",
    "authorizationScopes": [
      "${Token[TOKEN.2208]}/companies.read"
    ]
  }
},
```

I think that writing the prepareSpecOptions synchronously is not the right way to achieve this.
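For context on what "unresolved" means here: inside CDK, `Token.isUnresolved` detects token placeholders, but the symptom above can also be checked directly by scanning for the `${Token[TOKEN.nnn]}` string-marker format. A minimal stand-alone sketch (the regex mimics the marker format shown above; it is not CDK's own implementation):

```typescript
// Matches CDK's stringified token markers, e.g. "${Token[TOKEN.2257]}".
const TOKEN_MARKER = /\$\{Token\[TOKEN\.\d+\]\}/;

// Returns true if a value written out at synth time still contains an
// unresolved token, i.e. it was serialized before CloudFormation could
// substitute the real value.
function containsUnresolvedToken(value: string): boolean {
  return TOKEN_MARKER.test(value);
}

const uri =
  "arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/${Token[TOKEN.2257]}/invocations";
console.log(containsUnresolvedToken(uri)); // true: token never resolved
console.log(containsUnresolvedToken("arn:aws:lambda:eu-west-1:123456789012:function:f")); // false
```

Any asset file for which this check fires was frozen before deployment, which is exactly why a synth-time write of the options cannot work.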
Hi @agdimech, I hope you're doing well. I wanted to follow up on my previous comment. I've been digging into the issue and think I may have found a workaround for token resolution at deploy time. I've looked at the official AWS CDK documentation on S3 deployment, and it appears that S3 deployment supports deploy-time values, which could be a good solution for our use case. I'm thinking we could tweak our approach a bit and play around with the deployment structure. Something like this perhaps:
and create a new … This could help keep things neat and tidy. Before I dive deeper into this, I'd love to hear your take on it. Let me know if you think it aligns with what we're aiming for, or if you have any other solution that keeps the current assets in the default CDK asset bucket.
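A minimal sketch of that direction, assuming `aws-cdk-lib` v2. The construct ids and bucket are hypothetical, and `stack` / `prepareSpecOptions` are assumed to already be in scope; `Source.jsonData` is the s3-deployment source that is documented to support deploy-time values:

```typescript
import { RemovalPolicy } from "aws-cdk-lib";
import { Bucket } from "aws-cdk-lib/aws-s3";
import { BucketDeployment, Source } from "aws-cdk-lib/aws-s3-deployment";

// Hypothetical: a dedicated bucket for the prepare-spec inputs.
const optionsBucket = new Bucket(stack, "PrepareSpecOptionsBucket", {
  removalPolicy: RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

// Source.jsonData supports deploy-time values, so CDK tokens inside
// prepareSpecOptions (Lambda ARNs, user pool ids, ...) are resolved by
// the deployment handler at deploy time instead of being written out
// as unresolved "${Token[TOKEN.nnn]}" markers at synth time.
new BucketDeployment(stack, "PrepareSpecOptionsDeployment", {
  destinationBucket: optionsBucket,
  sources: [Source.jsonData("input-config.json", prepareSpecOptions)],
});
```

The prepare-spec handler would then read `input-config.json` from the bucket rather than receiving the options in its event payload.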
Hi @valebedu, The BucketDeployment definitely looks like the way to go! I wasn't entirely sure what the timestamp above represented (perhaps the asset deployment instance?), although I am wondering: could we re-use the asset bucket and do something like this instead?
…ead of asset. NOTE: S3 deployment supports deploy-time values
Yes, it's simpler to do that; it was just a proposal to move assets to separate buckets and to support history with a timestamp or datetime as a root folder. I've pushed the changes and tried to build, but I get an error every time. I'm using node@20 and pnpm@8; I cleaned the deps and really don't understand why I get an issue about deployment, and I didn't find any related issue on GitHub. Any idea?
Hmm, I'm not too sure. I triggered a build via CI and it all built successfully. Try deleting the
In order to instantiate the bucket deployment before the prepare spec custom resource
I attempted some changes, but now I encounter the same error with ApiOptionsDeploymentCustomResource859991B0:
Adding a bucket deployment with deploy-time values merely shifts the problem to the bucket deployment. It appears there is no effective solution for this issue at the moment. 😞 This suggests a virtual limit on API Gateway size due to the Lambda payload limit, which constrains how large the API can be.
Hi @valebedu, I had a chat with @cogwirrel on Friday and we may have found a solution (potentially): we could use a Trigger, instead of a custom resource, to write the options into an S3 bucket. The Trigger is able to call a Lambda at deployment time using a standard request/response interaction, which allows for a 6MB payload (128KB is the max using a custom resource). We could then pass the reference to the S3 bucket into the PrepareSpecHandler like we do in the other places.
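A rough sketch of that idea, assuming `aws-cdk-lib` v2. The names `prepareSpecLambda` (the existing prepare-spec handler, as a `lambda.Function`) and `optionsDeployment` (the bucket deployment holding the inputs) are hypothetical stand-ins for things assumed to be in scope:

```typescript
import { Trigger } from "aws-cdk-lib/triggers";

// Invoke the existing prepare-spec Lambda once per deployment using a
// standard RequestResponse invoke (6MB request limit) instead of the
// custom resource protocol (~128KB request limit).
new Trigger(stack, "PrepareSpecTrigger", {
  handler: prepareSpecLambda,        // assumed: existing lambda.Function
  executeAfter: [optionsDeployment], // run only after the inputs are in S3
});
```

One caveat with this shape: `Trigger` does not accept a custom invocation payload, so the handler would have to find its inputs via environment variables or a well-known S3 location rather than the event.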
Hi @agdimech, awesome! Because it's a synchronous Lambda invocation, it has a payload limit of 6MB, right? This is really good; I didn't understand why custom resources are made like this, because the payload limit on async invocations can be a problem in some scenarios. So we can add a trigger to execute a Lambda function synchronously, perfect. But which function do we call? We need a function similar to the one used in the custom resource that resolves deploy-time values. Any idea on this one?
Hi @valebedu, Yes, 6MB will be the payload size, which means roughly 50x larger payloads :) I'm pretty sure that if you pass the values in to the Lambda, CloudFormation should handle the substitution for you?
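The "~50x" figure checks out; a trivial arithmetic sketch comparing the two limits mentioned in this thread:

```typescript
const customResourceLimitKb = 128;  // custom resource request body limit (~128KB)
const syncInvokeLimitKb = 6 * 1024; // Lambda synchronous invoke payload limit (6MB)

const factor = syncInvokeLimitKb / customResourceLimitKb;
console.log(factor); // 48, i.e. roughly the "~50x" headroom mentioned above
```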
CloudFormation can indeed resolve !Ref, !GetAtt, etc., but it can't resolve:

```json
"getCompanies": {
  "integration": {
    "type": "AWS_PROXY",
    "httpMethod": "POST",
    "uri": "arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/${Token[TOKEN.2257]}/invocations",
    "passthroughBehavior": "WHEN_NO_MATCH"
  },
  "methodAuthorizer": {
    "authorizerId": "cognito-authorizer",
    "authorizationScopes": [
      "${Token[TOKEN.2208]}/companies.read"
    ]
  }
},
```

That's why S3 deployment seems the right solution: the documentation specifies that deploy-time values are supported. I think that to improve the process we should use a Lambda similar to the one used in S3 deployment, but invoked synchronously in order to raise the payload limit to 6MB, though I don't know how. Maybe I should open a feature request on aws-cdk?
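For intuition on why s3-deployment can do this: at synth time CDK replaces any deploy-time tokens in the source content with string markers, and the resolved values are passed to the deployment handler, which substitutes them when it writes the object. A simplified, hypothetical sketch of that substitution step (the real handler and its marker format differ):

```typescript
// Hypothetical marker format; the real s3-deployment handler uses its own.
type Markers = Record<string, string>;

// Replace every marker occurrence in the asset text with the value that
// CloudFormation resolved and passed to the handler at deploy time.
function substituteMarkers(assetText: string, markers: Markers): string {
  return Object.entries(markers).reduce(
    (text, [marker, value]) => text.split(marker).join(value),
    assetText
  );
}

const spec = '{"uri": "arn:aws:apigateway:...:functions/<<marker:0>>/invocations"}';
console.log(substituteMarkers(spec, { "<<marker:0>>": "my-function-arn" }));
// {"uri": "arn:aws:apigateway:...:functions/my-function-arn/invocations"}
```

Because the substitution happens inside the deployment Lambda, the asset in S3 ends up token-free, which is exactly what the prepared spec needs.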
How are you setting up the Trigger? I haven't used it before, but based on the docs you should be able to do something like: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.triggers.Trigger.html#executeafter Then just set up all the deps there, and hopefully the Tokens will get resolved.
I'm sorry, but I really have no idea how to replace … I've never used it; there is no way to define the input (spec location and prepare spec options) and no way to retrieve the output key.
So I did some more digging, and you can actually configure a CustomResource to invoke the underlying Lambda via an

```typescript
const prepareSpecCustomResource = new CustomResource(
  this,
  "PrepareSpecCustomResource",
  {
    serviceToken: provider.serviceToken,
    properties: {
      ...prepareApiSpecCustomResourceProperties,
      InvocationType: "RequestResponse",
    },
  }
);
```
☁️ Nx Cloud Report: CI is running/has finished running commands for commit 3c20cf4. ✅ Successfully ran 1 target. Sent with 💌 from NxCloud.
This pull request is now marked as stale because it hasn't seen activity for a while. Add a comment or it will be closed soon. If you wish to exclude this issue from being marked as stale, add the "backlog" label.
Closing this pull request as it hasn't seen activity for a while. Please add a comment @mentioning a maintainer to reopen. If you wish to exclude this issue from being marked as stale, add the "backlog" label. |
Fixes #771