Rate exceeded
See original GitHub issue

Description
I have 8 `${cf:…}` references in my serverless.yml, and 3 out of 5 runs of serverless fail with "Rate exceeded". I suppose I've hit a limit on CloudFormation API calls (DescribeStacks, for instance). Is there any way to avoid this error other than raising my limits? Why doesn't Serverless call the API only once for all stacks, or at least only once per stack?
Last but not least: which limit am I actually hitting? I can't tell which of the limits mentioned in the AWS limits documentation applies.
For bug reports:
- What went wrong? I ran `serverless deploy -v` and got the error "Rate exceeded"
- What did you expect to happen? Deployment without that error
- What was the config you used?
```yaml
custom:
  stage: ${cf:StackA.StagePrefix}
  vpcStackName: ${cf:StackA.VpcStackName}
  topicGeneral: ${cf:StackB.In}
  topicBs: ${cf:StackC.In}
  dnsName: ${cf:StackD.LoadBalancerDNSName}
  securityGroupIds: ${cf:StackD.AlbSecurityGroup}
  privateSubnet1: ${cf:StackE.PrivateSubnet1}
  privateSubnet2: ${cf:StackE.PrivateSubnet2}
```
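The eight `${cf:…}` lookups above only touch five distinct stacks, so deduplicating the underlying DescribeStacks calls per stack would already cut the request count. A minimal sketch of that idea (the `describeStacks` parameter is a hypothetical stand-in for the AWS SDK call, not Serverless's actual resolver):

```javascript
// Cache one in-flight DescribeStacks promise per stack name, so that
// multiple ${cf:Stack.Output} variables referencing the same stack
// share a single API call instead of each firing their own request.
const stackCache = new Map();

function getStackOutputs(stackName, describeStacks) {
  if (!stackCache.has(stackName)) {
    stackCache.set(stackName, describeStacks(stackName));
  }
  return stackCache.get(stackName);
}

function resolveCfVariable(ref, describeStacks) {
  // ref looks like "StackA.StagePrefix"
  const [stackName, outputKey] = ref.split('.');
  return getStackOutputs(stackName, describeStacks)
    .then((outputs) => outputs[outputKey]);
}
```

With this, the config above would trigger five DescribeStacks calls instead of eight, regardless of how many outputs each stack contributes.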
- What stacktrace or error message from your provider did you see?
```
> sls package

Serverless Error ---------------------------------------

Rate exceeded

Get Support --------------------------------------------
Docs:   docs.serverless.com
Bugs:   github.com/serverless/serverless/issues
Forums: forum.serverless.com
Chat:   gitter.im/serverless/serverless

Your Environment Information -----------------------------
OS:                 linux
Node Version:       6.9.1
Serverless Version: 1.14.0
```
Additional Data
- Serverless Framework Version you're using: 1.14.0
- Operating System: Fedora / Linux
- Stack Trace: Rate exceeded
Issue Analytics
- Created: 6 years ago
- Reactions: 8
- Comments: 60 (32 by maintainers)
Ok. Then this is a severe bug in the variable resolution part of Serverless. Before the lifecycles are invoked, it MUST be guaranteed that the whole serverless.yml file is resolved, including every variable it contains and any references there may be.
@horike37 @pmuens @RafalWilinski We should open a separate issue for that with high priority. https://github.com/serverless/serverless/issues/3821#issuecomment-336169219 shows clearly that there is a bug. We have to make sure that the variable resolution finishes (and Serverless waits for it) and only then starts the command lifecycles.
@pmuens I think it is time to do the serialized requests now. @gozup's issue clearly shows that this is the only way to eliminate the problem for good. Every other approach that still allows the limit on parallel AWS REST API calls to be exceeded is not really a solution 😃 but merely a workaround that will make the problem more obscure with each PR.
I think the right place would be the API request method itself, as it is used centrally from every location in the framework. It could be handled with a promise queue that simply queues the submitted method-call promises (I think a suitable module was BbQueue).

UPDATE: It is bluebird-queue (https://www.npmjs.com/package/bluebird-queue). It even allows setting a concurrency limit, so we can go straight up to the limit without exceeding it.
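The idea can be sketched without the library as a small hand-rolled promise queue with a concurrency cap; this is an illustration of the mechanism, not the framework's actual request layer:

```javascript
// Minimal promise queue: at most `concurrency` tasks run at once;
// the rest wait until a running task settles, which keeps the number
// of parallel AWS API requests at or below the chosen cap.
class RequestQueue {
  constructor(concurrency) {
    this.concurrency = concurrency;
    this.running = 0;
    this.pending = [];
  }

  // Enqueue a function returning a promise; resolves/rejects with its result.
  add(task) {
    return new Promise((resolve, reject) => {
      this.pending.push({ task, resolve, reject });
      this.next();
    });
  }

  next() {
    if (this.running >= this.concurrency || this.pending.length === 0) return;
    this.running += 1;
    const { task, resolve, reject } = this.pending.shift();
    Promise.resolve()
      .then(task)                 // also catches synchronous throws
      .then(resolve, reject)
      .finally(() => {
        this.running -= 1;
        this.next();              // pull the next waiting task, if any
      });
  }
}
```

Routing every AWS SDK call through such a queue (or bluebird-queue with its concurrency option) bounds the parallelism at the single central choke point the comment above describes.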