Adding jitter to the backoff when an AWS request gets throttled.
Feature Proposal
Add exponential backoff instead of always 5 seconds.
Description
We deploy quite often to our test AWS account, and I noticed that we are hitting CloudFormation API throttling:
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
...Serverless: Recoverable error occurred (Rate exceeded), sleeping for 5 seconds. Try 1 of 4
Serverless: Recoverable error occurred (Rate exceeded), sleeping for 5 seconds. Try 1 of 4
Serverless: Recoverable error occurred (Rate exceeded), sleeping for 5 seconds. Try 2 of 4
Serverless: Recoverable error occurred (Rate exceeded), sleeping for 5 seconds. Try 3 of 4
Serverless: Recoverable error occurred (Rate exceeded), sleeping for 5 seconds. Try 4 of 4
At least that is what I expect is happening. The issue is that all my stacks will now retry in exactly 5 seconds, which eventually causes them to fail.
I think there should be some kind of jitter added in order to have a higher chance of success.
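To make the proposal concrete, here is a minimal sketch (not the actual Serverless Framework code; the constants, helper names and the throttling check are assumptions) of what capped exponential backoff with full jitter could look like in place of the fixed 5-second sleep:

'use strict';

// Sketch only: capped exponential backoff with full jitter, replacing the
// fixed 5-second sleep. Base delay, cap and error-code check are assumptions.
const BASE_DELAY_MS = 5000;
const MAX_DELAY_MS = 60000;

function retryDelayMs(attempt) {
  // Exponential growth: 5s, 10s, 20s, ... capped at MAX_DELAY_MS.
  const expDelay = Math.min(MAX_DELAY_MS, BASE_DELAY_MS * 2 ** (attempt - 1));
  // Full jitter: a random delay in [0, expDelay), so concurrent deploys
  // that were throttled together do not all retry at the same moment.
  return Math.floor(Math.random() * expDelay);
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetries(fn, maxTries = 4) {
  for (let attempt = 1; attempt <= maxTries; attempt += 1) {
    try {
      return await fn();
    } catch (err) {
      // Simplified throttling check; real AWS SDK error codes vary.
      if (err.code !== 'Throttling' || attempt === maxTries) throw err;
      const delay = retryDelayMs(attempt);
      console.log(`Recoverable error occurred, sleeping for ${Math.round(delay / 1000)} seconds. Try ${attempt} of ${maxTries}`);
      await sleep(delay);
    }
  }
}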
Issue Analytics
- State:
- Created 4 years ago
- Comments: 18 (18 by maintainers)
No, I want to be sure we come up with the right solution, and not just take in code that we merely assume may improve the situation.
Adding jitter, in my understanding, makes sense when we deal with a situation where AWS requests are issued at the very same time (to within roughly 0.1s), and in my understanding that is hard to achieve even by issuing SLS deploys automatically at the very same time (there’s a lot going on between the AWS requests in a process, so the requests should naturally drift out of sync across deploys).
Additionally, above you’re stating that deploys potentially happen at the same time. Can you elaborate on “potentially”? Are they released manually? If that’s the case, you already run AWS requests with a naturally unpredictable pattern, and that should keep them out of sync.
Is this, then, to address accidental, random cases (which may occur occasionally) where two concurrent deploys start hitting the AWS SDK at about the same time? In such scenarios, yes, jitter can help to de-sync them.
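To illustrate that last point with a toy example (purely illustrative numbers): with a fixed 5-second delay, two deploys throttled at the same moment retry at the same moment again, while full jitter spreads their retries apart.

// Toy simulation: two clients throttled at t=0 each pick a retry delay.
// A "collision" here means retrying within the same 100 ms window.
function collisionRate(delayFn, trials = 100000, windowMs = 100) {
  let collisions = 0;
  for (let i = 0; i < trials; i += 1) {
    const a = delayFn();
    const b = delayFn();
    if (Math.abs(a - b) < windowMs) collisions += 1;
  }
  return collisions / trials;
}

console.log('fixed 5s delay:', collisionRate(() => 5000));                 // always 1.0
console.log('full jitter   :', collisionRate(() => Math.random() * 5000)); // roughly 0.04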
I added support for the SLS_AWS_MONITORING_FREQUENCY environment variable to change the frequency. I was debating whether to call it Interval or Frequency, but since there is already a frequency options argument I went with the latter. I also made the calculation for the backoff simpler and provided a rounded number in the log.
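A rough sketch of how that could look; the variable name SLS_AWS_MONITORING_FREQUENCY comes from the comment above, but the default value, the backoff formula and the log wording here are assumptions, not necessarily what was actually implemented:

// Assumed default of 5000 ms, overridable through SLS_AWS_MONITORING_FREQUENCY.
const MONITORING_FREQUENCY_MS =
  Number(process.env.SLS_AWS_MONITORING_FREQUENCY) || 5000;

// Hypothetical simplified backoff: frequency times the attempt number plus
// random jitter, with the sleep time rounded to whole seconds for the log.
function backoffMs(attempt) {
  const jitter = Math.random() * MONITORING_FREQUENCY_MS;
  return MONITORING_FREQUENCY_MS * attempt + jitter;
}

function logAndGetBackoff(attempt, maxTries) {
  const ms = backoffMs(attempt);
  const seconds = Math.round(ms / 1000); // rounded number in the log
  console.log(`Recoverable error occurred, sleeping for ~${seconds} seconds. Try ${attempt} of ${maxTries}`);
  return ms;
}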