
Potential errors when computing exponential backoff and linear backoff

See original GitHub issue

The code that computes the linear and exponential backoff has some issues: it does not wait before the first retry attempt, because currentRetryAttempt is initially set to 0 instead of 1. See https://github.com/JustinBeckwith/retry-axios/blob/c2843108e7d9758097cd1af89e09cd78c99bcebf/src/index.ts#L183-L188.
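For reference, the linked lines boil down to roughly the following (a paraphrased sketch, not verbatim source):

    // Paraphrased: with currentRetryAttempt starting at 0, the first
    // computed delay is ((2^0 - 1) / 2) * 1000 = 0 ms, so the first
    // retry fires immediately instead of waiting.
    const delay = ((Math.pow(2, config.currentRetryAttempt) - 1) / 2) * 1000;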

It also does not take retryDelay into account, which according to the README is:

Milliseconds to delay at first. Defaults to 100.
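One hypothetical way the option could be honored (purely illustrative; this is neither the library's current behavior nor something the issue prescribes):

    // Hypothetical: scale the exponential curve by retryDelay so the
    // documented "milliseconds to delay at first" actually holds.
    const base = config.retryDelay ?? 100; // README default: 100 ms
    const delay = base * Math.pow(2, config.currentRetryAttempt);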

Additionally, the formula for computing the exponential backoff should be something like:

Math.min((Math.pow(2, config.currentRetryAttempt) + Math.random()), MAX_DELAY) * 1000

See https://cloud.google.com/iot/docs/how-tos/exponential-backoff#example_algorithm
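A minimal TypeScript sketch of that suggestion (MAX_DELAY is a hypothetical cap in seconds, not an option retry-axios exposes):

    // Hypothetical cap on the backoff, in seconds.
    const MAX_DELAY = 32;

    // Suggested formula: exponential growth plus sub-second jitter,
    // capped, then converted to milliseconds. Because 2^0 = 1, even
    // the first retry waits at least one second.
    function suggestedBackoffMs(currentRetryAttempt: number): number {
      return Math.min(Math.pow(2, currentRetryAttempt) + Math.random(), MAX_DELAY) * 1000;
    }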

That would give retry delays (in seconds) like:

1.234
2.314
4.012

Compare that to the delays (in seconds) produced by the current algorithm:

0
0.5
1.5
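Both sequences are easy to reproduce (attempt index starting at 0, times in seconds; the jitter makes the suggested values vary from run to run):

    for (let n = 0; n < 3; n++) {
      const current = (Math.pow(2, n) - 1) / 2;                        // 0, 0.5, 1.5
      const suggested = Math.min(Math.pow(2, n) + Math.random(), 32);  // ~1.x, ~2.x, ~4.x
      console.log(n, current, suggested.toFixed(3));
    }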

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 8 (7 by maintainers)

Top GitHub Comments

1 reaction
JustinBeckwith commented, Dec 16, 2020

Loud and clear. Sorry if I was crass - I catch a lot of flack in issue trackers, and appreciate the clarification! I completely understand what you mean now, and apologize for being short.

On the issue itself - totally understand what folks are saying. What I was trying to get across is that I don’t believe there are tests which dig into the specific timing of the retries. I’d like to avoid having a patch floated that “fixes” the issue without having a fairly in-depth suite of tests that specifically cover backoff expectations. If someone submitted a fix today for this, it’s likely all the tests we have in place would just pass with no changes. After that, it’s very likely that the next patch breaks it (or you know, I accidentally break it).

Thanks for bearing with my being grumpy.
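As a hedged illustration of the kind of timing-focused tests described above (assuming a mocha-style suite; backoffDelayMs is a hypothetical extracted helper, not current retry-axios API):

    import * as assert from 'assert';

    // Hypothetical pure helper: extracting the delay math would let
    // backoff expectations be tested without real network calls.
    function backoffDelayMs(attempt: number, maxDelaySec = 32): number {
      return Math.min(Math.pow(2, attempt) + Math.random(), maxDelaySec) * 1000;
    }

    describe('backoff timing', () => {
      it('waits at least one second before the first retry', () => {
        assert.ok(backoffDelayMs(0) >= 1000);
      });
      it('never exceeds the cap', () => {
        assert.ok(backoffDelayMs(10) <= 32 * 1000);
      });
    });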

1 reaction
jgehrcke commented, Dec 16, 2020

Saying “have you read this” is kind of being a turd

Thanks for the feedback. I am sorry man, really didn’t want to come across like that.

My addition to this ticket was meant in a neutral, friendly way, hence also the “😃”. It was certainly meant to be a productive contribution: it really appeared to me (based on the communication in here so far) as if you might have missed a specific problem description – the one I quoted.

Missing something happens to all of us. All I wanted was to ask and make sure that it’s not just a simple misunderstanding. I got the impression that you had missed it because you didn’t comment on it and also suggested that you may not see/understand the specific problem(s) reported, based on your “so I can really understand”. Again: no blame, no stress – all of this easily happens in a ticket like this (also because this bug report mixes two issues and does not have a precise title). I wanted to help us align on a problem and/or acknowledge a problem description, which in my opinion is one of the most important parts of inviting contributors: defining the problem to be solved well – together.
