
Expose current retry count in the context of the function

See original GitHub issue

With the new retry feature rolling out, one desired scenario is the ability to dead-letter / capture a message that is on its final retry. For example, if I have an Event Hub triggered function and define a retry policy of 5, and the 5th retry fails, I want to catch that failure and store it in a dead-letter queue or similar so I can inspect it later. However, today the current retry count isn’t surfaced in the context, and using something like a local variable as a counter may be tricky when multiple executions could be running / retrying on the same host.

The proposal is to include new retry information in the ExecutionContext passed to the function, so the catch block can tell whether the current attempt is the last one.

try
{
    // process the event
}
catch (Exception exception)
{
    if (context.retries.count >= context.retries.max)
    {
        // deadletter
    }
    throw; // rethrow, preserving the stack trace
}

// @pragnagopa @fabiocav @mathewc
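The proposed pattern above can be sketched in Java as well (the linked follow-up issue is on azure-functions-java-library). Note that RetryInfo below is a hypothetical stand-in for the proposed retry context, not a real SDK type:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the proposed retry context; the real SDK
// type (if/when this is exposed) may look different.
final class RetryInfo {
    final int count;
    final int max;
    RetryInfo(int count, int max) { this.count = count; this.max = max; }
}

public class DeadletterOnFinalRetry {
    // True only when the failing attempt is the last one the retry
    // policy will make, i.e. when the message should be captured.
    static boolean isFinalAttempt(RetryInfo retry) {
        return retry.count >= retry.max;
    }

    static void handle(Runnable body, RetryInfo retry,
                       List<String> deadletter, String message) {
        try {
            body.run();
        } catch (RuntimeException e) {
            if (isFinalAttempt(retry)) {
                deadletter.add(message); // capture for later inspection
            }
            throw e; // always rethrow so the host records the failure
        }
    }

    public static void main(String[] args) {
        List<String> dlq = new ArrayList<>();
        // Attempt 3 of 5: the failure is rethrown but not dead-lettered.
        try { handle(() -> { throw new RuntimeException("boom"); },
                     new RetryInfo(3, 5), dlq, "msg-1"); }
        catch (RuntimeException ignored) {}
        // Attempt 5 of 5: final retry, so the message is captured.
        try { handle(() -> { throw new RuntimeException("boom"); },
                     new RetryInfo(5, 5), dlq, "msg-1"); }
        catch (RuntimeException ignored) {}
        System.out.println(dlq); // prints [msg-1]
    }
}
```

The key design point is that the exception is rethrown in both branches: dead-lettering is a side effect of the final failure, not a replacement for it, so the host still sees the execution as failed.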

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 7
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction
jeffhollan commented, Nov 4, 2020

Moving comment from @casper-79 here: https://github.com/Azure/azure-functions-java-library/issues/132#issuecomment-721812596

I have tested the retry functionality today and seen the exponential retry strategy in action. However, I am also seeing some very strange behaviour. As I understand the documentation, the retry strategy is implemented on the function instance itself rather than by storing the delivery state on the queue, and I believe I am seeing side effects of this approach. My experiments centre around submitting poison messages (messages that will always fail) onto a queue consumed by a Java Azure Function. The function uses a retry strategy defined in host.json as seen below:

"retry": {
    "strategy": "exponentialBackoff",
    "maxRetryCount": 6,
    "minimumInterval": "00:00:10",
    "maximumInterval": "00:05:00"
}

(1) Processing of poison messages does not always show up in Application Insights or the “Monitor” section of Azure Functions. When I use the Azure portal to peek at test messages I can tell DeliveryCount has gone up by 1, but more often than not there is no trace of the failed execution that increased the counter.

(2) Azure Function instances are short-lived, which limits the useful range of the retry configuration parameters. Can you provide guidance on what will work in practice? I am guessing you will run into problems if you set maximumInterval to 24 hours and maxRetryCount to 30 in host.json?

(3) What is the recommended approach for dead-lettering? The only solution I can think of is to set maxDeliveryCount=1 on the queue, but this will only work if all retry attempts of the strategy can be performed within the typical lifetime of an instance. Otherwise, I guess the message will be retried forever.
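To put rough numbers on question (2), the delays implied by the host.json above can be computed. This is a sketch under an assumption: a plain doubling schedule (delay ≈ minimumInterval · 2^(attempt−1), capped at maximumInterval), ignoring any randomized jitter the Functions runtime may add, so the real delays will differ somewhat:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class BackoffWindow {
    // Approximate delays for an exponentialBackoff policy: the delay
    // doubles on each attempt, starting at min and capped at max.
    static List<Duration> delays(int maxRetryCount, Duration min, Duration max) {
        List<Duration> out = new ArrayList<>();
        Duration d = min;
        for (int attempt = 1; attempt <= maxRetryCount; attempt++) {
            out.add(d.compareTo(max) > 0 ? max : d);
            d = d.multipliedBy(2);
        }
        return out;
    }

    public static void main(String[] args) {
        // The host.json above: maxRetryCount=6, min=10s, max=5min.
        List<Duration> ds = delays(6, Duration.ofSeconds(10), Duration.ofMinutes(5));
        Duration total = ds.stream().reduce(Duration.ZERO, Duration::plus);
        System.out.println(ds);    // [PT10S, PT20S, PT40S, PT1M20S, PT2M40S, PT5M]
        System.out.println(total); // PT10M10S
    }
}
```

So the posted policy spends roughly ten minutes retrying in-process, which has to fit inside one instance's lifetime. By the same arithmetic, a maximumInterval of 24 hours with 30 retries would stretch the tail delays across days, which supports the concern that such settings cannot work with instance-local retry state.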

0 reactions
LockpickingDev commented, Apr 29, 2021

All we’re doing is taking in a JSON request from an external source and storing it. There’s really not much to it. After reading, it sounds like that’s not in-proc, though.


Top Results From Across the Web

  • Azure spring boot function - get current retry count
    question is how do we get current retry count? I need to copy message to failed queue after 3 unsuccessful retry, could not...
  • Working with Polly – Using the Context to Obtain the Retry ...
    In this post, we'll explore a use for the Polly Context object to share data between our code and the execution of a...
  • Retry guidance for Azure services
    If the specified retry count is exceeded, the results are wrapped in a new exception. It doesn't bubble up the current exception. Policy ...
  • Handling errors in Durable Functions (Azure Functions)
    In this article. Errors in activity functions; Automatic retry on failure; Custom retry handlers; Function timeouts; Unhandled exceptions; Next ...
  • Test Retries | Cypress Documentation
    With test retries, Cypress is able to retry failed tests to help reduce test flakiness and continuous ... currentRetry returns the current test...
