Sharing Bulkhead policy capacity across multiple HttpClients from HttpClientFactory
Summary: What are you wanting to achieve? This might be a simple question, but I am trying to limit the number of concurrent outbound requests made by multiple typed HttpClients in .NET Core 2.2. It is not the number of threads or CPU usage I am trying to limit; the goal is to avoid reaching the outgoing-connections limit in Azure Web Apps. Under heavy load the API can make a very high number of outgoing requests at the same time, which limits connection reuse.
What code or approach do you have so far?
I have looked at the Bulkhead policy and I want to check if that is the correct usage of that policy.
For example in Startup.cs:
var bulkheadPolicy = Policy.BulkheadAsync<HttpResponseMessage>(1000, int.MaxValue);

services.AddHttpClient<ITestService1, TestService1>(client =>
{
    client.BaseAddress = new Uri("example1");
})
.AddPolicyHandler(bulkheadPolicy)
.AddTransientHttpErrorPolicy(p => p.WaitAndRetryAsync(3, _ => TimeSpan.FromMilliseconds(300)));

services.AddHttpClient<ITestService2, TestService2>(client =>
{
    client.BaseAddress = new Uri("example2");
})
.AddPolicyHandler(bulkheadPolicy)
.AddTransientHttpErrorPolicy(p => p.WaitAndRetryAsync(3, _ => TimeSpan.FromMilliseconds(300)));
Will this configuration limit the concurrent outgoing calls to 1000 for both HttpClients combined? And is this the correct order (bulkhead first)?
Issue Analytics
- Created 5 years ago
- Comments: 5 (1 by maintainers)
Top GitHub Comments
Hi all. Yes, Polly has encompassed resilience strategies beyond pure fault-handling for a while now. Polly's wiki page on fault-handling vs proactive resilience engineering discusses this broader context and slots each policy into it.
@emilssonn Yes, a bulkhead policy is an extremely simple within-process parallelism throttle. Note: the bulkhead policy, being based on SemaphoreSlim, operates as a per-VM-instance throttle only; there is no distributed/shared state across VM instances. If your App Service plan is such that your outgoing-connections limit will ever be shared across VMs, the bulkhead policy as it stands will not be sufficient to govern this across VMs.

Notwithstanding the above caveat, I'll answer the questions about bulkhead policy scoping and sequencing with HttpClientFactory, for completeness:
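As an aside, to make the per-VM-instance point concrete: conceptually (this is a sketch, not Polly's actual implementation), a bulkhead is a SemaphoreSlim-gated execution slot scoped to the current process. The helper name below is hypothetical.

```csharp
// Conceptual sketch only, not Polly source: a per-process bulkhead.
// Each VM instance runs its own semaphore, so two instances would
// together admit up to 2000 concurrent calls, not a shared 1000.
var slots = new SemaphoreSlim(1000);

async Task<HttpResponseMessage> ExecuteThroughBulkheadAsync(
    Func<Task<HttpResponseMessage>> call)
{
    await slots.WaitAsync();   // acquire an execution slot (queue if none free)
    try
    {
        return await call();   // place the outbound call
    }
    finally
    {
        slots.Release();       // free the slot for the next caller
    }
}
```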
@emilssonn Your scoping of the bulkheadPolicy instance in the original post is correct for the goal stated about typed clients: to govern the separate call-streams via TestService1 and TestService2 within the same overall parallelism limit, share the same bulkhead policy instance across both, as you have done, and as we also discuss briefly in the HttpClientFactory doco here and here.

However, you want to sequence the bulkhead and the retry in the other order: the retry should be outermost and the bulkhead innermost.
See the policy sequencing recommendations within PolicyWrap for an explanation. You don't (I would assume) want any of the bulkhead's capacity occupied by waiting for the next try; you want all of the bulkhead's capacity dedicated to placing outbound calls. So the bulkhead should be 'inside' the wait-and-retry, governing only the downstream call, not the waits. For more on how policy configuration order on HttpClientFactory translates into execution order/wrapping, see our diagrams in the doco.
@reisenberger and @RichardHowells thank you for the help!
It looks like the bulkhead policy will work for me.