Distributed cache concurrency issue
See original GitHub issue

Hi @stefanprodan,

When deploying many API services behind a load balancer (using a distributed cache to store the counters), every instance fetches the counter and uses a per-process lock while incrementing the count, as in the code below (the earlier declaration of counter is omitted from the snippet):
using (await AsyncLock.WriterLockAsync(counterId).ConfigureAwait(false))
{
    var entry = await _counterStore.GetAsync(counterId, cancellationToken);

    if (entry.HasValue)
    {
        // entry has not expired
        if (entry.Value.Timestamp + rule.PeriodTimespan.Value >= DateTime.UtcNow)
        {
            // increment request count
            var totalCount = entry.Value.Count + _config.RateIncrementer?.Invoke() ?? 1;

            // deep copy
            counter = new RateLimitCounter
            {
                Timestamp = entry.Value.Timestamp,
                Count = totalCount
            };
        }
    }

    // stores: id (string) - timestamp (datetime) - total_requests (long)
    await _counterStore.SetAsync(counterId, counter, rule.PeriodTimespan.Value, cancellationToken);
}
I think this has a concurrency issue: the counter will not work correctly when many requests arrive at once. Each instance reads the counter, increments it locally, and then saves the result back to the cache store, so the last writer overwrites the updates of the others.
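To make the lost update concrete, here is a minimal sketch (illustrative only; the dictionary stands in for the distributed cache, and all names are hypothetical) in which two concurrent requests on different instances both read 0 and both write 1:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class LostUpdateDemo
{
    // Stands in for the shared distributed cache. The per-instance AsyncLock in
    // the snippet above serializes requests within one process only, so nothing
    // coordinates the two "instances" below.
    static readonly ConcurrentDictionary<string, long> Store = new();

    static async Task InstanceIncrementAsync(string counterId)
    {
        Store.TryGetValue(counterId, out var count);  // GET (both instances read 0)
        await Task.Delay(10);                         // the other instance runs here
        Store[counterId] = count + 1;                 // SET (last write wins)
    }

    static async Task Main()
    {
        await Task.WhenAll(
            InstanceIncrementAsync("client-1"),
            InstanceIncrementAsync("client-1"));

        Console.WriteLine(Store["client-1"]);  // prints 1, not the expected 2
    }
}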
Why not use the token bucket algorithm to implement this feature?
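For reference, a token bucket tracks a capacity and a refill rate per client and admits a request only if a token is available. A minimal single-process sketch (illustrative only, not part of the library; names and parameters are hypothetical):

using System;

class TokenBucket
{
    private readonly double _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _lastRefill = DateTime.UtcNow;
    private readonly object _gate = new();

    public TokenBucket(double capacity, double refillPerSecond)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity;
    }

    public bool TryTake()
    {
        lock (_gate)  // per-process lock; a distributed bucket needs an atomic store
        {
            // Refill continuously at the configured rate, capped at capacity.
            var now = DateTime.UtcNow;
            _tokens = Math.Min(_capacity, _tokens + (now - _lastRefill).TotalSeconds * _refillPerSecond);
            _lastRefill = now;

            if (_tokens < 1) return false;  // out of tokens: reject the request
            _tokens -= 1;
            return true;
        }
    }
}

Note that moving this state into a distributed cache reintroduces the same read/modify/write race unless the refill-and-take step is made atomic, for example via a Lua script in Redis.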
Issue Analytics
- Created 4 years ago
- Comments: 34 (4 by maintainers)
Top Results From Across the Web

Dealing with concurrency issues when caching for high-...
To prevent this, first you have to set soft and hard expiration dates. Let's say the hard expiration date is 1 day, and...

Concurrency control in distributed caching
Concurrency control deals with the issues involved in allowing multiple end users simultaneous access to shared entities, such as objects or data records.

Design | How high concurrency is handled in cache?
Recently in one of the interviews, I was asked how a cache (we are discussing Redis) handles thousands of requests (both READ and WRITE)...

Distributed Caching — The Only Guide You'll Ever Need
This write-up is an in-depth guide on distributed caching. It covers all the frequently asked questions about it, such as what is...

How to handle concurrent updates for the same record in a...
Generate some data based on the message payload; cache the data on Redis; send the data to another service. My issue is when...
Top GitHub Comments
Hey @stefanprodan, what do you think of the changes proposed by @simonhaines? I want to use this package but am holding off until the concurrency issues are sorted.
The IDistributedCache service used to implement the distributed rate-limiting counter does not provide enough concurrency guarantees to resolve this race condition, and it likely never will. An atomic increment operation is needed, such as Redis' INCR command.

We resolved this issue by refactoring the IRateLimitCounterStore and backing it with a Redis cache; see the repo here. This also reduces per-request latency by eliminating the read/update/write round-trips that are at the core of this issue (see here). For each rate limit rule, time is divided into intervals the length of the rule's period. Each request is resolved to an interval, and that interval's counter is incremented. This is a slight change in semantics from the original implementation, but it works for our use case.

This approach requires a dependency on StackExchange.Redis to access the INCR command and key expiry, and the IConnectionMultiplexer service needs to be injected at startup.
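A minimal sketch of the interval-counter approach described above, assuming StackExchange.Redis; the class and method names here are illustrative, not the actual API of the linked repo:

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class RedisIntervalCounterStore
{
    private readonly IConnectionMultiplexer _redis;

    public RedisIntervalCounterStore(IConnectionMultiplexer redis) => _redis = redis;

    public async Task<long> IncrementAsync(string counterId, TimeSpan period)
    {
        // Resolve the request to a fixed interval the length of the rule's period.
        var interval = DateTime.UtcNow.Ticks / period.Ticks;
        var key = $"rl:{counterId}:{interval}";

        var db = _redis.GetDatabase();
        var count = await db.StringIncrementAsync(key);  // atomic Redis INCR

        // On the interval's first request, expire the key when the interval ends.
        if (count == 1)
            await db.KeyExpireAsync(key, period);

        return count;
    }
}

The caller would then reject a request once the returned count exceeds the rule's limit; because INCR executes atomically on the Redis server, concurrent instances can no longer lose updates.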